id
stringlengths
10
10
title
stringlengths
7
231
abstract
stringlengths
3
2.43k
authors
stringlengths
5
21.5k
published_date
stringlengths
20
20
link
stringlengths
33
34
markdown
stringlengths
133
1.92M
2306.01886
Auditable data structures: theory and applications
Every digital process needs to consume some data in order to work properly. It is very common for applications to use some external data in their processes, getting them by sources such as external APIs. Therefore, trusting the received data becomes crucial in such scenarios, considering that if the data are not self-produced by the consumer, the trust in the external data source, or in the data that the source produces, can not always be taken for granted. The most used approach to generate trust in the external source is based on authenticated data structures, that are able to authenticate the source when queried through the generation of proofs. Such proofs are useful to assess authenticity or integrity, however, an external user could also be interested in verifying the data history and its consistency. This problem seems to be unaddressed by current literature, which proposes some approaches aimed at executing audits by internal actors with prior knowledge about the data structures. In this paper, we address the scenario of an external auditor with no data knowledge that wants to verify the data history consistency. We analyze the terminology and the current state of the art of the auditable data structures, then we will propose a general framework to support external audits from both internal and external users.
Andrea Canciani, Claudio Felicioli, Fabio Severino, Domenico Tortola
2023-06-02T19:44:20Z
http://arxiv.org/abs/2306.01886v1
# Auditable data structures: theory and applications ###### Abstract Every digital process needs to consume some data in order to work properly. It is very common for applications to use some external data in their processes, getting them by sources such as external APIs. Therefore, trusting the received data becomes crucial in such scenarios, considering that if the data are not self-produced by the consumer, the trust in the external data source, or in the data that the source produces, can not always be taken for granted. The most used approach to generate trust in the external source is based on authenticated data structures, that are able to authenticate the source when queried through the generation of proofs. Such proofs are useful to assess authenticity or integrity, however, an external user could also be interested in verifying the data history and its consistency. This problem seems to be unaddressed by current literature, which proposes some approaches aimed at executing audits by internal actors with prior knowledge about the data structures. In this paper, we address the scenario of an external auditor with no data knowledge that wants to verify the data history consistency. We analyze the terminology and the current state of the art of the auditable data structures, then we will propose a general framework to support external audits from both internal and external users. Auditable data structures, Data consistency, Consistency proof ## I Introduction Nowadays, data are the engine for every digital process. Every category of digital activity, at some point, needs to process or evaluate some kind of data to be completed. In real-world scenarios, applications often exchange data with external sources, such as cloud service providers. This highlights the need by the user of such services to trust the received data and the operations executed on that data. There are well-established lines of research focused on data integrity and authentication when the data publisher or the outsourced data manager are untrusted, proving that the data has not been tampered with. On the other hand, the current literature that addresses the auditability of the sequence of operations executed on the data is at an early stage, even if very promising. The lack of standard definitions for key concepts such as data consistency and data history is the main motivation behind this paper. Our contributions are the proposal of some terminology and of a new framework, that can be useful when describing and comparing solutions in the line of research of data structure history auditing. We are also contributing with a small survey about a selection of existing solutions (including some commercial products not yet described in scientific literature) that make use of auditable data structures to improve the overall trust not only in the data but also in the process, considered as the sequence of operations that leads to that data. The paper is structured as follows: in the rest of this Section, we will introduce some terminology and key concepts, then we will use them to describe two authenticated data structure models commonly adopted in the scientific literature when addressing data replication and data outsourcing scenarios; in Section II we will describe the scenario of external audit of history consistency, and we will propose a framework to address it; in Section III we will discuss the current state of the art for data structure history consistency auditing, focusing on a few notable applications; Section IV concludes the paper. ### _Terminology_ For the sake of clarity, here we introduce some terminology and key concepts that will be largely used in the next sections. This terminology will be useful to fully understand the objectives and the underlying ideas behind the externally auditable data structure framework that we are proposing in Section II. Some terms also apply to the context of authenticated data structure models. * Data structure **operation**: a function defined on the data structure. It can be executed, by taking arguments and returning a result. If the execution can have the side effect to modify the state of the data structure, then it is an **edit operation**, otherwise, it is a **query operation**. * Data structure **digest**: an identifier of the current state of the data structure. It is usually compact and computed by cryptographic hash composition. In order to be safely used in the context of authenticated or auditable data structures, it must be at least second-preimage resistant (i.e., given a digest derived from a certain data structure state, it should be computationally hard to find a different state identified by the same digest value). * Data structure **consistency**: the property of two time-ordered data structure instances where the newer one can be produced starting from the older one by executing a sequence of valid edit operations. * Data structure **history**: a data structure that identifies a chronologically ordered sequence of versions of the same data structure. For instance, a simple data structure history can be a list of digests. * **History consistency**: the property of data structure history where consistency holds for every pair of data structure versions. * **Proof**: a statement, paired with some supporting data. The supporting data must be sufficient to independently verify that the statement is true with a high probability, without relying on trust. * **Integrity proof**: a proof stating that a given operation executed on a data structure with a given digest will produce a given result. Inclusion proofs (for example, the ones implemented as the hash path from the root to the leaf of a Merkle tree) are a special case of integrity proofs. * **Consistency proof**: a proof stating that two time-ordered data structures with given digests are consistent. * **Audit**: an inspection of a data structure aimed to check some properties of the data structure content in present or past times, and/or some properties of the sequence of edit operations executed on it. * **External auditor**: an auditor without prior knowledge of the data structure. * **Externally auditable data structure**: a data structure whose history consistency can be verified by an external auditor without compromising data secrecy (i.e. the history must be devoid of data, for example by using one-way data indirections). ### _Related works_ The historical roots of the externally auditable data structure framework that we are introducing can be found in the line of research of the authenticated data structures; they have first been proposed as a solution to safely distribute certificate revocation lists [1]. The use of authenticated data structures, alongside with a verifiable set of operations which includes insertion and deletion, makes the entire revocation process more transparent, improving the auditability of the revocation list. In [2] the first formal model for authenticated data structures, nowadays known as the _three-party model_, was proposed, addressing the data replication scenario. In the data replication scenario, a data structure is managed by a trusted _source_ and replicated by an untrusted _directory_ to delegate distribution and also improve scalability. When querying the _directory_, the _user_ needs a way to check that responses have not been tampered with. In the three-party model, schematized in Figure 1, the _user_ knows a digest of the data structure authenticated by the _source_, and the _directory_ produces an integrity proof when responding to queries. To be utilized in the three-party model, a data structure needs to meet certain key requirements: * It must possess a method for computing a second-preimage resistant digest, which identifies the current state of the data structure. * For each query operation intended for use by the _user_, an algorithm must be defined that generates integrity proofs. Additionally, a corresponding proof verification algorithm should be defined. The three-party model has been adopted by several works as a common framework, useful to describe and compare a large repertoire of authenticated data structures [3, 4, 5, 6] In [7] the three-party model has been extended to the more general _two-party model_, addressing the data outsourcing scenario, an increasingly common scenario in the context of cloud services. In the data outsourcing scenario, the _user_ delegates the management of a data structure to an untrusted _server_ while retaining the ownership. The _user_ is not required to maintain a replica of the data structure and needs a way to check that it has not been tampered with. In the two-party model, schematized in Figure 2, the _user_ is assumed to know the most recent digest of the data structure; the _server_ produces an integrity proof when responding to queries, and a consistency proof when responding to edit operation requests, stating the updated value for the data structure digest. The key requirements for a data structure to be utilized in the two-party model are essentially the same as those for the three-party model, with an additional one: for every edit operation, an algorithm must be defined that generates consistency proofs, and a corresponding proof verification algorithm should also be defined. This indicates that the requirements for the three-party data replication model are encompassed within those of the two-party data outsourcing model. Hence, the two-party model can be seen as a generalization of the three-party model. ## IIexternally auditable data structure framework Now we describe the scenario of external audit of history consistency, a data secrecy preserving scenario that extends the two-party authenticated data structure model, for which we propose the externally auditable data structure framework, schematized in Figure 3. In this scenario, the _source_ delegates the management of a data structure to an untrusted _server_ while retaining the ownership. The _consumer_ query the untrusted _server_. The external _auditor_ verifies the history consistency of the data structure managed by the _server_. In the externally auditable data structure framework, the _server_ produces an integrity proof when responding to queries, and a consistency proof when responding to edit operation Fig. 1: Scheme of the three-party model for data replication Fig. 2: Scheme of the two-party model for data outsourcing requests, stating the updated value for the data structure digest. The _server_ also writes to the trusted _storage_ an authenticated data structure history, including a proof stating its consistency, extending it incrementally whenever an edit operation is executed. The external _auditor_ verifies history consistency thanks to the data from the trusted _storage_, including the digest list and the consistency proofs between each pair of consecutive digests. Therefore, there is no need to interact with the _server_. The key requirements for a data structure to be utilized in this framework are essentially the same as those for the two-party model, with the additional one that the consistency proofs must preserve data privacy. Hence, the externally auditable data structure framework can be seen as a generalization of the two-party data outsourcing model. ## III Notable applications In this section, an overview of the applications of auditable data structures will be discussed. Such applications include digital certificate management (release and revocation), database query verification, log file management, and private distributed ledger auditing. The mentioned applications often implement the three-party data replication model or the two-party data outsourcing model, analyzed in Section I-B. The discussion proposed in the next paragraphs will be focused on how these implementations can handle internal and external auditing and on the data exposed during the audit process. The three approaches most commonly adopted to enable auditing are: edit operations replay, data-exposing proofs, and Merkle proofs. In the edit operations replay approach, an auditor is able to verify any property of the edit operations executed on a data structure in the most straightforward way possible: the full list of edit operations is made available and re-executed on an older version of the data structure already trusted by the auditor, then the identity between the resulting version and the one under audit is checked. Proof-based approaches are more elegant and efficient, but usually it is hard not to expose any data in the process. Merkle proofs appear to be the most suitable proving scheme when addressing use cases with the requirement of data secrecy preservation since they consist only of hashes. Merkle proofs make use of Merkle trees [8] as an underlying data structure that enables the construction of both integrity (inclusion) and consistency proofs. The former is generated in several cases, such as when a log is queried to verify the presence of a certain entry or in authenticated databases in order to prove the presence of a certain record; the latter can be generated, for example, to prove that a newer version of a log is consistent to a trusted older version, meaning that all the entries from the older one are preserved, in the same order, and are a prefix for the entry list of the newer log. ### _Digital certificate management systems_ According to [9] digital certificate management system can be modeled as a three-party data replication model (see Section I-B) where the source is represented by the certification authority (CA), directories are one or more untrusted parties that serve as certificate databases to users in order to avoid bottlenecks with direct accesses to the source, and the users get their certificates and issue queries to the publishers to get certificate information. Laurie and Kasper provided an innovative way to use verified data structures to handle certificate transparency [10] and revocation [1], with the final goal to make the most difficult as possible to a CA to release a certificate for a domain without the domain owner to know about that. Certificates are managed as a log file, where every certificate is appended when released. When required, certificates have to be accompanied with an integrity proof in one or more of these log files. Logs need to be consistent, meaning that every log must contain every certificated signed by the log itself. Logs are defined as a Merkle tree structure containing signed certificates as leaf nodes, and periodically (according to a time parameter) logs produces a digest made by the Merkle root, the number of entries, a timestamp and the log digital signature. CAs are also required to be able to provide integrity proof for each entry every time an auditor issues a query for it. Such integrity proof is a classical Merkle inclusion proof made by the path from the root to the corresponding leaf node. Regarding the certificate revocation, the authors introduced the deletion operation to the mechanism used with the certification transparency. Logs still have to be able to prove the Fig. 3: Scheme of the externally auditable data structure framework consistency of the history, meaning that a certificate could be or not be present in a particular version of the log, keeping a valid history for itself between each version. This can be done with a Merkle tree where every leaf node contains pairs from a sorted list of revoked certificates, or by using sparse Merkle trees [11] to store the status on every possible certificate. ### _Log management systems_ At a high level, a log file can be modeled as a Merkle tree where every new event captured by the log is a leaf node. To guarantee that the log works properly, the appending of a new element (which equals to a new event captured by the log) is the only valid operation. With such configuration, a log can be used as an externally auditable data structure, and the consistency between any two version of the same log file can always been proved by computing Merkle consistency proofs. Auditability is a key property for a log management system in order to improve transparency and to guarantee the tamper-evidence of the log itself [12]. In [13], three verifiable data structures that can be used to successfully implement a verifiable log management system are proposed: * **Verifiable log**: a verifiable log is an append-only tree data structure designed to be used, as the name suggests, for log management. This persistent data structure starts empty and generates a new version of itself every time a new row is appended. Periodically, an authenticated digest is published, which includes the tree root hash value, the tree size, and a digital signature. The tree can be queried to check entry inclusion, return all the entries in the tree, and verify the append-only property. * **Verifiable map**: a verifiable map is a tree data structure where leaf nodes are key-value pairs. Besides the differences between stored data format, verifiable maps work similarly to the verified log and can be implemented as a sparse Merkle tree [11]. Periodically, an authenticated digest is generated from an instance of the map, including the root hash and a digital signature. A verifiable map can be queried to verify the inclusion of a certain key, to retrieve a value by key, and to enumerate all the key-value pairs. * **Log-backed map**: this data structure is a combination of a verifiable log and a verifiable map: the map is populated with key-value pairs, while the log describes an ordered list of edit operations, which can be replayed in order to verify the history consistency of the map. For this data structure, the digest production has to keep track of the log version related to the specific map version. This hybrid design address the verification of data structure consistency between two different versions of the map by adopting the edit operation replay approach (using the operations included in the log), while the log supports data structure consistency verification by adopting the Merkle proof approach. Trillian1 is a commercial log management provider that uses the aforementioned verified data structures [14] to implement a log system, with the goal of guaranteeing log immutability and to provide an auditable event history. Footnote 1: [https://transparency.dev/#wrillian](https://transparency.dev/#wrillian) [Accessed on 22 May 2023] The Trillian design allows the creation of integrity proofs for query results, and consistency proofs as explained in Section II, making it possible to externally verify the data structure consistency between consecutive versions of the log. ### _Auditable database providers_ The main role of auditability for databases is to create a tamper-evidence database, generating inclusion proofs for each record included in any query response, and also consistency proofs to verify the transaction history. LedgerDB [15] is a centralized database service that offers tamper-evidence, non-repudiation and strong auditability. In LedgerDB, auditability is guaranteed by using a data structure called _journal_, related to the database, and by implementing a transaction execution flow that ensures persistency and tamper-resistance of the processed transactions. When an internal client requests an audit, the journal is populated with client-related data fields (such as client ID, nonce and timestamp) and operator related fields (URI, type, transaction). A hash for the request is calculated according to such data and signed by the client. Then the database server adds some fields such as the request hash, the data stream location and a timestamp. Finally, the server calculates the entire data structure hash, sign it and returns to the client as a _journal receipt_. This protocol ensures that an internal client can always verify the operation validity too, using the journal sequence as an operation log. On the other hand, the transaction execution flow is divided into four phases: _verification_ (of the transaction validity), _execution_, _commit_ (journal data structure generation and processing) and _indexing_ (index generation for the committed transaction, in order to optimize data retrieval and verification). With this setup, LedgerDB supports external audits for transaction inclusion, notary time anchors and verified data removal. By requiring access to the journal and its underlying data, accordingly to the definitions provided in Section I-A, an auditor must be internal to the system. Therefore, LedgerDB does not support external auditability of history consistency of the database. QLDB (_Quantum Ledger Database_) [16] is a document based ledger database developed by Amazon that provides immutable, tamper-proof transaction log useful to track database changes and to generate a unique and complete change history. The database is backed with an append only log, meaning that every change is verifiable and auditable. Users can ask for the document digest, calculated as a SHA-256 hash of the document, and use it to verify the document correctness. It is also possible to verify the document history to check the data history. This setup allows easy internal audits, and also external audits with the drawback to share data with the auditor. ### _Auditable private distributed ledger systems_ Blockchain technology, born with Bitcoin [17], builds a P2P network whose peers do not know and do not trust each other but work together to maintain a shared state (the ledger of all the transactions between the users). A consensus algorithm defines the rules to upgrade the state, e.g. by deciding which peer will compute and communicate the next chunk of data [18]. The generalization of this technology is also known as Distributed Ledger Technology (DLT), since not all implementations use a chain of blocks (e.g., IOTA utilizes a Direct Acyclic Graph called Tangle [19]). A DLT is public and permissionless when any peer can send and receive the transactions, and join the consensus without restrictions. Private and permissioned DLTs have been developed to address different use cases, shifting the attention to smaller scale system where the peers do know but do not trust each other [20]. Public and permissionless DLTs guarantee high security, decentralization and provide transparent systems; however, they suffer of scalability problems (causing a low transaction throughput), require fees to execute transactions (which can result very expensive in some networks), and do not comply with privacy-preserving regulations. Instead, private and permissioned DLTs have higher throughput, lower infrastructural costs, and are able to limit data access. On the other hand, they are less decentralized and the security properties apply only to the internal participants [20]. Typically, for industrial use cases the trend is to adopt private and permissioned DLTs [21], with Hyperledger Fabric [22] being one of the most popular frameworks used to build them. In the case of public DLTs, where there is no distinction between internal and external, auditability derives naturally from the immutability and transparency properties [23]. Conversely, in the case of private DLTs there are known issues preventing external auditability: when a piece of data from a private DLT is made public for the first time, is not possible for somebody outside of the network to verify its consistency with the contents of the private ledger due to access limitations. More critically, even when full access to the private network is granted to an auditor, it is still possible for the participants of the private network to agree to re-write history right before the auditor would join the network, effectively forking the ledger (a history re-writing fork has been executed publicly on Ethereum in response to the DAO attack [24], a fork can be done more easily, and secretly, on a smaller and controlled network [25]). This problem is present also when a new node joins the private network. Traent2 hybrid blockchain [26] is an enterprise blockchain solution where private ledgers produced on-demand inside a private network are notarized using a public blockchain as the trusted storage, in order to obtain external auditability of the history consistency while preserving data secrecy. Footnote 2: [https://traent.com/](https://traent.com/) [Accessed on 22 May 2023] According to the externally auditable data structure framework modeled in Section II, the Traent hybrid blockchain network covers all the data management zone, with nodes in the private network acting as both source and consumer (they are coordinated to produce and query the data in collaboration with other peers), while the notarization system has the role of server. An auditor component, known as a _monitor_, is available and can be operated outside of the private network. The notarization system periodically updates the notarized data structure history of a specific private ledger by publishing on a public blockchain (_Algorand3_ by default, but the implementation is agnostic) the digest of the current version of the ledger paired with the consistency proof between it and the previously published digest. Footnote 3: [https://developer.algorand.org/](https://developer.algorand.org/) [Accessed on 22 May 2023] When a ledger history is notarized, it is possible to verify several properties, such as the integrity and authenticity of the data structure or a portion of it, its history consistency (fork detection), and the uniqueness of the history. By implementing all the elements of the externally auditable data structure framework, Traent make it possible for an auditor to verify that the only edit operations executed on a ledger are the allowed ones (i.e. that the ledger data structure has a unique consistent history) without requiring auditors to be inside the private network or to have access to any data block content, hence the ledger data remain undisclosed. In order to scale the notarization to an increasing number of managed ledgers and minimize the amount of data to be written on a public blockchain, Traent employs another data structure, called ledgers tree, to identify with a single sequence of digests the data structure history of every ledger included in a persistent collection. By writing to the public blockchain that sequence of digests, it is possible to produce a proof stating the history consistency for any specific ledger included in the collection; this kind of proof can be verified externally using just the ledgers tree sequence of digests and preserving the privacy of the data contained in the ledger. ## IV Conclusion In this paper, we proposed some formal definition for terminology and key concepts pertinent to the field of authenticated and auditable data structures. These definitions aim to facilitate the functional comparison of solutions within these lines of research, focusing on the different approaches and features rather than quantitative or performance-related aspects. We discussed the data replication and outsourcing scenarios, describing the existing models (the three-party model addressing data replication and the two-party model addressing data outsourcing) before proposing a new, more general, framework for external auditability of data structures, that extends the authenticated data structure models to address the scenario of data secrecy preserving external audit of history consistency. We surveyed the current uses of auditable data structures, focusing on a selection of applications such as digital certificates management, log management, auditable database providers, and auditable private DLT systems. As a future work, we plan to describe in details the ledgers tree data structure discussed in III-D and the properties of the correlated history consistency proofs. Also, we are planning to discuss the potential benefits of the adoption of the externally auditable data structure framework in very specific real world use cases, as the implementation of a privacy preserving digital product passport. ## V Acknowledgements Our heartfelt thanks go to Professor Laura Ricci and the dedicated researchers of the Pisa Distributed Ledger Laboratory4 for the invaluable insights and constructive feedback. We are also deeply grateful to Professor Paolo Ferragina, whose meticulous review of this manuscript has considerably enhanced the quality of our paper. We truly appreciate the time, effort, and expertise they have invested in helping us refine our research. Footnote 4: [https://sites.google.com/unipi.it/pisadtltaboratory/](https://sites.google.com/unipi.it/pisadtltaboratory/) [Accessed on 22 May 2023] ## VI Disclaimer This research is sponsored by Traent. An embodiment of the externally auditable data structure framework applied to the distributed ledger technology, as well as the notarization systems described in III-D, are patents pending by Traent.
2304.11234
Advances in Deep Concealed Scene Understanding
Concealed scene understanding (CSU) is a hot computer vision topic aiming to perceive objects exhibiting camouflage. The current boom in terms of techniques and applications warrants an up-to-date survey. This can help researchers to better understand the global CSU field, including both current achievements and remaining challenges. This paper makes four contributions: (1) For the first time, we present a comprehensive survey of deep learning techniques aimed at CSU, including a taxonomy, task-specific challenges, and ongoing developments. (2) To allow for an authoritative quantification of the state-of-the-art, we offer the largest and latest benchmark for concealed object segmentation (COS). (3) To evaluate the generalizability of deep CSU in practical scenarios, we collect the largest concealed defect segmentation dataset termed CDS2K with the hard cases from diversified industrial scenarios, on which we construct a comprehensive benchmark. (4) We discuss open problems and potential research directions for CSU. Our code and datasets are available at https://github.com/DengPingFan/CSU, which will be updated continuously to watch and summarize the advancements in this rapidly evolving field.
Deng-Ping Fan, Ge-Peng Ji, Peng Xu, Ming-Ming Cheng, Christos Sakaridis, Luc Van Gool
2023-04-21T20:01:18Z
http://arxiv.org/abs/2304.11234v2
# Advances in ###### Abstract Concealed scene understanding (CSU) is a hot computer vision topic aiming to perceive objects exhibiting camouflage. The current boom in terms of techniques and applications warrants an up-to-date survey. This can help researchers to better understand the global CSU field, including both current achievements and remaining challenges. This paper makes four contributions: (1) For the first time, we present a comprehensive survey of deep learning techniques aimed at CSU, including a taxonomy, task-specific challenges, and ongoing developments. (2) To allow for an authoritative quantification of the state-of-the-art, we offer the largest and latest benchmark for concealed object segmentation (COS). (3) To evaluate the generalizability of deep CSU in practical scenarios, we collect the largest concealed defect segmentation dataset termed CDS2K with the hard cases from diversified industrial scenarios, on which we construct a comprehensive benchmark. (4) We discuss open problems and potential research directions for CSU. Our code and datasets are available at [https://github.com/DengPingFan/CSU](https://github.com/DengPingFan/CSU), which will be updated continuously to watch and summarize the advancements in this rapidly evolving field. Concealed Scene Understanding Segmentation Detection Survey Introductory Taxonomy Deep Learning Machine Learning ## 1 Introduction Concealed scene understanding (CSU) aims to recognize objects that exhibit forms of camouflage. By its very nature, CSU clearly is a challenging problem compared with conventional object detection [1, 2]. It has numerous real-world applications, including search-and-rescue work, rare species discovery, healthcare (_e.g._, automatic diagnosis for colorectal polyps [3] and lung lesions [4]), agriculture (_e.g._, pest identification [5] and fruit ripeness assessment [6]), content creation (_e.g._, recreational art [7]), _etc_. In the past decade, both academia and industry have widely studied CSU, and various types of images with camouflaged objects have been handled with traditional computer vision and pattern recognition techniques, including hand-engineered patterns (_e.g._, motion cues [8, 9], optical flow [10, 11]), heuristic priors (_e.g._, color [12], texture [13], intensity [14, 15]) and combination techniques [16, 17, 18]. In recent years, thanks to benchmarks becoming available (_e.g._, COD10K [19, 22] and NC4K [23]) and the rapid development of deep learning, this field has made important strides forward. In 2020, Fan _et al._[19] released the first large-scale public dataset - COD10K - geared towards the advancement of perception tasks having to deal with concealment. This has also inspired other related disciplines. For instance, Mei _et al._[24, 25] proposed a distraction-aware framework for the segmentation of camouflaged objects, which can be extended to the identification of transparent materials in natural scenes [26]. In 2023, Ji _et al._[27] developed an efficient model that learns textures from object-level gradients, and its generalizability has been verified through diverse downstream applications, _e.g._, medical polyp segmentation and road crack detection. Although multiple research teams have addressed tasks concerned with concealed objects, we believe that stronger interactions between the ongoing efforts would be beneficial. Thus, we mainly review the state and recent deep learning-based advances of CSU. Meanwhile, we contribute a large-scale concealed defect segmentation dataset termed CDS2K. This dataset consists of hard cases from diverse industrial scenarios, thus providing an effective benchmark for CSU. **Previous Surveys and Scope.** To the best of our knowledge, only a few survey papers were published in the CSU community, which [28, 29] mainly review non-deep techniques. There are some benchmarks [30, 31] with narrow scopes, such as image-level segmentation, where only a few deep methods were evaluated. In this paper, we present a comprehensive survey of deep learning CSU techniques, thus widening the scope. We also Fig. 1: **Sample gallery of concealed scenarios.** (a-d) show natural animals selected from [19]. (e) depicts a concealed human in art from [20]. (f) features a synthesized “lon” by [21]. offer more extensive benchmarks with a more comprehensive comparison and with an application-oriented evaluation. **Contributions.** Our contributions are summarized as follows: (1) We represent the initial effort to examine deep learning techniques tailored towards CSU thoroughly. This includes an overview of its classification and specific obstacles, as well as an assessment of its advancements during the era of deep learning, achieved through an examination of existing datasets and techniques. (2) To provide a quantitative evaluation of the current state-of-the-art, we have created a new benchmark for Concealed Object Segmentation (COS), which is a crucial and highly successful area within CSU. This benchmark is the most up-to-date and comprehensive available. (3) To assess the applicability of deep CSU in real-world scenarios, we have restructured the CDS2K dataset - the largest dataset for concealed defect segmentation - to include challenging cases from various industrial settings. We have utilized this updated dataset to create a comprehensive benchmark for evaluation. (4) Our discussion delves into the present obstacles, available prospects, and future research areas for the CSU community. ## 2 Background ### _Task Taxonomy and Formulation_ #### 2.1.1 Image-level CSU In this section, we introduce five commonly used image-level CSU tasks, which can be formulated as a mapping function \(\mathbf{F}:\mathbf{X}\mapsto\mathbf{Y}\) that converts the input space \(\mathbf{X}\) into the target space \(\mathbf{Y}\). \(\bullet\)**Concealed Object Segmentation (COS)**[22, 27] is a class-agnostic dense prediction task, segmenting concealed regions or objects with unknown categories. As presented in Fig. 2 (a), the model \(\mathbf{F}_{\text{COS}}\colon\mathbf{X}\mapsto\mathbf{Y}\) is supervised by a binary mask \(\mathbf{Y}\) to predict a probability \(\mathbf{p}\in[0,1]\) for each pixel \(\mathbf{x}\) of image \(\mathbf{X}\), which is the confidence level that the model determines whether \(\mathbf{x}\) belongs to the concealed region. **Concealed Object Localization (COL)**[23, 32] aims to identify the most noticeable region of concealed objects, which is in line with human perception psychology [32]. This task is to learn a dense mapping \(\mathbf{F}_{\text{COL}}:\mathbf{X}\mapsto\mathbf{Y}\). The output \(\mathbf{Y}\) is a non-binary fixation map captured by an eye tracker device, as illustrated in Fig. 2 (b). Essentially, the probability prediction \(\mathbf{p}\in[0,1]\) for a pixel \(\mathbf{x}\) indicates how conspicuous its camouflage is. **Concealed Instance Ranking (CIR)**[23, 32] is to rank different instances in a concealed scene based on their detectability. The level of camouflage is used as the basis for this ranking. The objective of the CIR task is to learn a dense mapping \(\mathbf{F}_{\text{CR}}:\mathbf{X}\mapsto\mathbf{Y}\) between the input space \(\mathbf{X}\) and the camouflage ranking space \(\mathbf{Y}\), where \(\mathbf{Y}\) represents per-pixel annotations for each instance with corresponding rank levels. For example, in Fig. 2 (c), there are three toads with different camouflage levels, and their ranking labels are from [23]. To perform this task, one can replace the category ID for each instance with rank labels in instance segmentation models like Mask R-CNN [33]. **Concealed Instance Segmentation (CIS)**[34, 35] is a technique that aims to identify instances in concealed scenarios based on their semantic characteristics. Unlike general instance segmentation [36, 37], where each instance is assigned a category label, CIS recognizes the attributes of concealed objects to distinguish between different entities more effectively. To achieve this, CIS employs a mapping function \(\mathbf{F}_{\text{CIS}}:\mathbf{X}\mapsto\mathbf{Y}\), where \(\mathbf{Y}\) is a scalar set comprising various entities used to parse each pixel. This concept is illustrated in Fig. 2 (d). **Concealed Object Counting (COC)**[38] is a newly emerging topic in CSU that aims to estimate the number of instances concealed within their surroundings. As illustrated in Fig. 2 (e), the COC is to estimate center coordinates for each instance and generate their counts. Its formulation can be defined as \(\mathbf{F}_{\text{COC}}:\mathbf{X}\mapsto\mathbf{Y}\), where \(\mathbf{X}\) is the input image and \(\mathbf{Y}\) represents the output density Fig. 2: **Illustration of the representative CSU tasks.** Five of these are image-level tasks: (a) concealed object segmentation (COS), (b) concealed object localization (COL), (c) concealed instance ranking (CIR), (d) concealed instance segmentation (CIS), and (e) concealed object counting (OOC). The remaining two are video-level tasks: (f) video concealed object detection (VCOD) and (g) video concealed object segmentation (VCOS). Each task has its own corresponding annotation visualization, which is explained in detail §2.1. map that indicates the concealed instances in scenes. Overall, the image-level CSU tasks can be categorized into two groups based on their semantics: object-level (COS and COL) and instance-level (CIR, COC, and CIS). Object-level tasks focus on perceiving objects while instance-level ones aim to recognize semantics to distinguish different entities. Additionally, COC is regarded as a sparse prediction task based on its output form, whereas the others belong to dense prediction tasks. Among the literature reviewed in Table I, COS has been extensively researched while research on the other three tasks is gradually increasing. #### 2.1.2 Video-level CSU Given a video clip \(\{\mathbf{X}_{t}\}_{t=1}^{T}\) containing \(T\) concealed frames, video-level CSU can be formulated as a mapping function \(\mathbf{F}:\{\mathbf{X}_{t}\}_{t=1}^{T}\mapsto\{\mathbf{Y}_{t}\}_{t=1}^{T}\) for parsing dense spatial-temporal correspondences, where \(\mathbf{Y}_{t}\) is the label of frame \(\mathbf{X}_{t}\). **Video Concealed Object Detection (VCOD)**[39] is similar to video object detection [40]. This task aims to identify and locate concealed objects within a video by learning a spatial-temporal mapping function \(\mathbf{F}_{\text{VCOD}}:\{\mathbf{X}_{t}\}_{t=1}^{T}\mapsto\{\mathbf{Y}_{t} \}_{t=1}^{T}\) that predicts the location \(\mathbf{Y}_{t}\) of an object for each frame \(\mathbf{X}_{t}\). The location label \(\mathbf{Y}_{t}\) is provided as a bounding box (See Fig. 2 (f)) consisting of four numbers \((x,y,w,h)\) indicating the target's location. Here, \((x,y)\) represents its top-left coordinate, while \(w\) and \(h\) denote its width and height, respectively. **Video Concealed Object Segmentation (VCOS)**[41] originated from the task of camouflaged object discovery [39]. Its goal is to segment concealed objects within a video. This task usually utilizes spatial-temporal cues to drive the models to learn the mapping \(\mathbf{F}_{\text{VCOD}}:\{\mathbf{X}_{t}\}_{t=1}^{T}\mapsto\{\mathbf{Y}_{t} \}_{t=1}^{T}\) between input frames \(\mathbf{X}_{t}\) and corresponding segmentation mask labels \(\mathbf{Y}_{t}\). Fig. 2 (g) shows an example of its segmentation mask. In general, compared to image-level CSU, video-level CSU is developing relatively slowly. Because collecting and annotating video data is labor-intensive and time-consuming. However, with the establishment of the first large-scale VCOS benchmark on MoCA-Mask [41], this field has made fundamental progress while still having ample room for exploration. #### 2.1.3 Task Relationship Among image-level CSU tasks, the CIR task requires the highest level of understanding as it may not only involve four subtasks, _e.g._, segmenting pixel-level regions (_i.e._, COS), counting (_i.e._, COC), or distinguishing different instances (_i.e._, CIS), but also ranking these instances according to their fixation probabilities (_i.e._, COL) under different difficulty levels. Additionally, regarding two video-level tasks, VCOS is a downstream task for VCOD since the segmentation task requires the model to provide pixel-level classification probabilities. ### _Related Topics_ Next, we will briefly introduce salient object detection (SOD), which, like COS, requires extracting properties of target objects, but one focuses on saliency while the other on the concealed attribute. **Image-level SOD** aims to identify the most attractive objects in an image and extract their pixel-accurate silhouettes [42]. Various network architectures have been explored in deep SOD models, _e.g._, multi-layer perceptron [43, 44, 45], fully convolutional [47, 48, 49, 50, 51], capsule-based [52, 53, 54], transformer-based [55], and hybrid [56, 57] networks. Meanwhile, different learning strategies are also studied in SOD models, including data-efficient methods (_e.g._, weakly-supervised with categorical tags [58, 59, 60, 61, 62] and unsupervised with pseudo masks [63, 64, 65]), multi-task paradigms (_e.g._, object subitizing [66, 67], fixation prediction [68, 69], semantic segmentation [70, 71], edge detection [72, 73, 74, 75, 76], image captioning [77]), instance-level paradigms [78, 79, 80, 81], _etc_. To learn more about this field comprehensively, readers can refer to popular surveys or representative studies on visual attention [82], saliency prediction [83], co-saliency detection [84, 85, 86], RGB SOD [87, 88, 89], RGB-D (depth) SOD [90, 91], RGB-T (thermal) SOD [92, 93], and light-field SOD [94]. **Video-level SOD.** The early development of video salient object detection (VSOD) originated from introducing attention mechanisms in video object segmentation (VOS) tasks. At that stage, the task scenes were relatively simple, containing only one object moving in the video. As moving objects tend to attract visual attention, VOS and VSOD were equivalent tasks. For instance, Wang _et al._[95] used a fully convolutional neural network to address the VSOD task. With the development of VOS techniques, researchers introduced more complex scenes (_e.g._, with complex backgrounds, object movements, and two objects), but the two tasks remained equivalent. Thus, later works have exploited semantic-level spatial-temporal features [96, 97, 98, 99], recurrent neural networks [100, 101], or offline motion cues such as optical flow [100, 102, 103, 104]. However, with the introduction of more challenging video scenes (containing three or more objects, simultaneous camera, and object movements), VOS and VSOD were no longer equivalent. Yet, researchers continued to approach the two tasks as equivalent, ignoring the issue of visual attention allocation in multi-object movement in video scenes, which seriously hindered the development of the field. To address this issue, in 2019, Fan _et al._ introduced eye trackers to mark the changes in visual attention in multi-object movement scenarios, for the first time posing the scientific problem of _attention shift_[105] in VSOD asks, and constructed the first large-scale VSOD benchmark - DAVSOD1, as well as the baseline model SSAV, which propelled VSOD into a new stage of development. Footnote 1: [https://github.com/DengPingFan/DAVSOD](https://github.com/DengPingFan/DAVSOD) **Remarks.** COS and SOD are distinct tasks, but they can mutually benefit via the CamDiff approach [106]. This has been demonstrated through adversarial learning [107], leading to joint research efforts such as the recently proposed dichotomous image segmentation [108]. In SS6, we will explore potential directions for future research in these areas. ## 3 Deep CSU Models This section systematically reviews deep CSU approaches based on task definitions and data types. We have also created a GitHub base2 as a comprehensive collection to provide the latest information in this field. Footnote 2: [https://github.com/GevelIs/ISNet-V2/blob/main/AWSOME_COD_LIST.md](https://github.com/GevelIs/ISNet-V2/blob/main/AWSOME_COD_LIST.md) ### _Image-level CSU Models_ We review the existing four image-level CSU tasks: concealed object segmentation (COS), concealed object localization (COL), concealed instance ranking (CIR), and concealed instance segmentation (CIS). Table I summarizes the key features of these reviewed approaches. #### 3.1.1 Concealed Object Segmentation This section discusses previous solutions for camouflage object segmentation (COS) from two perspectives: network architecture and learning paradigm. **Network Architecture.** Generally, fully convolutional networks (FCNs [149]) are the standard solution for image segmentation as they can receive input of a flexible size and undergo a single feed-forward propagation. As expected, FCN-shaped frameworks dominate the primary solutions for COS, which fall into three categories: _a) Multi-stream framework_, shown in Fig. 3 (a), contains multiple input streams to learn multi-source representations explicitly. MirrorNet [109] was the first attempt to add an extra data stream as a bio-inspired attack, which can break the camouflaged state. Several recent works have adopted a multi-stream approach to improve their results, such as supplying pseudo-depth generation [148], pseudo-edge uncertainty [113], adversarial learning paradigm [107], frequency enhancement stream [134], multi-scale [133] or multi-view [140] inputs, and multiple backbones [146]. Unlike other supervised settings, CRNet [141] is the only weakly-supervised framework that uses scribble labels as supervision. This approach helps to alleviate overfitting problems on limited annotated data. _b) Bottom-up and top-down framework_, as shown in Fig. 3 (b), uses deeper features to enhance shallower ones gradually in a \begin{table} \begin{tabular}{|c|c||c|c|c c c c|c|} \hline \# & **Model** & **Pubk** & **Core Component** & **Arc.** & **MC.** & **SL.** & **TL\({}_{\blacksquare}\)** & **Code** \\ \hline \hline 1 & ANet [20] & CVU\({}_{19}\) & classification \& segmentation streams & BF & ✓ & ✗ & • & Link \\ \hline 2 & SNNet [119] & CVPR\({}_{23}\) & search and identification modules & BTF & - & ✗ & • & Link \\ 3 & MirrorNet [109] & Access\({}_{11}\) & fine input and mirror data streams & MSF & - & ✗ & • & Link \\ 4 & DEC [130] & anXive\({}_{21}\) & depth contribution exploration, confidence-aware loss & BF & ✓ & ✗ & • & Link \\ 5 & DEC2Net [112] & TE\({}_{21}\) & dual-branch, dual-guidance \& cross-refine & BTC & - & ✗ & • & Link \\ 6 & CZNet [112] & UAC\({}_{21}\) & context-aware cross-level fusion & BTF & - & ✗ & • & Link \\ 7 & UR-COD [113] & MMAsa\({}_{21}\) & uncertainty of pseudo-edge labels & MSF & - & ✗ & • & Link \\ 8 & TINE [114] & AAAA\({}_{21}\) & texture perception \& feature interaction guidance & BF & ✓ & ✗ & • & N/A \\ 9 & JSCOD [107] & CVPR\({}_{21}\) & uncertainty-aware adversarial learning & MSF & - & ✗ & • & Link \\ 10 & LSR [23] & CVPR\({}_{21}\) & localize, segment, \& rank objects simultaneously & BF & ✓ & ✗ & Link \\ 11 & MGL [15] & CVPR\({}_{21}\) & mutual graph learning & BF & ✓ & ✗ & • & Link \\ 12 & PFNet [24] & CVPR\({}_{21}\) & distraction mining, positioning and focus modules & BTF & - & ✗ & • & Link \\ 13 & UGTR [116] & ICCV\({}_{21}\) & uncertainty-guided transformer reasoning & BF & ✓ & ✗ & • & Link \\ 14 & BAS [117] & anXive\({}_{21}\) & residual refinement module, hybrid loss & BTF & - & ✗ & • & Link \\ 15 & OSFormer [34] & ECCV\({}_{22}\) & location-sensing transformer, coarse-to-fine fusion & BF & ✓ & ✗ & • & Link \\ 16 & CFL [135] & TIP\({}_{22}\) & camouflage fusion learning & BF & ✓ & ✗ & • & Link \\ 17 & NCNHT [118] & CVU\({}_{12}\) & neighbor connection, hierarchical information transfer & BTF & - & ✗ & • & N/A \\ 18 & DTPC-Net [119] & TMMZ\({}_{32}\) & local bilinear \& spatial coherence organization & BTF & - & ✗ & • & Link \\ 19 & CZNet-VV [120] & TCSV\({}_{21}\) & context-aware cross-level fusion & BTF & - & ✗ & • & Link \\ 20 & CubeNet [121] & PR\({}_{22}\) & encoder-decoder framework with X-connection & BF & ✓ & ✗ & • & Link \\ 21 & ERRCNet [122] & PR\({}_{22}\) & selective edge aggregation, reversible \(\epsilon\)-calibration & BF & ✓ & ✗ & • & Link \\ 22 & TPNet [123] & TVX\({}_{22}\) & transformer-induced progressive refinement & BTF & - & ✗ & • & Link \\ 23 & ANAS-Net [124] & UCNN\({}_{22}\) & attention-based neighbor selective aggregation & BF & ✓ & ✗ & • & N/A \\ 24 & BSANet [125] & AAAA\({}_{21}\) & boundary-guided segmentation & BF & ✓ & ✗ & • & Link \\ 25 & FAPNet [126] & ATP\({}_{22}\) & boundary guidance, feature aggregation \& propagation \& propagation & BF & ✓ & ✗ & Link \\ 26 & FinnNet [127] & TIP\({}_{22}\) & boundary-and-feature cues (extension of [125]) & BTF & ✓ & ✗ & • & N/A \\ 27 & PNet [128] & ICME\({}_{22}\) & cascaded deconcathode module, label weighting & BTF & - & ✗ & • & N/A \\ 28 & OGENet [129] & WACV\({}_{22}\) & online confidence estimation, dynamic uncertainty loss & BF & ✓ & ✗ & • & Link \\ 29 & BCNet [130] & UAC\({}_{21}\) & edge-guidance feature \& context aggregation modules & BTF & ✓ & ✗ & • & Link \\ 30 & PResNet [131] & M\({}_{22}\) & bidirectional building interaction, predator learning & BF & ✓ & ✗ & • & Link \\ 31 & DTNet [132] & ICPR\({}_{22}\) & dual-task interactive transformer & BF & ✓ & ✗ & • & Link \\ 32 & ZoomNet [133] & CVPR\({}_{22}\) & scale integration \& hierarchical mixed-scale units & MSF & - & ✗ & • & Link \\ 33 & FDNet [134] & CVPR\({}_{22}\) & frequency enhancement \& \(k\) high-order relation modules & MSF & - & ✗ & • & N/A \\ 34 & SegMark [135] & CVPR\({}_{22}\) & segment, magnify, reiterate in a iterative manner & BTF & - & ✗ & • & Link \\ 35 & SNetV2 [22] & TPAMI\({}_{22}\) & neighbor connection decoder, group-reversal attention & BTF & - & ✗ & • & Link \\ 36 & MGL-V2 [136] & TIP\({}_{23}\) & multi-source attention recovery (extension of [135]) & BFD & ✓ & ✗ & • & Link \\ 37 & FFNNet [137] & TMC\({}_{23}\) & frequency-aware context aggregation \& attention & BTF & - & ✗ & • & N/A \\ 38 & TANet [138] & TCSV\({}_{21}\) & texture-aware refinement, boundary-consistency loss & BTF & ✓ & ✗ & • & N/A \\ 39 & LSR+ [32] & TCSVT\({}_{23}\) & triple task learning (extension of [231]) & BF & ✓ & ✗ & • & Link \\ 40 & SAFNet [139] & TCSVT\({}_{23}\) & triple-stage refinement (search-amplify, recognize) & BTF & ✗ & • & Link \\ 41 & MFFN [140] & WACV\({}_{23}\) & co-attention of multi-view, channel fusion unit & MSF & - & ✗ & • & Link \\ 42 & CRNet [141] & AAAI\({}_{23}\) & feature-guided and consistency losses & MSF & - & ✗ & • & Link \\ 43 & HueNet [142] & AAAI\({}_{23}\) & high-resolution iterative feedback & BTF & - & ✗ & • & Link \\ 44 & DGNet [27] & MIR\({}_{23}\) & gradient-based texture learning, efficient network & BFD & ✓ & ✗ & • & Link \\ 45 & FSFNet [143] & CVPR\({}_{23}\) & feature shrinkage pyramid with transformer & BTF & - & ✗ & • & Link \\ 46 & FEEDER [144] & CVPR\({}_{23}\) & deep wavelet-like decomposition & BTF & - & ✗ & • & Link \\ 47 & DCNet [145] & CVPR\({}_{23}\) & pixel-level decoupling, instance-level supression & BF & ✓ & ✗ & • & Link \\ 48 & IOCFormer [33] & CVPR\({}_{23}\) & unify density- and regression-based strategies & BF & ✓ & ✗ & • & Link \\ 49 & FPNet+ [25] & SCI\({}_{23}\) & extension of PNet [24] & BTF & - & ✗ & • & Link \\ 50 & DQuet [146] & anX\({}_{23}\) & cross-modal detail querying, relation-based querying & MSF & - & ✗ & • & Link \\ 51 & Camouflomer [147] & anX\({}_{23}\) & masked separable attention & BTF & - & ✗ & • & Link \\ 52 & PopNet [148] & arXiv\({}_{23}\) & source-free depth, object pop-out prior & MSF & - & ✗ & • & Link \\ \hline \end{tabular} \end{table} TABLE I: **Essential characteristics of reviewed image-based methods. This summary outlines the single feed-forward pass. For example, C2FNet [112] adopts this design to improve concealed features from coarse-to-fine levels. In addition, SegMaR [135] employs an iterative refinement network with a sub-network based on this strategy. Furthermore, other studies [119, 22, 24, 25, 111, 117, 118, 119, 120, 123, 124, 128, 137, 138, 139, 142, 143, 144, 147] utilized a deeply-supervised strategy [150, 151] on various intermediate feature hierarchies using this framework. This practical, also utilized by the feature pyramid network [152], combines more comprehensive multi-context features through dense top-down and bottom-up propagation and introduces additional supervision signals before final prediction to provide more dependable guidance for deeper layers. _c) Branched framework_, shown in Fig. 3 (c), is a single-input-multiple-output architecture, consisting of both segmentation and auxiliary task branches. It should be noted that the segmentation part of this branched framework may have some overlap with previous frameworks, such as single-stream [20] and bottom-up & top-down [23, 27, 107, 110, 114, 115, 116, 121, 122, 124, 125, 126, 127, 129, 130, 131, 132, 136] frameworks. For instance, ERRNet [122] and FAPNet [126] are typical examples of jointly learning concealed objects and their boundaries. Since these branched frameworks are closely related to the multi-task learning paradigm, we will provide further details. **Learning Paradigm.** We discuss two common types of learning paradigms for COS tasks: single-task and multi-task. _a) Single-task learning_ is the most commonly used paradigm in COS, which involves only a segmentation task for concealed targets. Based on this paradigm, most current works [19, 22, 120] focus on developing attention modules to identify target regions. _b) Multi-task learning_ introduces an auxiliary task to coordinate or complement the segmentation task, leading to robust COS learning. These multi-task frameworks can be implemented by conducting confidence estimation [107, 116, 129, 131], localization/ranking [23, 32], category prediction [20] tasks and learning depth [110, 148], boundary [115, 121, 122, 125, 126, 130], and texture [27, 114] cues of camouflaged object. #### 3.1.2 Concealed Instance Ranking There has been limited research conducted on this topic. Lv _et al._[23] observed for the first time existing COS approaches could not quantify the difficulty level of camouflage. Regarding this issue, they used an eye tracker to create a new dataset, called CAM-LDR [32], that contains instance segmentation masks, fixation labels, and ranking labels. They also proposed two unified frameworks, LSR [23] and its extension LSR+ [32], to simultaneously learn triple tasks, _i.e._, localizing, segmenting, and ranking camouflaged objects. The insight behind it is that discriminative localization regions could guide the segmentation of the full scope of camouflaged objects, and then, the detectability of different camouflaged objects could be inferred by the ranking task. #### 3.1.3 Concealed Instance Segmentation This task advances the COS task from the regional to the instance level, a relatively new field compared with the COS. Then, Le _et al._[35] build a new CIS benchmark, CAMO++, via extending on previous CAMO [20] dataset. They also proposed a camouflage fusion learning strategy to fine-tune existing instance segmentation models (_e.g._, Mask R-CNN [33]) by learning image contexts. Based on instance benchmarks as in COD10K [19] and NC4K [23], the first one-stage transformer framework, OSFormer [34], was proposed for this field by introducing two core designs: location-sensing transformer and coarse-to-fine fusion. Recently, Luo _et al._[145] proposed to segment camouflaged instances with two designs: a pixel-level camouflage decoupling module and an instance-level camouflage suppression module. #### 3.1.4 Concealed Object Counting Sun _et al._[38] recently introduced a new challenge for the community called indiscernible object counting (IOC), which involves counting objects that are difficult to distinguish from their surroundings. They created IOCfish5K, a large-scale dataset containing high-resolution images of underwater scenes with many indiscernible objects (focus on fish) and dense annotations to address the lack of appropriate datasets for this challenge. They also proposed a baseline model called IOCFormer by integrating density-based and regression-based methods in a unified framework. Based on the above summaries, the COS task is experiencing a rapid development period, resulting in numerous contemporary publications each year. However, very few proposed solutions are still proposed for the COL, CIR, and CIS tasks. This suggests that these fields remain under-explored and offer significant room for further research. Notably, many previous studies are available as references (such as saliency prediction [83], salient object subitizing [67], and salient instance segmentation [81]), providing a Fig. 3: **Network architectures for COS at a glance.** We present four types of frameworks from left to right: (a) multi-stream framework, (b) bottom-up/top-down framework and its variant with deep supervision (optional), and (c) branched framework. See §3.1.1 for more details. solid foundation for understanding these tasks from a camouflaged perspective. ### _Video-level CSU Models_ There are two branches for the video-level CSU task, including detecting and segmenting camouflaged objects from videos. Refer Table II for details. #### 3.2.1 Video Concealed Object Detection Most works [155, 157] formulated this topic as the degradation problem of the segmentation task since the scarcity of pixel-wise annotations. They, as usual, trained on segmentation datasets (_e.g._, DAVIS [160], FBMS [161]) but evaluated the generalizability performance on video camouflaged object detection dataset, MoCA [39]. These methods consistently opt to extract offline optical flow as motion guidance for the segmentation task, but diversifying over the learning strategies, such as fully-supervised learning with real [159, 39, 156] or synthetic [154, 157] data and self-supervised learning [155, 158]. #### 3.2.2 Video Concealed Object Segmentation Xie _et al._[153] proposed the first work on camouflaged object discovery in videos. They used a pixel-trajectory recurrent neural network to cluster foreground motion for segmentation. However, this work is limited to a small-scale dataset, CAD [162]. Recently, based upon localization-level dataset MoCA [39] with bounding box labels, Cheng _et al._[41] extended this field by creating a large-scale VCOS benchmark MoCA-Mask with pixel-level masks. They also introduced a two-stage baseline SLTNet to implicitly utilize motion information. From what we have reviewed above, the current approaches for VCOS tasks are still in a nascent state of development. Several concurrent works in well-established video segmentation fields (_e.g._, self-supervised correspondence learning [163, 164, 165, 166, 167], unified framework for different motion-based tasks [168, 169, 170]) points the way to further explore. Besides, considering high-level semantic understanding has a research gap that merits being supplied, such as semantic segmentation and instance segmentation in the camouflaged scenes. ## 4 CSU Datasets In recent years, various datasets have been collected for both image- and video-level CSU tasks. In Table III, we summarize the features of the representative datasets. ### _Image-level Datasets_ \(\bullet\)**CAMO-COCO**[20] is tailor-made for COS tasks with 2,500 image samples across eight categories, divided into two sub-datasets, _i.e._, CAMO with camouflaged objects and MS-COCO with non-camouflaged objects. Both CAMO and MS-COCO contain 1,250 images with a split of 1,000 for training and 250 for testing. **NC4K**[23] is currently the largest testing set for evaluating COS models. NC4K consists of 4,121 camouflaged images sourced from the Internet and can be divided into two primary categories: natural scenes and artificial scenes. In addition to the images, this dataset also provides localization labels that include both object-level segmentation and instance-level masks, making it a valuable resource for researchers working in this field. In a recent study by Lv _et al._[23], an eye tracker was utilized to collect fixation information for each image. As a result, a CAM-FR dataset of 2,280 images was created, with 2,000 images used for training and 280 for testing. The dataset was annotated with three types of labels: localization, ranking, and instance labels. **CAMO++**[35] is a newly released dataset that contains 5,500 samples, all of which have undergone hierarchical pixel-wise annotation. The dataset is divided into two parts: camouflaged samples (1,700 images for training and 1,000 for testing) and non-camouflaged samples (1,800 images for training and 1,000 for testing). **COD10K**[19, 22] is currently the largest-scale dataset, featuring a wide range of camouflaged scenes. The dataset contains 10,000 images from multiple open-access photography websites, covering ten super-classes and 78 sub-classes. Out of these images, 5,066 are camouflaged, 1,934 are non-camouflaged pictures and 3,000 are background images. The camouflaged subset of COD10K is annotated using different labels such as category labels, bounding boxes, object-level masks, and instance-level masks, providing a diverse set of annotations. **CAM-LDR**[32] comprises of 4,040 training and 2,026 testing samples. These samples were selected from commonly-used hybrid training datasets (_i.e._, CAMO with 1,000 training samples and COD10K with 3,040 training samples), along with the testing dataset (_i.e._, COD10K with 2,026 testing samples). CAM-LDR is an extension of NC4K [23] that includes four types of annotations: localization labels, ranking labels, object-level segmentation masks, and instance-level segmentation masks. The ranking labels are categorized into six difficulty levels - background, easy, medium1, medium2, medium3, and hard. \begin{table} \begin{tabular}{c||c||c|c||c|c||c} \hline \hline \# & **Model** & **Pubs** & **Core Components** & **O.K.** & **SLs.** & **TL.** & **Project** \\ \hline \hline 1 & PMC [153] & CVPR\({}_{19}\) & pixel trajectory recurrent neural network and clustering & ✓ & ✗ & ✗ & **N/A** \\ 2 & VRS [39] & ACCV\({}_{29}\) & video registration and motion segmentation network & ✓ & ✗ & ✗ & Link \\ 3 & SIMO [154] & BMVC\({}_{21}\) & dual-head architecture, synthetic dataset & ✓ & ✗ & ✗ & Link \\ \hline 4 & MG [155] & ICCV\({}_{21}\) & self-supervised motion grouping & ✓ & ✗ & ✗ & Link \\ 5 & RCF [156] & arXiv\({}_{22}\) & rotation-compensated flow, camera motion estimation & ✓ & ✗ & ✗ & NA \\ \hline 6 & OCLR [157] & NeurIPS\({}_{22}\) & object-centric layered representation, synthetic dataset & ✓ & ✗ & ✗ & NA/A \\ 7 & GPS [158] & TPAMI\({}_{22}\) & expectation-maximization method, motion augmentation & ✓ & ✗ & ✗ & Link \\ 8 & QSDI [159] & CVPR\({}_{22}\) & quantifying the static and dynamic biases & ✓ & ✗ & ✗ & Link \\ \hline 9 & SLTNet [41] & CVPR\({}_{22}\) & implicit motion handling, short- and long-term modules & - & ✗ & **✗** & Link \\ \hline \end{tabular} \end{table} TABLE II: **Essential characteristics of reviewed video-level methods. Optical flow (Q.F.): whether pre-generating optical flow map. Supervision level (S.L.): fully-supervision with real data (✗) or synthetic data (✗), and self-supervision (✗). Task level (TL.L.): video camouflaged object detection (\(\land\)) and segmentation (✗). For further details, refer to §3.2.2.** **S-COD**[141] is the first dataset designed specifically for the COS task under the weakly-supervised setting. The dataset includes 4,040 training samples, with 3,040 samples selected from COD10K and 1,000 from CAMO. These samples were re-labeled using scribble annotations that provide a rough outline of the primary structure based on first impressions, without pixel-wise ground-truth information. **IOCfish5K**[38] is a distinct dataset that focuses on counting instances of fish in camouflaged scenes. This COC dataset comprises 5,637 high-resolution images collected from YouTube, with 659,024 center points annotated. The dataset is divided into three subsets, with 3,137 images allocated for training, 500 for validation, and 2,000 for testing. **Remarks.** In summary, three datasets (CAMO, COD10K, and NC4K) are commonly used as benchmarks to evaluate camouflage object segmentation (COS) approaches, with the experimental protocols typically described in SS5.2. For the concealed instance segmentation (CIS) task, two datasets (COD10K and NC4K) containing instance-level segmentation masks can be utilized. The CAM-LDR dataset, which provides fixation information and three types of annotations collected from a physical eye tracker device, is suitable for various brain-inspired explorations in computer vision. Additionally, there are two new datasets from CSU: S-COD, designed for weakly-supervised COS, and IOCfish5K, focused on counting objects within camouflaged scenes. ### _Video-level Datasets_ \(\bullet\)**CAD**[162] is a small dataset comprising nine short video clips and 836 frames. The annotation strategy used in this dataset is sparse, with camouflaged objects being annotated every five frames. As a result, there are 191 segmentation masks available in the dataset. **MoCA**[39] is a comprehensive video database from YouTube that aims to detect moving camouflaged animals. It consists of 141 video clips featuring 67 categories and comprises 37,250 high-resolution frames with corresponding bounding box labels for 7,617 instances. **MoCA-Mask**[41], an extension of MoCA dataset [39], provides human-annotated segmentation masks every five frames based on MoCA dataset [39]. MoCA-Mask is divided into two parts: a training set consisting of 71 short clips (19,313 frames with 3,946 segmentation masks) and an evaluation set containing 16 short clips (3,626 frames with 745 segmentation masks). To label those unlabeled frames, pseudo-segmentation labels were synthesized using a bidirectional optical flow-based strategy [171]. **Remarks.** The MoCA dataset is currently the largest collection of videos with concealed objects, while it only offers detection labels. As a result, researchers in the community [155, 157] typically assess the performance of well-trained segmentation models by converting segmentation masks into detection bounding boxes. Recently, there has been a shift towards video segmentation in concealed scenes with the introduction of MoCA-Mask. Despite these advancements, the quantity and quality of data annotations remain insufficient for constructing a reliable video model that can effectively handle complex concealed scenarios. ## 5 CSU Benchmarks In this investigation, our benchmarking is built on COS tasks since this topic is relatively well-established and offers a variety of competing approaches. The following sections will detail the evaluation metrics (SS5.1), benchmarking protocols (SS5.2), quantitative analyses (SS5.3, SS5.4, SS5.5), and qualitative comparisons (SS5.6). ### _Evaluation Metrics_ As suggested in [22], there are five commonly used metrics3 available for COS evaluation. We compare a prediction mask \(\mathbf{P}\) with its corresponding ground-truth mask \(\mathbf{G}\) at the same image resolution. Footnote 3: [https://github.com/DengPingFan/CSU/tree/main/cos_eval_toolbox](https://github.com/DengPingFan/CSU/tree/main/cos_eval_toolbox) **MAE** (mean absolute error, \(M\)) is a conventional pixel-wise measure, which is defined as: \[M=\frac{1}{W\times H}\sum_{x}^{W}\sum_{y}^{H}|\mathbf{P}(x,y)-\mathbf{G}(x,y)|, \tag{1}\] where \(W\) and \(H\) are the width and height of \(\mathbf{G}\), and \((x,y)\) are pixel coordinates in \(\mathbf{G}\). **F-measure** could be defined as: \[F_{\beta}=\frac{(1+\beta^{2})\mathrm{Precision}\times\mathrm{Recall}}{\beta^{2 }\mathrm{Precision}+\mathrm{Recall}}, \tag{2}\] where \(\beta^{2}=0.3\) is used to emphasize precision value over recall value, as recommended in [89]. Other two metrics are derived from: \[\mathrm{Precision}=\frac{|\mathbf{P}(T)\cap\mathbf{G}|}{|\mathbf{P}(T)|}, \ \mathrm{Recall}=\frac{|\mathbf{P}(T)\cap\mathbf{G}|}{|\mathbf{G}|}, \tag{3}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline **\#** & **Dataset** & **Year** & **Pub.** & **Train** & **Test** & **N.Cam.** & **Cls.** & **B.Box** & **Obj.** & **Ins.** & **Fix.** & **Rank** & **Scr. & **Cov.** & **Weight** \\ \hline \hline 1 & CAD [162] & 2016 & ECCV & 0 & 836 & Video & - & ✓ & - & ✓ & - & - & - & - & Link \\ \hline 2 & CAMOCO [20] & 2019 & CVIU & 2000 & 500 & Image: & ✓ & - & ✓ & - & - & - & - & - & Link \\ 3 & MoCA [39] & 2020 & ACCV & 0 & 37,250 & Video & - & ✓ & ✓ & - & - & - & - & Link \\ 4 & NC4K [23] & 2021 & CVPR & 0 & 4,121 & Image & - & ✓ & ✓ & ✓ & - & - & - & - & Link \\ 5 & MoCA-Mask [41] & 2022 & CVPR & 19,133 & 3,626 & Video & - & ✓ & ✓ & - & - & - & - & Link \\ 6 & CAMO++ [25] & 2022 & TIP & 3,500 & 2,000 & Image & ✓ & - & ✓ & ✓ & - & - & - & Link \\ 7 & COD10K [19, 22] & 2022 & TPAMI & 6,000 & 4,000 & Image & ✓ & ✓ & ✓ & ✓ & - & - & - & Link \\ 8 & CAMLDR [23] & 2023 & TCSVT & 4,040 & 2,026 & Image: & - & - & ✓ & ✓ & ✓ & - & - & Link \\ 9 & S-COD [14] & 2023 & AAAI & 4,040 & 0 & Image & - & - & - & - & - & ✓ & - & Link \\ 10 & Camfish5K [38] & 2023 & CVPR & 3,637 & 2,000 & Image & - & ✓ & - & - & - & - & - & ✓ & Link \\ \hline \end{tabular} \end{table} TABLE III: **Essential characteristics for CSU datasets. Train(Test: number of samples for training/testing (_e.g._, images for image dataset or frames for video dataset) Task: data type of dataset. N.Cam.: whether collecting non-camouflaged samples. Cls.: whether providing classification labels. B.Box: whether providing bounding box labels for the detection task. Obj./Ins.: whether providing object- or instance-level segmentation masks for segmentation tasks. Rank: whether providing ranking labels for instances. Scr.: whether providing case labels in scribble form. Cou.: whether providing dense object counting labels. See §4.1 and §4.2 for more descriptions.** where \(\mathbf{P}(T)\) is a binary mask obtained by thresholding the non-binary predicted map \(\mathbf{P}\) with a threshold value \(T\in[0,255]\). The symbol \(|\cdot|\) calculates the total area of the mask inside the map. Therefore, it is possible to convert a non-binary prediction mask into a series of binary masks with threshold values ranging from 0 to 255. By iterating over all thresholds, three metrics are obtained with maximum (\(F_{\beta}^{\text{max}}\)), mean (\(F_{\beta}^{\text{min}}\)), and adaptive (\(F_{\beta}^{ad}\)) values of F-measure. **Enhanced-alignment measure (\(E_{\phi}\))**[172, 173] is a recently proposed binary foreground evaluation metric, which considers the both local and global similarity between two binary maps. Its formulation is defined as: \[E_{\phi}=\frac{1}{W\times H}\sum_{x}^{W}\sum_{y}^{H}\phi\left[\mathbf{P}(x,y), \mathbf{G}(x,y)\right], \tag{4}\] where \(\phi\) is the enhanced-alignment matrix. Similar to \(F_{\beta}\), this metric also includes three values computed over all the thresholds, _i.e._, maximum (\(E_{\phi}^{\text{max}}\)), mean (\(E_{\phi}^{\text{max}}\)), and adaptive (\(E_{\phi}^{ad}\)) values. **Structure measure (\(S_{\alpha}\))**[181, 182] is used to measure the structural similarity between a non-binary prediction map and a ground-truth mask: \[S_{\alpha}=(1-\alpha)S_{\omega}(\mathbf{P},\mathbf{G})+\alpha S_{r}(\mathbf{P},\mathbf{G}), \tag{5}\] where \(\alpha\) balances the object-aware similarity \(S_{\omega}\) and region-aware similarity \(S_{r}\). As in the original paper, we use the default setting for \(\alpha=0.5\). ### _Experimental Protocols_ Suggested by Fan _et al_. [22], all competing approaches in the benchmarking were trained on a hybrid dataset comprising the training portions of COD10K [19] and CAMO [20] datasets, totaling 4,040 samples. The models were then evaluated on three popular used benchmarks: COD10K's testing portion with 2,026 samples [19], CAMO with 250 samples [20], and NC4K with 4,121 samples [23]. ### _Quantitative Analysis on CAMO_ As reported in Table IV, we evaluated 36 deep-based approaches on the CAMO testing dataset [20] using various metrics. These models were classified into two groups based on the backbones they used: 32 convolutional-based and four transformer-based. As \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline **Model** & **Pub/Year** & **Bickhouse** & **Input** & **Pars.** & **MACs** & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\text{max}}\) & \(M\downarrow\) & \(E_{\phi}^{\text{max}}\uparrow\) & \(E_{\phi}^{\text{max}}\uparrow\) & \(E_{\phi}^{\text{max}}\uparrow\) & \(E_{\phi}^{\text{max}}\uparrow\) & \(E_{\phi}^{\text{max}}\uparrow\) & \(E_{\phi}^{\text{max}}\uparrow\) & \(E_{\phi}^{\text{max}}\uparrow\) \\ \hline \hline \multicolumn{10}{|c|}{**Construction-based Backbone**} \\ \hline \hline \multicolumn{10}{|c|}{**S1Net [19]**} & CVPR\({}_{20}\) & ResNet-50 & 352\({}^{2}\) & 48.95M & 19.42G & 0.745 & 0.644 & 0.092 & 0.825 & 0.804 & 0.829 & 0.712 & 0.702 & 0.708 \\ D2CNet [111] & TE\({}_{21}\) & RE2Net-50 & 320\({}^{2}\) & - & 0.774 & 0.683 & 0.087 & 0.844 & 0.818 & 0.838 & 0.747 & 0.735 & 0.743 \\ CZNet [112] & UCLA\({}_{21}\) & RE2Net-50 & 352\({}^{2}\) & 28.41M & 13.12G & 0.796 & 0.719 & 0.080 & 0.865 & 0.854 & 0.864 & 0.764 & 0.762 & 0.771 \\ TINet [114] & AAAI\({}_{21}\) & ResNet-50 & 352\({}^{2}\) & 28.56M & 8.586 & 0.781 & 0.678 & 0.087 & 0.847 & 0.836 & 0.848 & 0.729 & 0.728 & 0.745 \\ JSCOD [107] & CVPR\({}_{21}\) & ResNet-50 & 352\({}^{2}\) & 121.63M & 25.20G & 0.800 & 0.728 & 0.073 & 0.872 & 0.859 & 0.873 & 0.779 & 0.772 & 0.779 \\ LSR [23] & CVPR\({}_{21}\) & ResNet-50 & 352\({}^{2}\) & 57.90M & 25.21G & 0.787 & 0.696 & 0.080 & 0.859 & 0.838 & 0.854 & 0.756 & 0.744 & 0.753 \\ R.MGL [115] & CVPR\({}_{21}\) & ResNet-50 & 473\({}^{2}\) & 67.64M & 249.89M & 0.775 & 0.673 & 0.088 & 0.848 & 0.812 & 0.842 & 0.738 & 0.726 & 0.740 \\ S-MGL [115] & CVPR\({}_{21}\) & ResNet-50 & 473\({}^{2}\) & 63.60M & 236.66M & 0.772 & 0.664 & 0.089 & 0.850 & 0.807 & 0.842 & 0.733 & 0.721 & 0.739 \\ PFNet [24] & CVPR\({}_{21}\) & ResNet-50 & 416\({}^{4}\) & 56.04M & 25.64G & 0.782 & 0.695 & 0.085 & 0.854 & 0.814 & 0.855 & 0.751 & 0.746 & 0.758 \\ UCIR [116] & ICCV\({}_{21}\) & ResNet-50 & 473\({}^{2}\) & 48.87M & 127.12G & 0.785 & 0.686 & 0.866 & 0.861 & 0.823 & 0.854 & 0.749 & 0.738 & 0.754 \\ BAS [117] & avXiv\({}_{21}\) & ResNet-34 & 288\({}^{2}\) & 87.06M & 161.19G & 0.749 & 0.646 & 0.096 & 0.808 & 0.796 & 0.808 & 0.696 & 0.692 & 0.703 \\ NCHTT [118] & CVPL\({}_{22}\) & ResNet-50 & 288\({}^{2}\) & - & - & 0.784 & 0.652 & 0.088 & 0.841 & 0.805 & 0.840 & 0.723 & 0.707 & 0.739 \\ C2FNet-V2 [120] & TCSVT\({}_{22}\) & ResNet-50 & 352\({}^{2}\) & 44.94M & 18.10G & 0.799 & 0.730 & 0.707 & 0.869 & 0.859 & 0.869 & 0.777 & 0.770 & 0.779 \\ CubeNet [121] & PR\({}_{22}\) & ResNet-50 & 352\({}^{2}\) & - & - & 0.788 & 0.682 & 0.085 & 0.852 & 0.838 & 0.860 & 0.734 & 0.732 & 0.750 \\ ERNet [122] & PR\({}_{22}\) & ResNet-50 & 352\({}^{2}\) & 69.76M & 20.050 & 0.779 & 0.698 & 0.085 & 0.852 & 0.842 & 0.858 & 0.731 & 0.729 & 0.742 \\ TPFNet [123] & TV\({}_{22}\) & RE2Net-50 & 352\({}^{2}\) & 32.95M & 1.296M & 0.807 & 0.725 & 0.074 & 0.880 & 0.861 & 0.883 & 0.77 & 0.772 & 0.785 \\ FAPNet [126] & TIP\({}_{22}\) & RE2Net-50 & 352\({}^{2}\) & 29.52M & 29.69M & 0.815 & 0.734 & 0.706 & 0.877 & 0.865 & 0.880 & 0.776 & 0.766 & 0.792 \\ BSANet [125] & AAAI\({}_{22}\) & ResNet-50 & 384\({}^{2}\) & 23.58M & 27.90G & 0.794 for those models using convolutional-based backbones, several interesting findings are observed: \(\bullet\) CamoFormer-C [147] achieved the best performance on CAMO with the ConvNeXt [177] based backbone, even surpassing some metrics produced by transformer-based methods, such as \(S_{\alpha}\) value: 0.859 (CamoFormer-C) _vs._ 0.856 (DTINet [132]) _vs._ 0.849 (HitNet [142]). However, CamoFormer-R [147] with ResNet-50 backbone was unable to outperform competitors with the same backbone, such as using multi-scale zooming (ZoomNet [133]) and iterative refinement (SegMaR [135]) strategies. \(\bullet\) As for those Res2Net-based models, FDNet [134] achieves the top performance on CAMO with high-resolution input of 416\({}^{2}\). Besides, SINetV2 [22] and FAPNet [126] also achieve satisfactory results using the same backbone but with a small input size of 352\({}^{2}\). \(\bullet\) DGNet [27], is an efficient model that stands out with its top#3 performance compared to heavier models like JSCOD [107] (121.63M) and PopNet [148] (181.05M), despite having only 19.22M parameters and 1.20G computation costs. Its performance-efficiency balance makes it a promising architecture for further exploration of its potential capabilities. \(\bullet\) Interestingly, CRNet [141] - a weakly-supervised model - competes favorably with early fully-supervised model SINet [19]. It suggests that there is room for developing models to bridge the gap toward better data-efficient learning, _e.g._, self-/semi-supervised learning. Furthermore, transformer-based methods significantly improve performance due to their superior long-range modeling capabilities. We here test four transformer-based models on the CAMO testing dataset, yielding three noteworthy findings: \(\bullet\) CamoFormer-S [147], utilizes a Swin transformer design to enhance the hierarchical modeling ability on concealed content, resulting in superior performance across the entire CAMO benchmark. We also observed that the PVT-based variant CamoFormer-P [147] achieves comparable results but with fewer parameters, _i.e._, 71.40M (CamoFormer-P) _vs._ 97.27M (CamoFormer-R). \(\bullet\) DTINet [132] is a dual-branch network that utilizes the MiT-B5 semantic segmentation model from SegFormer [178] as backbone. Despite having 266.33M parameters, it has not delivered impressive performance due to the challenges of balancing such two heavy branches. Nevertheless, this attempt defies our preconceptions and inspires us to investigate the generalizability of semantic segmentation models in concealed scenarios. \(\bullet\) We also investigate the impact of input resolution on the performance of different models. HitNet [142] uses a high-resolution image of 704\({}^{2}\), which can improve the detection of small targets, but at the expense of increased computation costs. Similarly, convolutional-based approaches like ZoomNet [133] achieved impressive results by taking multiple inputs with different resolutions (the largest being 576\({}^{2}\)) to enhance segmentation performance. However, not all models benefit from this approach. For instance, PopNet [148] with a resolution of 480\({}^{2}\) fails to outperform SINetV2 [22] with 352\({}^{2}\) in all metrics. This observation raises two critical questions: should high-resolution be used in concealed scenarios, and how can we develop an effective strategy for detecting concealed objects of varying sizes? We will propose potential solutions to these questions and present an interesting analysis of the COD10K in SS5.5. ### _Quantitative Analysis on NC4K_ Compared to the CAMO dataset, the NC4K [23] dataset has a larger data scale and sample diversity, indicating subtle changes \begin{table} \begin{tabular}{|r|c|c|c|c|c|c|c|c|c|c|} \hline **Model** & **Puff/Year** & **Backbone** & \(S_{\alpha}\) & \(F_{\alpha}^{\prime}\) & \(M\)\(\downarrow\) & \(F_{\alpha}^{\prime\prime}\) & \(F_{\alpha}^{\prime\prime}\) & \(F_{\beta}^{\prime\prime}\) & \(F_{\beta}^{\prime\prime}\) & \(F_{\beta}^{\prime\prime}\) \\ \hline \hline \multicolumn{10}{|c|}{**\(\bullet\) Convolution-based Backbone**} \\ \hline SINet [19] & CVPR\({}_{20}\) & ResNet-50 & 0.808 & 0.723 & 0.058 & 0.883 & 0.871 & 0.883 & 0.768 & 0.769 & 0.775 \\ C2FNet [112] & IJCAI\({}_{21}\) & Res2Net-50 & 0.838 & 0.762 & 0.049 & 0.901 & 0.897 & 0.904 & 0.788 & 0.795 & 0.810 \\ TINet [114] & AAAI\({}_{21}\) & ResNet-50 & 0.829 & 0.734 & 0.055 & 0.882 & 0.879 & 0.890 & 0.766 & 0.773 & 0.793 \\ ISCOD [107] & CVPR\({}_{20}\) & ResNet-50 & 0.842 & 0.771 & 0.047 & 0.906 & 0.898 & 0.907 & 0.803 & 0.806 & 0.816 \\ LSR [23] & CVPR\({}_{21}\) & ResNet-50 & 0.840 & 0.766 & 0.048 & 0.904 & 0.895 & 0.907 & 0.802 & 0.804 & 0.815 \\ R-MG. [115] & CVPR\({}_{21}\) & ResNet-50 & 0.833 & 0.740 & 0.052 & 0.890 & 0.867 & 0.893 & 0.778 & 0.800 \\ S-MGL [115] & CVPR\({}_{21}\) & ResNet-50 & 0.829 & 0.731 & 0.055 & 0.885 & 0.863 & 0.893 & 0.771 & 0.777 & 0.797 \\ PFNet [24] & CVPR\({}_{21}\) & ResNet-50 & 0.829 & 0.745 & 0.053 & 0.894 & 0.887 & 0.898 & 0.779 & 0.784 & 0.799 \\ UGIR [116] & ICCV\({}_{21}\) & Res2Net-50 & 0.839 & 0.747 & 0.052 & 0.889 & 0.874 & 0.899 & 0.779 & 0.787 & 0.807 \\ BAS [117] & avX\({}_{21}\) & ResNet-34 & 0.817 & 0.732 & 0.058 & 0.868 & 0.859 & 0.872 & 0.767 & 0.772 & 0.782 \\ NCMT [118] & CVU may have occurred. Table V presents quantitative results on the current largest COS testing dataset with 4,121 samples. The benchmark includes 28 convolutional-based and four transformer-based approaches. Our observations are: \(\bullet\) CamoFormer-C [147] still outperforms all methods on NC4K. In contrast to the awkward situation observed on CAMO as described in SS5.3, the ResNet-50 based CamoFormer-R [147] now performs better than two other competitors (_i.e._, SegMaR [135] and ZoomNet [133]) on NC4K. These results confirm the effectiveness of CamoFormer's decoder design in mapping latent features back to the prediction space, particularly for more complicated scenarios. \(\bullet\) DGNet [27] shows less promise on the challenging NC4K dataset, possibly due to its restricted modeling capability with small model parameters. Nevertheless, this drawback provides an opening for modification since the model has a lightweight and simple architecture. \(\bullet\) While PopNet [148] may not perform well on small-scale CMAO datasets, it has demonstrated potential in challenging NC4K dataset. This indicates that using extra network to synthesize depth priors would be helpful for challenging samples. When compared to SINeV2 based on Res2Net-50 [22], PopNet has a heavier design (188.05M _vs._ 26.98M) and larger input resolution (512\({}^{2}\)_vs._ 352\({}^{2}\)), but only improves the \(E_{\phi}^{mn}\) value by 0.6%. \(\bullet\) Regarding the CamoFormer [147] model, there is now a noticeable difference in performance between its two variants. Specifically, the CamoFormer-S variant based on Swin-B lags behind while the CamoFormer-P variant based on PVTV2-B4 performs better. ### _Quantitative Analysis on COD10K_ In Table VI, we present a performance comparison of 36 competitors, including 32 convolutional-based models and four transformer-based models, on the COD10K dataset with diverse concealed samples. Based on our evaluation, we have made the following observations: \(\bullet\) CamoFormer-C [147], which has a robust backbone, remains the best-performing method among all convolutional-based methods. Similarly to its performance on NC4K, CamoFormer-R [147] has once again outperformed strong competitors with identical backbones such as SegMaR [135] and ZoomNet [133]. \(\bullet\) Similar to its performance on the NC4K dataset, PopNet [148] achieves consistently high results on the COD10K dataset, ranking second only to CamoFormer-C [147]. We believe that prior knowledge of the depth of the scene plays a crucial role in enhancing the understanding of concealed environments. This insight will motivate us to investigate more intelligent ways to learn structural priors, such as incorporating multi-task learning or heuristic methods into our models. \(\bullet\) Notably, HitNet [142] achieves the highest performance on the COD10K benchmark, outperforming models with stronger backbones like Swin-B and PVTV2-B4. To understand why this is the case, we calculated the average resolution of all samples in the CAMO (W=693.89 and H=564.22), NC4K (W=709.19 and H=529.61), and COD10K (W=963.34 and H=740.54) datasets. We found that the testing set for COD10K has the highest overall resolution, which suggests that models utilizing higher resolutions or multi-scale modeling would benefit from this characteristic. \begin{table} \begin{tabular}{|r|c|c|c|c|c|c|c|c|c|c|} \hline **Model** & **Puff/Year** & **Backbone** & \(\mathbf{S_{\alpha}}\) & \(\mathbf{F_{\theta}}\) & \(\mathbf{M\downarrow}\) & \(\mathbf{E_{\theta}}^{m}\) & \(\mathbf{F_{\theta}}^{m}\) & \(\mathbf{F_{\theta}}^{m}\) & \(\mathbf{F_{\theta}}^{m}\) & \(\mathbf{F_{\theta}}^{m}\) \\ \hline \hline \multicolumn{10}{|c|}{**\(\bullet\) Convolution-based Backbone**} \\ \hline SINet [19] & CVPR\({}_{20}\) & ResNet-50 & 0.776 & 0.631 & 0.043 & 0.867 & 0.864 & 0.874 & 0.667 & 0.679 & 0.691 \\ D2CNet [111] & TIE\({}_{21}\) & ResNet-50 & 0.807 & 0.680 & 0.037 & 0.879 & 0.876 & 0.887 & 0.702 & 0.720 & 0.736 \\ C2FNet [112] & IJCAI\({}_{1}\) & Res2Net-50 & 0.813 & 0.686 & 0.036 & 0.886 & 0.890 & 0.900 & 0.703 & 0.723 & 0.743 \\ TINet [114] & AAAI\({}_{21}\) & ResNet-50 & 0.793 & 0.635 & 0.042 & 0.848 & 0.861 & 0.878 & 0.652 & 0.679 & 0.712 \\ JSCOD [107] & CVPR\({}_{20}\) & ResNet-50 & 0.809 & 0.684 & 0.035 & 0.882 & 0.884 & 0.891 & 0.705 & 0.721 & 0.738 \\ LSR [23] & CVPR\({}_{21}\) & ResNet-50 & 0.804 & 0.673 & 0.037 & 0.883 & 0.880 & 0.892 & 0.669 & 0.715 & 0.732 \\ R-MGL [115] & CVPR\({}_{21}\) & ResNet-50 & 0.814 & 0.666 & 0.035 & 0.865 & 0.852 & 0.890 & 0.681 & 0.711 & 0.738 \\ S-MGL [115] & CVPR\({}_{21}\) & ResNet-50 & 0.811 & 0.655 & 0.037 & 0.851 & 0.845 & 0.899 & 0.667 & 0.702 & 0.733 \\ PFNet [24] & CVPR\({}_{21}\) & ResNet-50 & 0.800 & 0.660 & 0.040 & 0.868 & 0.877 & 0.890 & 0.676 & 0.701 & 0.725 \\ UGFR [116] & ICCV\({}_{21}\) & ResNet-50 & 0.818 & 0.667 & 0.035 & 0.850 & 0.853 & 0.891 & 0.671 & 0.712 & 0.742 \\ BAS [117] & AVRV\({}_{21}\) & ResNet-34 & 0.802 & 0.677 & 0.038 & 0.869 & 0.855 & 0.870 & 0.707 & 0.715 & 0.729 \\ NCHIT [118] & CVUU\({}_{22}\) & ResNet-50 & 0.792 & 0.991 & 0.046 & 0.794 & 0.819 & 0.879 & 0.596 & 0.649 & 0.698 \\ C2FNet-V2 [120] & TCSVTZ\({}_{21}\) & Res2Net-50 & 0.811 & 0.691 & 0.036 & 0.890 & 0.887 & 0.896 & 0.718 & 0.725 & 0.742 \\ CohenNet [121] & PR\({}_{22}\) & ResNet-50 & 0.795 & 0.643 & 0.041 & 0.862 & 0.865 & 0.883 & 0.669 & 0.692 & 0.715 \\ ERRNet [122] & PR\({}_{22}\) & ResNet-50 & 0.786 & 0.630 & 0.043 & 0.845 & 0.867 & 0.886 & 0.646 & 0.675 & 0.702 \\ TPRank [123] & TVCZ\({}_{22}\) & Res2Net-50 & 0.817 & 0.683 & 0.036 & 0.869 & 0.887 & 0.903 & 0.694 & 0.724 & 0.748 \\ FAPNet [125] & TIP\({}_{22}\) & Res2Net-50 & 0.822 & 0.694 & 0.036 & 0.875 & 0.888 & 0.902 & 0.707 & 0.731 & 0.758 \\ BSANet [125] & AAAI\({}_{22}\) & Res2Net-50 & 0.818 & 0.699 & 0.034 & 0.894 & 0.891 & 0.901 & 0.723 & 0.738 & 0.753 \\ OCENet [129] & WVV2\({}_{22}\) & ResNet-50 & 0.827 & 0.707 & 0.033 & 0.885 & 0.894 & 0.905 & 0.718 & 0.741 & 0.764 \\ BGNet [130] & IJCAI\({}_{22}\) & Res2Net-50 & 0.831 & 0.722 & 0.033 & 0.902 & 0.901 & 0.911 & 0.739 & 0.753 & 0.774 \\ PreyNet [131] & MJA\({}_{2}\) & ResNet-50 & 0.813 & 0.697 & 0.034 & 0.894 & 0.881 & 0.891 & 0.731 & 0.736 & 0.747 \\ ZomNet [133] & CVPR\({}_{22}\) & Res2Net-50 & 0.803 & 0.729 & 0.029 & 0.893 & 0.888 & 0.991 & 0.741 & 0.766 & 0.780 \\ FDNet [134] & CVPR\({}_{22}\) & Res2Net-50 & 0.840 & 0.729 & 0. Therefore, HitNet is an excellent choice for detecting concealed objects in scenarios where high-resolution images are available. ### _Qualitative Comparison_ This section visually assesses the performance of current top models on challenging and complex samples that are prone to failure. We compare qualitative results predicted by ten groups of top-performing models, including six convolutional-based models (_i.e._, CamoFormer-C [147], DGNet [27], PopNet [148], ZoomNet [133], FDNet [134] and SINetV2 [22]), two transformer-based models (_i.e._, CamoFormer-S [147] and HitNet [142]), as well as two other competitors (_i.e._, the earliest baseline SINet [19] and a weakly-supervised model CRNet [141]). All samples are selected from the COD10K testing dataset according to seven fine-grained attributes. The qualitative comparison is presented in Fig. 4, revealing several interesting findings. \(\bullet\) The attribute of multiple objects (MO) poses a challenge due to the high false-negative rate in current top-performing models. As depicted in the first column of Fig. 4, only two out of ten models could locate the white flying bird approximately, as indicated by the red circle in the GT mask. These two models are CamoFormer-S [147], which employs a robust transformer-based encoder, and FDNet [134], which utilizes a frequency domain learning strategy. \(\bullet\) The models we tested can accurately detect big objects (BO) by precisely locating the target's main part. However, these models struggle to identify smaller details such as the red circles highlighting the load's claws in the second column of Fig. 4. \(\bullet\) Small object (SO) attribute presents a challenge as it only occupies a small area in the image, typically less than 10% of the total pixels as reported by COD10K [19]. As shown in the third column of Fig. 4, only two models (CamoFormer-S and CamoFormer-C [147]) can detect a cute cat lying on the ground in the distance. Such a difficulty arises for two main reasons: firstly, models struggle to differentiate small objects from complex backgrounds or other irrelevant objects in an image; secondly, detectors may miss small regions due to down-sampling operations caused by low-resolution inputs. \(\bullet\) Out-of-view (OV) attribute refers to objects partially outside the image boundaries, leading to incomplete representation. To address this issue, a model should have a better holistic understanding of the concealed scene. As shown in the fourth column of Fig. 4, both CamoFormer-C [147] and FDNet [134] can handle the OV attribute and maintain the object's integrity. However, two transformer-based models failed to do so. This observation has inspired us to explore more efficient methods, such as local modeling within convolutional frameworks and cross-domain learning strategies. \(\bullet\) Shape complexity (SC) attribute indicates that an object contains thin parts, such as an animal's foot. In the fifth column of Fig. 4, the stick insect's feet are a good example of this complexity, being elongated and slender and thus difficult to predict accurately. Only HitNet [142] with high-resolution inputs can predict a right-bottom foot (indicated by a red circle). \(\bullet\) The attribute of occlusion (OC) refers to the partial occlusion of objects, which is a common challenge in general scenes [183]. In Fig. 4, for example, the sixth column shows two owls partially occluded by a wire fence, causing their visual regions to be separated. Unfortunately, most of the models presented were unable to handle such cases. \(\bullet\) Indefinable boundary (IB) attribute is hard to address since its uncertainty between foreground and background. As shown in the last column of Fig. 4, a matting-level sample. \(\bullet\) In the last two rows of Fig. 4, we display the predictions generated by SINet [19], which was our earliest baseline model. Current models have significantly improved location accuracy, boundary details, and other aspects. Additionally, CRNet [141], a weakly-supervised method with only weak label supervision, can effectively locate target objects to meet satisfactory standards. ## 6 Discussion and Outlook Based on our literature review and experimental analyses, we discuss five challenges and potential CSU-related directions in this section. **Annotation-Efficient Learning.** Deep learning techniques have significantly advanced the field of CSU. However, conventional supervised deep learning is data-hungry and resource-consuming. In practical scenarios, we hope the models can work on limited resources and have good generalizability. Thus developing effective learning strategies for CSU tasks is a promising direction, _e.g._, weakly-supervised strategy in CRNet [141]. **Domain Adaptation.** Camouflaged samples are generally collected from natural scenes. Thus, deploying the models to detect concealed objects in auto-driving scenarios is challenging. Recent practice demonstrates that various techniques can be used to alleviate this problem, _e.g._, domain adaptation [184], transfer learning [185], few-shot learning [186], and meta-learning [187]. **High-Fidelity Synthetic Dataset.** To alleviate algorithmic biases, increasing the diversity and scale of data is crucial. The rapid development of AI-generated content (AIGC) [188] and deep generative models, such as generative adversarial networks [189, 190, 191] and diffusion models [192, 193], is making it easier to create synthetic data for general domains. Recently, to address the scarcity of multi-pattern training images, Luo _et al._[106] proposed a diffusion-based image generation framework that generates salient objects on a camouflaged sample while preserving its original label. Therefore, a model should be capable of distinguishing between camouflaged and salient objects to achieve a robust feature representation. **Neural Architecture Search.** Automatic network architecture search (NAS) is a promising research direction that can discover optimal network architectures for superior performance on a given task. In the context of concealment, NAS can identify more effective network architectures to handle complex background scenes, highly variable object appearances, and limited labeled data. This can lead to the developing of more efficient and effective network architectures, resulting in improved accuracy and efficiency. Combining NAS with other research directions, such as domain adaptation and data-efficient learning, can further enhance the understanding of concealed scenes. These avenues of exploration hold significant potential for advancing the state-of-the-art and warrant further investigation in future research. **Large Model and Prompt Engineering.** This topic has gained popularity and has even become a direction for the natural language processing community. Recently, the Segment Anything Model (SAM) [194] has revolutionized computer vision algorithms, although it has limitations [195] in unprompted settings on several concealed scenarios. One can leverage the prompt Fig. 4: **Qualitative results of ten COS approaches. More descriptions on visual attributes in each column refer to §5.6.** engineering paradigm to simplify workflows using a well-trained robust encoder and task-specific adaptions, such as task-specific prompts and multi-task prediction heads. This approach is expected to become a future trend within the computer vision community. Large language models (LLMs) have brought both new opportunities and challenges to AI, moving towards artificial general intelligence further. However, it is challenging for academia to train the resource-consuming large models. There could be a promising paradigm that the state-of-the-art deep CSU models are used as the domain experts, and meanwhile, the large models could work as an external component to assist the expert models by providing an auxiliary decision, representation, _etc._ ## 7 Defect Segmentation Dataset Industrial defects usually originate from the undesirable production process, _e.g._, mechanical impact, workpiece friction, chemical corrosion, and other unavoidable physical, whose external visual form is usually with unexpected patterns or outliers, _e.g._, surface scratches, spots, holes on industrial devices; color difference, indentation on fabric surface; impurities, breakage, stains on the material surface, _etc._ Though previous works achieve promising advances for identifying visual defects by vision-based techniques, such as classification [196, 197, 198], detection [199, 200, 201], and segmentation [202, 203, 204]. These techniques work on the assumption that defects are easily detected, but they ignore those challenging defects that are "seamlessly" embedded in their materials surroundings. With this, we elaborately collect a new multi-scene benchmark, named CDS2K, for the concealed defect segmentation task, whose samples are selected from existing industrial defect databases. ### _Dataset Organisation_ To create a dataset of superior quality, we established three principles for selecting data: (a) The chosen sample should include at least one defective region, which will serve as a positive example. (b) The defective regions should have a pattern similar to the background, making them difficult to identify. (c) We also select normal cases as negative examples to provide a contrasting perspective with the positive ones. These samples were selected from the following well-known defect segmentation databases. \(\bullet\) MVTecAD4[205, 206] contains several positive and negative samples for unsupervised anomaly detection. We manually select 748 positive and 746 negative samples with concealed patterns from two main categories: (a) object category as in the 1\({}^{st}\) row of Fig. 5: pill, screw, tile, transistor, wood, and zipper. (b) texture category as in the 2\({}^{nd}\) row of Fig. 5: bottle, capsule, carpet, grid, leather, and metal nut. The number of positive/negative samples is shown with yellow circles in Fig. 5 Footnote 4: [https://www.mvtec.com/company/research/datasets/mvtec-ad](https://www.mvtec.com/company/research/datasets/mvtec-ad) \(\bullet\) NEU5 provides three different database: oil pollution defect images [207] (OPDI), spot defect images [208] (SDI), and steel pit defect images [209] (SPDI). As shown in the third row (green Fig. 5: **Sample gallery of our CDS2K.** It is collected from five sub-databases: (a-l) MVTecAD, (m-o) NEU, (p) CrackForest, (q) KolektorSDD, and (r) MagneticTile. The defective regions are highlighted with red rectangles. (Top-Right) Word cloud visualization of CDS2K. (Bottom) The statistic number of positive/negative samples of each category in our CDS2K. circles) of Fig. 5, we select 10, 20, and 15 positive samples from these databases separately. \(\bullet\) CrackForest6[210], [211] is a densely-annotated road crack image database for the health monitoring of urban road surface. We select 118 samples with concealed patterns from them, and the samples are shown in the third row (red circle) of Fig. 5. Footnote 6: [https://github.com/cuillimeng/CrackForest-dataset](https://github.com/cuillimeng/CrackForest-dataset) \(\bullet\) KolektorSD7[203] collected and annotated by Kolektor Group, which contains several defective and non-defective surfaces from the controlled industrial environment in a real-world case. We manually select 31 positive and 30 negative samples with concealed patterns, and the samples are shown in the third row (blue circle) of Fig. 5. Footnote 7: [https://www.vicos.is/resources/kolektorsdd/](https://www.vicos.is/resources/kolektorsdd/) \(\bullet\) Magnetic Tile Defect8[212] datasets contains six common magnetic tile defects and corresponding dense annotations. We picked 388 positive and 386 negative examples, displayed as white circles in Fig. 5. Footnote 8: [https://github.com/abin24/Magnetic-tile-defect-datasets](https://github.com/abin24/Magnetic-tile-defect-datasets) ### _Dataset Description_ The CDS2K comprises 2,492 samples, consisting of 1,330 positive and 1,162 negative instances. Three different human-annotated labels are provided to each sample - category, bounding box, and pixel-wise segmentation mask. Fig. 6 illustrates examples of these annotations. The average ratio of defective regions for each category is presented in Table VII, which indicates that most of the defective regions are relatively small. ### _Evaluation on CDS2K_ Here, we evaluate the generalizability of current cutting-edge COS models on the positive samples of CDS2K. Regrading the code availability, we here choose four top-performing COS approaches: SINetV2[22], DGNet[27], CamoFormer-P[147], and HitNet[142]. As reported in Table VIII, our observations indicate that these models are not effective in handling cross-domain samples, highlighting the need for further exploration of the domain gap between natural scene and downstream applications. ## 8 Conclusion This paper aims to provide an overview of deep learning techniques tailored for concealed scene understanding (CSU). To help the readers view the global landscape of this field, we have made four contributions: Firstly, we provide a detailed survey of CSU, which includes its background, taxonomy, task-specific challenges, and advances in the deep learning era. To the best of our knowledge, this survey is the most comprehensive one to date. Secondly, we have created the largest and most up-to-date benchmark for concealed object segmentation (COS), which is a foundational and prosperous direction at CSU. This benchmark allows for a quantitative comparison of state-of-the-art techniques. Thirdly, we have collected the largest concealed \begin{table} \begin{tabular}{|r|c||c c c c c c|c|} \hline \multicolumn{2}{|c||}{Category} & 0\% \(\leq\)\(r\)\(<\)15\% & 15\% \(\leq\)\(r\)\(<\)10\% & 10\% \(\leq\)\(r\)\(\leq\)20\% & 20\% \(\leq\)\(r\)\(<\)30\% & 30\% \(\leq\)\(r\)\(<\)40\% & 40\% \(\leq\)\(r\)\(<\)50\% & **Total** \\ \hline \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & Objects-Full & 41 & 53 & 0 & 0 & 0 & 0 & **96** \\ & Objects-Server & 71 & 1 & 0 & 0 & 0 & 0 & 72 \\ & Objects-Tile & 0 & 30 & 28 & 7 & 2 & 0 & 67 \\ & Objects-Transistor & 1 & 7 & 0 & 0 & 0 & 0 & **8** \\ & Objects-Wood & 2 & 26 & 2 & 0 & 0 & 0 & **30** \\ & Objects-Zipper & 16 & 102 & 1 & 0 & 0 & 0 & **119** \\ & Texture-Bottle & 3 & 39 & 20 & 1 & 0 & 0 & 63 \\ & Texture-Capsule & 17 & 8 & 0 & 0 & 0 & 0 & **25** \\ & Texture-Carpet & 37 & 45 & 0 & 0 & 0 & 0 & **82** \\ & Texture-Grid & 39 & 18 & 0 & 0 & 0 & 0 & 57 \\ & Texture-Leader & 70 & 21 & 0 & 0 & 0 & 0 & **91** \\ & Texture-Metal Nut & 6 & 31 & 1 & 0 & 0 & 0 & **38** \\ \hline \multirow{2}{*}{ \begin{tabular}{} \end{tabular} } & OPID & 10 & 0 & 0 & 0 & 0 & 0 & **10** \\ & SDI & 20 & 0 & 0 & 0 & 0 & 0 & **20** \\ & SPID & 15 & 0 & 0 & 0 & 0 & 0 & **15** \\ \hline \hline \multicolumn{2}{|c||}{CrackForest} & 28 & 90 & 0 & 0 & 0 & 0 & **118** \\ \hline \multicolumn{2}{|c||}{KolekteSDD} & 31 & 0 & 0 & 0 & 0 & 0 & **31** \\ \hline \multicolumn{2}{|c||}{Magnetic Tile Defect} & 216 & 70 & 27 & 27 & 24 & 24 & **388** \\ \hline \multicolumn{2}{|c||}{**Total**} & **623** & **543** & **79** & **35** & **26** & **24** & **1330** \\ \hline \end{tabular} \end{table} TABLE VII: **Statsistic of positive samples in CDS2K. The region ratio is calculated by \(r\)\(=\) defective pixels/all pixels for a given image. Of note, we only count the number of positive samples in five sub-datasets.** \begin{table} \begin{tabular}{|c|c|c||c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c||}{**Model**} & Pub/Year & Backbone & \(\mathbf{S}_{a}\)\(\uparrow\) & \(\mathbf{J}_{a}^{a}\)\(\uparrow\) & \(\mathbf{M}\)\(\downarrow\) & \(\mathbf{E}_{a}^{a}\)\(\uparrow\) & \(\mathbf{E}_{a}^{a}\)\(\uparrow\) & \(\mathbf{E}_{a}^{a}\)\(\uparrow\) & \(\mathbf{F}_{a}^{a}\)\(\uparrow\) & \(\mathbf{F}_{a}^{a}\)\(\uparrow\) & \(\mathbf{F}_{a}^{a}\)\(\uparrow\) \\ \hline \hline SINetV2[22] & TPAMI2 & Res2Net-50 & 0.551 & 0.215 & 0.102 & 0.509 & 0.567 & 0.597 & 0.223 & 0.248 & 0.258 \\ HitNet[142] & AAAI23 & PVTV2-B2 & 0.563 & 0.276 & 0.118 & 0.574 & 0.564 & 0.570 & 0.298 & 0.298 & 0.299 \\ DGNet[27] & MIR23 & EfficientNet-B4 & 0.578 & 0.258 & 0.089 & 0.552 & 0.569 & 0.579 & 0.274 & 0.291 & 0.297 \\ CamoFormer-P[147] & arXiv23 & PVTV2-B4 & 0.589 & 0.298 & 0.100 & 0.590 & 0.588 & 0.596 & 0.330 & 0.329 & 0.339 \\ \hline \end{tabular} \end{table} TABLE VIII: **Quantitative comparison on the positive samples of CDS2K.** Fig. 6: **Visualization of different annotations. We select a group of images from the MVTecAD database, including a negative (a) and a positive (b) sample. Corresponding annotations are provided: category (scratches on wood) and defect locations: bounding box (c) and segmentation mask (d).** defect segmentation dataset, CDS2K, by including hard cases from diverse industrial scenarios. We have also constructed a comprehensive benchmark to evaluate the generalizability of deep CSU in practical scenarios. Finally, we discuss open problems and potential directions for this community. We aim to encourage further research and development in this area. We would conclude from the following perspectives. (1) **Model.** The most common practice is based on the architecture of sharing UNet, which is enhanced by various attention modules. In addition, injecting extra priors and/or introducing auxiliary tasks improve the performance, while there are many potential problems to explore. (2) **Training.** Fully-supervised learning is the mainstream strategy in COS, but few researchers have addressed the challenge caused by insufficient data or labels. CRNet [141] is a good attempt to alleviate this issue. (3) **Dataset.** The existing datasets are still not large and diverse enough. This community needs more concealed samples involving more domains (_e.g._, autonomous driving and clinical diagnosis). (4) **Performance.** Transformer and ConvNext based models outperform other competitors by a clear margin. Cost-performance tradeoff is still under-studied, for which DGNet [27] is a good attempt. (5) **Metric.** There is no well-defined metrics that can consider the different camouflage degree of different data to give a comprehensive evaluation. This causes unfair comparisons. Besides, existing CSU methods focus on the appearance attributes of the concealed scenes (_e.g._, color, texture, boundary) to distinguish concealed objects without enough perception and output from the semantic perspective (_e.g._, relationships between objects). However, semantics is a good tool for bridging the human and machine intelligence gap. Therefore, beyond the visual space, semantic level awareness is key to the next-generation concealed visual perception. In the future, CSU models should incorporate various semantic abilities, including integrating high-level semantics, learning vision-language knowledge [213], and modeling interactions across objects. We hope that this survey provides a detailed overview for new researchers, presents a convenient reference for relevant experts, and encourages future research.
2310.03977
Perfect Alignment May be Poisonous to Graph Contrastive Learning
Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones. However, few of researchers have focused on the inner law behind specific augmentations used in graph-based learning. What kind of augmentation will help downstream performance, how does contrastive learning actually influence downstream tasks, and why the magnitude of augmentation matters so much? This paper seeks to address these questions by establishing a connection between augmentation and downstream performance. Our findings reveal that GCL contributes to downstream tasks mainly by separating different classes rather than gathering nodes of the same class. So perfect alignment and augmentation overlap which draw all intra-class samples the same can not fully explain the success of contrastive learning. Therefore, in order to understand how augmentation aids the contrastive learning process, we conduct further investigations into the generalization, finding that perfect alignment that draw positive pair the same could help contrastive loss but is poisonous to generalization, as a result, perfect alignment may not lead to best downstream performance, so specifically designed augmentation is needed to achieve appropriate alignment performance and improve downstream accuracy. We further analyse the result by information theory and graph spectrum theory and propose two simple but effective methods to verify the theories. The two methods could be easily applied to various GCL algorithms and extensive experiments are conducted to prove its effectiveness. The code is available at https://github.com/somebodyhh1/GRACEIS
Jingyu Liu, Huayi Tang, Yong Liu
2023-10-06T02:22:49Z
http://arxiv.org/abs/2310.03977v2
# Perfect alignment may be poisonous to graph contrastive learning ###### Abstract Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones. However, limited research has been conducted on the inner law behind specific augmentations used in graph-based learning. What kind of augmentation will help downstream performance, how does contrastive learning actually influence downstream tasks, and why the magnitude of augmentation matters? This paper seeks to address these questions by establishing a connection between augmentation and downstream performance, as well as by investigating the generalization of contrastive learning. Our findings reveal that GCL contributes to downstream tasks mainly by separating different classes rather than gathering nodes of the same class. So perfect alignment and augmentation overlap which draw all intra-class samples the same can not explain the success of contrastive learning. Then in order to comprehend how augmentation aids the contrastive learning process, we conduct further investigations into its generalization, finding that perfect alignment that draw positive pair the same could help contrastive loss but is poisonous to generalization, on the contrary, imperfect alignment enhances the model's generalization ability. We analyse the result by information theory and graph spectrum theory respectively, and propose two simple but effective methods to verify the theories. The two methods could be easily applied to various GCL algorithms and extensive experiments are conducted to prove its effectiveness. ## 1 Introduction Graph Neural Networks (GNNs) have been successfully applied in various fields such as recommendation systems (He et al., 2020), drug discovery (Liu et al., 2018), and traffic analysis (Wu et al., 2019), etc. However, most GNNs require labeled data for training, which may not always be available or easily accessible. To address this issue, Graph Contrastive Learning (GCL), which does not rely on labels, has gained popularity as a new approach to graph representation learning (Velickovic et al., 2018; You et al., 2020). GCL often generates new graph views through data augmentation (Chen et al., 2020; Zhu et al., 2020). GCL uses nodes augmented from the same node as positive samples and other nodes as negative samples, then maximize similarity between positive samples and minimize similarity between negative ones (Wang and Isola, 2020; Hassani and Khasahmadi, 2020). Data augmentation can be categorized into three types (Zhao et al., 2022): random augmentation (Velickovic et al., 2018; Zhu et al., 2020), rule-based augmentation (Zhu et al., 2021; Wei et al., 2023; Liu et al., 2022), and learning-based augmentation (Suresh et al., 2021; Jiang et al., 2019). For instance, Zhu et al. (2020) randomly masks node attributes and edges in graph data to obtain augmented graphs; Zhu et al. (2021) uses node degree to measure its importance and mask those unimportant with higher probability; And Suresh et al. (2021) uses a model to learn the best augmentation and remove irrelevant information as much as possible. However, most data augmentation algorithms are designed heuristically, and there is a lack of theoretical analysis on how these methods will influence the downstream performance. Some researchers have explored the generalization ability of contrastive learning (Arora et al., 2019; Wang and Isola, 2020; Huang et al., 2021). They propose contrastive learning works by gathering positive pairs and separating negative samples uniformly. Wang et al. (2022b) argues that perfect alignment and uniformity alone cannot guarantee optimal performance. They propose that through stronger augmentation, there will be support overlap between different intra-class samples, which is called augmentation overlap (Saunshi et al., 2022; Huang et al., 2021). Thus, the alignment of positive samples will also cluster all the intra-class samples together, and lead to class-separated representations due to the limited inter-class overlap. However, Saunshi et al. (2022) points out that augmentation overlap may be relatively rare despite the excellent performance of contrastive learning methods. Hence, chances are that the success of contrastive learning cannot be solely attributed to alignment and augmentation overlap. It is of vital importance to evaluate how augmentation works in the contrastive learning process, why the magnitude of augmentation matters so much and how to perform better augmentation? As data augmentation on graphs could be more customized and the magnitude of augmentation can be clearly represented by the number of modified edges/nodes (You et al., 2020), we mainly study the augmentation on graphs. But it works the same in other fields. In this paper, we provide a new understanding of Graph Contrastive Learning and use a theoretical approach to analyze the impact of augmentation on model generalization. We find that with a better augmentation, the model is performing better mainly because of inter-class separating rather than intra-class gathering brought by augmentation overlap. So perfect augmentation overlap and alignment are not the key factor for contrastive learning. To further analyze the phenomena, we formulate a relationship between downstream performance, contrastive learning loss, and augmentation, reveal the reason why stronger augmentation helps class separating, and find stronger augmentation could benefit the generalization by weaken the positive pair alignment. Therefore, perfect alignment is not the key to success, and may be poisonous to contrastive learning. Then aiming to achieve a better balance between generalization and contrastive loss, we further analyze the contrastive process through information theory and graph spectrum theory. From the information theory perspective, we find augmentation should be stronger while reducing the information loss, which is actually adopted explicitly or implicitly by designed algorithms (Zhu et al., 2021; 2020; Suresh et al., 2021). From the graph spectrum theory perspective, we analyze how the graph spectrum will affect the contrastive loss and generalization (Liu et al., 2022), finding that non-smooth spectral distribution will have a negative impact on generalization. Then we propose two methods based on the theories to verify our findings. Our main contributions are as follows. (1) We reveal that when stronger augmentation is applied, contrastive learning benefits from inter-class separating more than intra-class gathering, and imperfect alignment could be more beneficial as it enlarges the inter-class distance. (2) We establish the relationship between downstream performance, contrastive learning loss, and data augmentation. Further explains why stronger augmentation helps, then we analyze the result from information theory and graph spectrum theory to guide algorithm design. (3) Based on the proposed theoretical results, we provide specific algorithms that verify the correctness of the theory. We also show that these algorithms can be extended to various contrastive learning methods to enhance their performance. (4) Extensive experiments are conducted on different contrastive learning algorithms and datasets using our proposed methods to demonstrate its effectiveness. ## 2 Augmentation and Generalization ### Preliminaries A graph can be represented as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of \(N\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the edge set. The feature matrix and the adjacency matrix are denoted as \(\mathbf{X}\in\mathbb{R}^{N\times F}\text{and }\mathbf{A}\in\{0,1\}^{N\times N}\), where \(F\) is the dimension of input feature, \(\mathbf{x}_{i}\in\mathbb{R}^{F}\) is the feature of node \(v_{i}\) and \(\mathbf{A}_{i,j}=1\) iff \((v_{i},v_{j})\in\mathcal{E}\). The node degree matrix \(\mathbf{D}=\operatorname{diag}(d_{1},d_{2},...,d_{N})\), where \(d_{i}\) is the degree of node \(v_{i}\). In contrastive learning, data augmentation is used to create new graphs \(\mathcal{G}^{1},\mathcal{G}^{2}\in\text{G}^{\text{aug}}\), and the corresponding nodes, edges, and adjacency matrices are denoted as \(\mathcal{V}^{1},\mathcal{E}^{1},\mathbf{A}^{1},\mathcal{V}^{2},\mathcal{E}^{2}, \mathbf{A}^{2}\). In the following of the paper, \(v\) is used to represent all nodes including the original nodes and the augmented nodes; \(v_{i}^{\top}\) is used to represent the augmented nodes including both \(v_{i}^{1}\) and \(v_{i}^{2}\); \(v_{i}^{0}\) represents the original nodes only. Nodes augmented from the same one, such as \((v_{i}^{1},v_{i}^{2})\), are considered as positive pairs, while others are considered as negative pairs. It is worth noting that a negative pair could come from the same graph, for node \(v_{i}^{1}\), its negative pair could be \(v_{i}^{-}\in\{v_{i}^{+}|j\neq i\}\). Graph Contrastive Learning (GCL) is a method to learn an encoder that draws the embeddings of positive pairs similar and makes negative ones dissimilar (Chen et al., 2020; Wang and Isola, 2020). The encoder calculates the embedding of node \(v_{i}\) by \(f(\mathbf{X},\mathbf{A})[i]\), which can be summarized as \(f(v_{i})\) for better comprehension, we assume that \(||f(v_{i})||=1\). The commonly used InfoNCE loss (Zhu et al., 2020) can be formulated as follows: \[\mathcal{L}_{\mathrm{NCE}}=\mathbb{E}_{p(v_{i}^{1},v_{i}^{2})}\mathbb{E}_{\{p (v_{i}^{-})\}}\left[-\log\frac{\exp(f(v_{i}^{1})^{T}f(v_{i}^{2}))}{\sum_{i=1}^ {M}\exp(f(v_{i}^{1})^{T}f(v_{i}^{-}))}\right]. \tag{1}\] We use two augmented views to perform GCL (Zhu et al., 2020; Chen et al., 2020). However, \(v_{i}^{1}\) could be replaced by \(v_{i}^{0}\)(Liu et al., 2022; He et al., 2020), as \(v_{i}^{0}\) is a special case of \(v_{i}^{1}\) where the augmentation happen to change nothing about the original view. ### How augmentation infect downstream performance Previous work (Wang and Isola, 2020) proposes that effective contrastive learning should satisfy alignment and uniformity, meaning that positive samples should have similar embeddings, _i.e._, \(f(v_{i}^{1})\approx f(v_{i}^{2})\), and features should be uniformly distributed in the unit hypersphere. However, Wang et al. (2022) pointed out that when \(\{f(v_{i}^{0})\}_{i=1}^{N}\) are uniformly distributed and \(f(v_{i}^{0})=f(v_{i}^{+})\), there is a chance that the model may converge to a trivial solution that only projects very similar features to the same embedding, and projects other features randomly, then it will perform random classification in downstream tasks although it achieves perfect alignment and uniformity. Wang et al. (2022) argues that perfect alignment and intra-class augmentation overlap would be the best solution. If two intra-class samples have augmentation overlap, then the best solution is projecting the two samples and their augmentation to the same embedding. For example, if two different nodes \(v_{i}^{0}\), \(v_{j}^{0}\) get the same augmentation \(v^{+}\), then the best solution to contrative learning is \(f(v_{i}^{0})=f(v^{+})=f(v_{j}^{0})\). So perfect alignment and augmentation overlap could project all intra-class nodes to the same embedding, and project inter-class nodes differently because of limited inter-class overlap. However, a stronger augmentation would connect more intra-class nodes by overlap, but will inevitably make achieving perfect alignment more challenging. Conversely a weaker augmentation would help alignment but augmentation overlap would be rare. Therefore, it is difficult to achieve both optimal augmentation and perfect alignment. And Saunshi et al. (2022) proposes that augmentation overlap is actually rare in practice, even with quite strong augmentation. So the better performance may not be brought by perfect alignment and overlap, in order to further study how exactly augmentation helps contrastive learning, we evaluate how the downstream performance changes while the augmentation being stronger. To begin with, we give an assumption on the label consistency between positive samples, which means the class label does not change after augmentation. **Assumption 2.1** (View Invariance).: _For node \(v_{i}^{0}\), the corresponding augmentation nodes \(v_{i}^{+}\) get consistent labels: \(p(y|v_{i}^{0})=p(y|v_{i}^{+})\). Also the encoder should learn similar embeddings \(\mathbb{E}_{p(v_{i}^{0},v_{i}^{+})}||f(v_{i}^{0})-f(v_{i}^{+})||^{2}\leq\delta _{aug}^{2}\)._ This assumption is widely adopted (Arora et al., 2019; Wang et al., 2022; Saunshi et al., 2022) and reasonable. If the augmentation still keeps the basic structure and most of feature information is kept, the class label would not likely to change. Else if the augmentation destroys basic label information, the model tends to learn a trivial solution, so it is meaningless and we do not discuss the situation. As \(\delta_{aug}\) is largely decided by the augmentation when the model and dataset are given, we need to clarify the relationship between \(\delta_{aug}\) and augmentation. First, we use graph edit distance (GED) to denote the magnitude of augmentation, Trivedi et al. (2022) proposes that all allowable augmentations can be expressed using GED which is defined as minimum cost of graph edition (node insertion, node deletion, edge deletion, feature transformation) transforming graph \(\mathcal{G}^{0}\) to \(\mathcal{G}^{+}\). So a stronger augmentation could be defined as augmentation with a larger GED. **Assumption 2.2** (Positive Pair difference and Augmentation).: _While Assumption 2.1 holds i.e., \(p(y|v_{i}^{0})=p(y|v_{i}^{+})\), as the augmentation getting stronger, positive pair difference \(\delta_{aug}\) will increase, i.e., \(\delta_{aug}\propto\mathrm{GED}(\mathcal{G}^{0},\mathcal{G}^{+})\). \(\mathrm{GED}(\mathcal{G}^{0},\mathcal{G}^{+})\) indicates the graph edit distance between \(\mathcal{G}^{0}\) and \(\mathcal{G}^{+}\)._ This is a natural assumption that is likely to hold because input with a bigger difference will lead to a bigger difference in output when the model is correctly trained, which is guaranteed by \(p(y|v_{i}^{0})=p(y|v_{i}^{+})\). With the assumptions, we can get the theorem below: **Theorem 2.3** (Augmentation and Classification).: _If Assumption 2.1 holds, we know that:_ \[\mathbb{E}_{p(v,y)}f(v_{y}^{0})^{T}\mu_{y} \geq 1-\frac{1}{3}\delta_{aug}^{2}-\frac{2}{3}\delta_{aug}\delta_{ y^{+}}-\frac{1}{2}\delta_{y^{+}}^{2}, \tag{2}\] \[\mathbb{E}_{p(v,y,y^{-})}f(v_{y}^{0})^{T}\mu_{y^{-}} \geq 1-\frac{1}{3}\delta_{aug}^{2}-\frac{2}{3}\delta_{aug}\delta_{ y^{-}}-\frac{1}{2}\delta_{y^{-}}^{2}, \tag{3}\] _where \(\mu_{y}=\mathbb{E}_{p(v,y)}\left[f(v)\right]\), \(\delta_{y^{+}}^{2}=\mathbb{E}_{p(v,i,j)}||f(v_{y,i}^{0})-f(v_{y,j}^{0})||^{2}\) and \(\delta_{y^{-}}^{2}=\mathbb{E}_{p(y,y-,i,j)}||f(v_{y,i}^{0})-f(v_{y^{-}j}^{0})||^ {2}\), \(y^{-}\) stands for a class different from \(y\)._ The proof can be found in Appendix A.1. This shows that the similarity between a node and the center could be roughly represented by the positive pair difference \(\delta_{aug}\) and the inter-class/intra-class divergence \(\delta_{y^{-}}\), \(\delta_{y^{+}}\). We then use positive and negative center similarity to represent \(\mathbb{E}_{p(v,y)}f(v_{y}^{0})^{T}\mu_{y}\) and \(\mathbb{E}_{p(v,y,y^{-})}f(v_{y}^{0})^{T}\mu_{y^{-}}\), respectively. Note that we calculate the class center \(\mu_{y}\) by averaging nodes from both original view and augmented views, as the class label of nodes after augmentation remains unchanged, our class center should be more precise as more nodes are included. As we assumed in Assumption 2.2, when the augmentation becomes stronger, positive pair difference _i.e., \(\delta_{aug}\)_ would increase, and based on previous researches (Zhu et al., 2020; You et al., 2020; Velickovic et al., 2018), the express power of the model would also be enhanced initially, causing intra-class divergence \(\delta_{y^{+}}\) decreasing and inter-class divergence \(\delta_{y^{-}}\) increasing. Therefore, from Inequality (2) we can conclude that when we perform a stronger augmentation, initially, the similarity between node \(v_{y}\) and its center \(\mu_{y}\) is hard to predict as \(\delta_{aug}\) increases and \(\delta_{y^{+}}\) decreases. However, the right hand side of Inequality (3) should decrease gradually as both \(\delta_{aug}\) and \(\delta_{y^{-}}\) increase, so the negative center similarity would more likely to be lower. The experiment shown in Figure 1 confirms our suspicion. We use dropout on edges and features to perform augmentation, and the dropout rate naturally represents the magnitude of augmentation _i.e.,_ graph edit distance. Initially, as the dropout rate increases, positive center similarity may decrease sometimes, but downstream performance could be enhanced as negative center similarity decreases much faster. Therefore contrastive learning mainly contributes to downstream tasks by separating nodes of different classes rather than gathering nodes of the same class, and perfect alignment may not help as it hinders class separating. Also we can observe from Figure 1 that when we drop too much edges/features, downstream performance decreases sharply, and both positive and negative center similarity increases as too much Figure 1: PCS means positive center similarity (\(f(v_{y})^{T}\mu_{y}\)), NCS means negative center similarity (\(f(v_{y})^{T}\mu_{y^{-}}\)) and accuracy is the downstream performance. X-axis stands for dropout rate of both edge and feature, and the y-axis stands for the normalized values. information is lost and the basic assumption \(p(y|v_{i}^{0})=p(y|v_{i}^{+})\) does not hold, then a trivial solution is learned. ### augmentation and generalization Although GCL with a stronger augmentation may help to improve downstream performance, why it works stays unclear. We need to figure out the relationship between positive pair difference, contrastive loss and downstream performance to further guide algorithm design. We first define the mean cross-entropy (CE) loss below, and use it to represent downstream performance. **Definition 2.4** (Mean CE loss).: _For an encoder \(f\) and downstream labels \(y\in[1,K]\), we use the mean \(\mathrm{CE}\) loss \(\hat{\mathcal{L}}_{\mathrm{CE}}=\mathbb{E}_{p(v^{0},y)}\left[-\log\frac{ \exp(f(v^{0})^{T}\mu_{y})}{\sum_{j=1}^{k}\exp(f(v^{0})^{T}\mu_{j})}\right]\) to evaluate downstream performance, where \(\mu_{j}=\mathbb{E}_{p(v|y=j)}\left[f(v)\right]\)._ It is easy to see that mean CE loss could indicate downstream performance as it requires nodes similar to their respective class center, and different from others class centers. Also it is an upper bound of CE loss \(\mathcal{L}_{\mathrm{CE}}=\mathbb{E}_{(v^{0},y)}\left[-\log\frac{\exp(f(v^{0} )^{T}\omega_{y})}{\sum_{i=1}^{k}\exp(f(v^{0})^{T}\omega_{y_{i}})}\right]\), Arora et al. (2019) showed that the mean classifier could achieve comparable performance to learned weights, so we analyze the mean CE loss instead of the CE loss in this paper. Similar to Theorem 2.3, we calculate the class center using both original and augmented view nodes, instead of using only the original view nodes (Arora et al., 2019; Wang et al., 2022b). **Theorem 2.5** (Generalization and Positive Pair Difference).: _If Assumption 2.1 holds, and \(\mathrm{ReLU}\) is applied as activation, then the relationship between downstream performance and \(\mathrm{InfoNCE}\) loss could be represented as:_ \[\hat{\mathcal{L}}_{\mathrm{CE}} \geq\mathcal{L}_{\mathrm{NCE}}-3\delta_{aug}^{2}-2\delta_{aug}- \log\frac{M}{K}-\frac{1}{2}\operatorname{Var}(f(v^{+})|y)\] \[\quad-\sqrt{\operatorname{Var}(f(v^{0})|y)}-e\operatorname{Var}( \mu_{y})-O(M^{-\frac{1}{2}}),\] _where \(M\) is number of negative samples1, \(K\) is number of classes, \(\operatorname{Var}(f(v^{0})|y)=\mathbb{E}_{p(y)}\mathbb{E}_{p(v^{0}|y)}||f(v^ {0})-\mathbb{E}_{p(v|y)}f(v)||^{2}\) and \(\operatorname{Var}(f(v^{+})|y)=\mathbb{E}_{p(y)}\mathbb{E}_{p(v^{+}|y)}||f(v^ {+})-\mathbb{E}_{p(v|y)}f(v)||^{2}\) both mean intra-class variance, and \(\operatorname{Var}(\mu_{y})=\mathbb{E}_{p(y)}||\mathbb{E}_{p(v|y)}f(v)- \mathbb{E}_{p(v)}f(v)||^{2}\) represents the variance of \(K\) class centers._ Footnote 1: the generalization are correlated with \(-\log M-O(M^{-\frac{1}{2}})\), which is decreasing when \(M\) increases and \(M\) is large, so the theorem encourages large negative samples. The proof can be found in Appendix A.2. Theorem 2.5 suggests a gap between \(\hat{\mathcal{L}}_{\mathrm{CE}}\) and \(\mathcal{L}_{\mathrm{NCE}}\), meaning that the encoders that minimize \(\mathcal{L}_{\mathrm{NCE}}\) may not yield optimal performance on downstream tasks. Furthermore, it suggests that a higher positive pair difference \(\delta_{aug}\) and \(\operatorname{Var}(f(v)|y)\) would enhance generalization and potentially improve performance on downstream tasks. Also Inequality (2) also demonstrates that \(f(v_{y})^{T}\mu_{y}\propto[-\operatorname{Var}(f(v)|y)]\) and \(f(v_{y})^{T}\mu_{y}\propto[-\delta_{aug}]\). So better generalization correlates with worse positive center similarity. This aligns with the findings before that better downstream performance may come with a lower positive center similarity. Theorem 2.5 explains why as the augmentation becomes stronger, negative center similarity decreases, while positive center similarity remains unpredictable. The InfoNCE loss \(\mathcal{L}_{\mathrm{NCE}}\) can be written as \(\mathcal{L}_{\mathrm{NCE}}=\mathbb{E}_{p(v_{i}^{1},v_{i}^{2})}\mathbb{E}_{p(v_ {i}^{-})}\left[-\log\frac{\exp(f(v_{i}^{1})^{T}f(v_{i}^{2}))}{\sum_{\{v_{i}^{-} \}}\exp(f(v_{i}^{1})^{T}f(v_{i}^{-}))}\right]\), and the numerator \(f(v_{i}^{1})^{T}f(v_{i}^{2})\propto 1-\mathbb{E}_{p(v_{i})}||f(v_{i}^{1})-f(v_{i}^{2})|| \propto 1-\mathbb{E}_{p(v_{i})}||f(v_{i}^{0})-f(v_{i}^{+})||\propto 1-\delta_{aug}\). This implies that a higher \(\delta_{aug}\) caused by stronger augmentation tends to make the numerator harder to maximize. So GCL would pay more attention to the minimize the denominator just as shown in Figure 5. And minimizing the denominator is actually separating negative samples which is mainly performing inter-class separating as most negative samples are from the different classes. Thus inter-class separating is enhanced. In contrast, intra-class gathering is weakened due to the existence of same-class samples in the negative set, while the worse alignment performance and better augmentation overlap can hardly help (Wang et al., 2022b; Saunshi et al., 2022). As a result intra-class nodes may not gather closer. Theorem 2.5 also highlights the significance of augmentation magnitude in graph contrastive learning algorithms like GRACE (Zhu et al., 2020). A weak augmentation leads to better alignment but also a weak generalization, InfoNCE loss might be relatively low but downstream performance could be terrible (Saunshi et al., 2022). When augmentation gets stronger, although perfect alignment cannot be achieved, it promotes better generalization and potentially leads to improved downstream performance. And when the augmentation is too strong, minimizing the InfoNCE loss becomes challenging (Li et al., 2022), leading to poorer downstream performance. Therefore, it is crucial to determine the magnitude of augmentation and how to perform augmentation as it directly affects contrastive performance and generalization. ## 3 Finding better augmentation Previous sections have revealed that perfect alignment, which minimizes the positive pair difference \(\delta_{aug}\) to \(0\) may not help downstream performance. Instead a stronger augmentation that leads to larger \(\delta_{aug}\) will benefit generalization while weakening contrastive learning process. Therefore, we need to find out how to perform augmentation to strike a better balance between positive pair difference and contrastive loss, leading to better downstream performance. ### information theory perspective As shown by Oord et al. (2018), \(\mathcal{L}_{\mathrm{NCE}}\) is actually a lower bound of mutual information. Additionally, \(\mathrm{Var}(f(v^{0})|y)\), \(\mathrm{Var}(f(v^{+}|y))\) and \(\mathrm{Var}(\mu_{y})\) can be represented by inherent properties of the graph and the positive pair difference \(\delta_{aug}\). Thus, Theorem 2.5 could be reformulated as follows: **Corollary 3.1** (CE with Mutual Information).: _If Assumption 2.1 holds, the relationship between downstream performance, mutual information between views and positive pair difference could be represented as:_ \[\hat{\mathcal{L}}_{\mathrm{CE}}\geq\log(K)-I(f(v^{1}),f(v^{2}))-g(\delta_{ aug})-O(M^{-\frac{1}{2}}), \tag{4}\] _where \(I(f(v^{1}),f(v^{2}))\) stands for the mutual information between \(f(v^{1})\) and \(f(v^{2})\), \(g(\delta_{aug})\) is monotonically increasing, and is defined in Appendix A.3._ The proof can be found in Appendix A.3. Corollary 3.1 suggests that the best augmentation would be one that maximize the mutual information and positive pair difference. To verify this, we propose a simple but effective method. We recognize important nodes, features and edges, then leave them unchanged during augmentation to increase mutual information. Then for those unimportant ones, we should perform stronger augmentation to increase the positive pair difference. Similar to Wei et al. (2023), we utilize gradients to identify which feature of node \(v\) is relatively important and carries more information. We calculate the importance of feature by averaging the feature importance across all nodes, the importance of node \(v\) could be calculated by simply averaging the importance of its features, and then use the average of the two endpoints to represent the importance of an edge: \[\alpha_{v,p}=\frac{\partial\mathcal{L}_{\mathrm{NCE}}}{\partial x _{v,p}},\quad\alpha_{p}=\mathrm{ReLU}\left(\frac{1}{|V^{\prime}|}\sum_{v} \alpha_{v,p}\right),\] \[\alpha_{v}=\mathrm{ReLU}\left(\frac{1}{|P^{\prime}|}\sum_{p} \alpha_{v,p}\right),\quad\alpha_{e_{i,j}}=\left(\alpha_{v_{i}}+\alpha_{v_{j}} \right)/2,\] where \(\alpha_{v,p}\) means importance of the \(p^{th}\) feature of node \(v\), \(\alpha_{p}\) means the importance of \(p^{th}\) feature, \(\alpha_{v}\) means importance of node \(v\), and \(\alpha_{e_{i,j}}\) means the importance of edge \((v_{i},v_{j})\). For those edges/features with high importance, we should keep them steady and do no modification during augmentation. For those with relatively low importance, we can freely mask those edges/features, but we should make sure that the number of masked edges/features is greater than the number of kept ones to prevent \(\delta_{aug}\) from decreasing. The process can be described by the following equation: \[\tilde{\mathbf{A}}=\mathbf{A}*(\mathbf{M}_{e}\vee\mathbf{S}_{e}\wedge\mathbf{D}_{e}),\quad\tilde {\mathbf{F}}=\mathbf{F}*(\mathbf{M}_{f}\vee\mathbf{S}_{f}\wedge\mathbf{D}_{f}),\] where \(*\) is hadamard product, \(\vee\) stands for logical OR, \(\wedge\) stands for logical AND. \(\mathbf{M}_{e}\), \(\mathbf{M}_{f}\) represent the random mask matrix, which could be generated using any mask method, \(\mathbf{S}_{e}\), \(\mathbf{S}_{f}\) are the importance based retain matrix, it tells which edge/feature is of high importance and should be retained. For the top \(\mathcal{S}\) important edges/features, we set \(\mathbf{S}_{e}\), \(\mathbf{S}_{f}\) to \(1\) with a probability of 50% and to \(0\) otherwise. \(\mathbf{D}_{e}\), \(\mathbf{D}_{f}\) show those edges/features should be deleted to increase \(\delta_{aug}\), for the least \(2\mathcal{E}\) important edges/features, we also set \(\mathbf{D}_{e}\), \(\mathbf{D}_{f}\) to \(0\) with a probability of 50% and to \(1\) otherwise. It is worth noting that \(\delta_{aug}\) is defined as \(\mathbb{E}_{p(v_{i}^{0},v_{i}^{0})}||f(v_{i}^{+})-f(v_{i}^{0})||\) rather than \(\mathbb{E}_{p(v_{i}^{1},v_{i}^{2})}||f(v_{i}^{1})-f(v_{i}^{2})||\), therefore, we applied this deletion on both views. This is a simple method, and the way to measure importance can be replaced by any other methods. It can be easily integrated into any other graph contrastive learning methods that require edge/feature augmentation. There are many details that could be optimized, such as how to choose which edges/features to delete and the number of deletions. However, since this algorithm is primarily intended for theoretical verification, we just randomly select edges to be deleted and set the number to twice the number of edges kept. In fact, most graph contrastive learning methods follow a similar framework to maximize mutual information and implicitly increase positive pair difference as discussed in Appendix B.1. ### graph spectrum perspective In this section, we attempt to analyze InfoNCE loss and positive pair difference from graph spectrum perspective. We start by representing them using the spectrum of the adjacency matrix \(\mathbf{A}\). **Theorem 3.2** (Theorem 1 of Liu et al. (2022) Restated).: _Given adjacency matrix \(\mathbf{A}\) and the generated augmentation \(\mathbf{A}^{\prime}\), \(\mathbf{A}^{\prime\prime}\), the \(i^{th}\) eigenvalues of \(\mathbf{A}^{\prime}\) and \(\mathbf{A}^{\prime\prime}\) are \(\lambda_{i}^{\prime}\), \(\lambda_{i}^{\prime\prime}\), respectively. The following upper bound is established:_ \[\mathcal{L}_{\mathrm{NCE}}\geq N\log N-(N+1)\sum_{i}\theta_{i}\lambda_{i}^{ \prime}\lambda_{i}^{\prime\prime}, \tag{5}\] _where \(\theta_{i}\) is the adaptive weight of the \(i^{th}\) term, the detail of \(\theta_{i}\) is discussed in Appendix C.4._ **Corollary 3.3** (Spectral Representation of \(\delta_{aug}\)).: _If Assumption 2.1 holds, and \(\lambda_{i}^{\prime},\lambda_{i}^{\prime\prime}\) are \(i^{th}\) eigenvalues of \(\mathbf{A}^{\prime}\) and \(\mathbf{A}^{\prime\prime}\), respectively, then:_ \[2\delta_{aug}\geq\mathbb{E}_{p(v_{i}^{1},v_{i}^{2})}||f(v_{i}^{1})-f(v_{i}^{ 2})||\geq\sqrt{2-\frac{2}{N}\sum_{i}\theta_{i}\lambda_{i}^{\prime}\lambda_{i} ^{\prime\prime}}. \tag{6}\] Theorem 2.5 suggests that we should strive to make \(\mathcal{L}_{\mathrm{NCE}}\) small while increase \(\delta_{aug}\), but they are kindly mutually exclusive. As shown in Theorem 3.2, and Corallary 3.3 proved in Appendix A.4, when \(\theta_{i}\) is positive, a small \(\mathcal{L}_{\mathrm{NCE}}\) requires for large \(|\lambda_{i}|\) while a large \(\delta_{aug}\) requires for small \(|\lambda_{i}|\), and it works exclusively too when \(\theta_{i}\) is negative. As contrastive learning is trained to minimize \(\mathcal{L}_{\mathrm{NCE}}\), \(\theta\)s are going to increase as the training goes, so we can assume that \(\theta\)s will be positive, the detailed discussion and exact definition of \(\theta\) can be found in Appendix C.4. Since \(\theta\)s are trained parameters that we have limited control over, we turn to adjusting \(\lambda\)s through data augmentation. Therefore, to achieve a better trade-off, we should decrease \(|\lambda_{i}|\) while keep InfoNCE loss also decreasing. In fact, reducing \(|\lambda_{i}|\) actually reduces the positive \(\lambda_{i}\) and increases the negative \(\lambda_{i}\), which is trying to smoothen the graph spectrum and narrow the gap between the spectrum. As suggested by Yang et al. (2022), graph convolution operation with an unsmooth spectrum results in signals correlated to the eigenvectors corresponding to larger magnitude eigenvalues and orthogonal to the eigenvectors corresponding to smaller magnitude eigenvalues. So if \(|\lambda_{i}|\gg|\lambda_{j}|\), then \(\sin(f(v),e_{i})\gg\sin(f(v),e_{j})\), where \(e_{i}\) denotes the eigenvector corresponding to \(\lambda_{i}\), causing all representations similar to \(e_{i}\). Therefore, an unsmooth spectrum may lead to similar representations. This can also be observed from Inequality (6), where a higher \(|\lambda_{i}|\) will reduce the positive pair difference, making \(f(v_{i}^{1})\) and \(f(v_{i}^{2})\) more similar. We now know that smoothing the graph spectrum can help with graph contrastive learning. The question is how to appropriately smooth the spectrum. We propose a simple method. As the training aims to minimize \(\mathcal{L}_{\mathrm{NCE}}\), the parameter \(\theta_{i}\)s are supposed to increase. Therefore, we can use \(\theta_{i}\) as a symbol to show whether the model is correctly trained. When \(\theta_{i}\) gradually increases, we can decrease \(\lambda\) as needed. However, when \(\theta_{i}\) starts to decrease, it is likely that the change on the spectrum is too drastic, and we should take a step back. The process could be described as follows: \[\lambda_{i}=\lambda_{i}+\mathrm{direction}_{i}*\lambda_{i}*\alpha,\quad\mathrm{ direction}_{i}=\begin{cases}-1,&\mathrm{cur}(\theta_{i})-\mathrm{pre}(\theta_{i}) \geq\epsilon\\ 1,&\mathrm{cur}(\theta_{i})-\mathrm{pre}(\theta_{i})\leq-\epsilon,\\ 0,&\mathrm{otherwise}\end{cases}\] where \(\alpha\) is a hyperparameter that determines how much we should decrease/increase \(\lambda_{i}\). \(\epsilon\) is used to determine whether \(\theta_{i}\) is increasing, decreasing, or just staying steady. \(\mathrm{cur}(\theta_{i})\) and \(\mathrm{pre}(\theta_{i})\) represents the current and previous \(\theta_{i}\) respectively. In this way, the contrastive training will increase \(\theta_{i}\) and result in a lower \(\mathcal{L}_{\mathrm{NCE}}\), while we justify \(\lambda_{i}\) to achieve a better positive pair difference, which promises a better generalization ability. However, just like Section 3.1, the method is quite simple, in fact this method is more a data preprocessing rather than augmentation, but it is capable of verifying the theory and guide algorithm design, as we decrease \(\lambda^{\prime}\) by directly decreasing \(\lambda\), but it could also be achieved by augmentation methods. Also some spectral augmentations implicitly decreases \(|\lambda|\)s as shown in Appendix B.2. ## 4 Experiments In this section, we mainly evaluate the performance of the methods we proposed on six datasets: Cora, CiteSeer, PubMed, DBLP, Amazon-Photo and Amazon-Computer. We select 3 contrastive learning GNN, GRACE (Zhu et al., 2020), GCA (Zhu et al., 2021), AD-GCL (Suresh et al., 2021), and integrate those models with our proposed methods to verify its applicability and correctness of the theory. Details of datasets and baselines are in Appendix C.1. The results are summarized in Table 1. We further investigate the positive/negative center similarity in Appendix C.6, the hyperparameter sensitivity is studied in Appendix C.7, and the change of \(\theta\) and the spectrum while training is shown in Appendix C.5. From Table 1, we can observe that GRACE+1 (GRACE with information augmentation) and GRACE+S (GRACE with spectrum augmentation) both improve the downstream performance. This improvement is significant for GRACE since it primarily performs random dropout, resulting in the loss of valuable information. But for GCA, the information augmentation only brings minor improvements. This is because GCA already drops the unimportant ones with a higher probability, allowing it to capture sufficient information, especially on large graphs. AD-GCL aggressively drops as much information as possible to eliminate irrelevant information while some important ones are also dropped, so the information augmentation helps greatly. Overall, our methods improve the performance of original algorithm and helps downstream tasks. ### positive pair difference Figure 2 demonstrates that for all three algorithms, our methods maintain similar mutual information while achieving a larger positive pair difference. This indicates that we preserve the contrastive loss \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{2}{c}{Cora} & \multicolumn{2}{c}{CiteSeer} & \multicolumn{2}{c}{PubMed} & \multicolumn{2}{c}{DBLP} & \multicolumn{2}{c}{Amazon-P} & \multicolumn{2}{c}{Amazon-C} \\ \cline{2-11} & Mi-Fi & Ma-Fi & Mi-Fi & Ma-Fi & Mi-Fi & Ma-Fi & Mi-Fi & Ma-Fi & Mi-Fi & Ma-Fi & Ma-Fi \\ \hline GCN & 83.31 & 81.97 & 69.81 & 66.44 & 85.36 & 84.88 & 81.26 & 75.40 & 93.28 & 91.78 & 88.11 & 81.57 \\ GAT & 83.83 & 82.45 & 70.31 & 66.76 & 84.04 & 83.43 & 81.92 & 75.87 & 93.17 & 91.84 & 86.82 & 78.37 \\ \hline GRACE & 82.52 & 81.23 & 68.44 & 63.73 & 84.97 & 84.51 & 84.01 & 79.63 & 91.17 & 89.09 & 86.36 & 84.15 \\ GRACE+1 & **83.33** & **82.23** & **70.47** & 64.83 & 84.99 & 84.57 & 84.39 & 80.24 & 91.13 & 89.11 & **86.61** & **84.77** \\ GRACE+S & 83.25 & 81.85 & 69.87 & **64.92** & **85.03** & **84.62** & **84.47** & **80.33** & **91.91** & **90.09** & 86.61 & 84.66 \\ \hline GCA & 83.74 & 82.28 & 71.09 & 66.43 & 85.38 & **85.07** & 83.99 & 79.82 & 91.67 & 90.21 & 86.77 & 85.18 \\ GCA+1 & **84.71** & **83.42** & **71.24** & **67.23** & **85.38** & 84.89 & 84.92 & 79.91 & 93.94 & **94.00** & 86.60 & 84.12 \\ GCA+S & 83.51 & 82.30 & 70.95 & 65.31 & 85.28 & 84.98 & **84.49** & **80.28** & **92.02** & 90.36 & **86.97** & **85.30** \\ \hline AD GCL & 81.68 & 79.83 & 70.01 & 64.17 & 84.77 & 84.29 & 83.14 & 78.86 & 91.34 & 89.28 & 84.80 & 82.04 \\ AD GCL+1 & **83.06** & 81.20 & 71.06 & **64.69** & **85.52** & **85.00** & **83.51** & 79.05 & **91.91** & **90.24** & **86.62** & **84.12** \\ AD GCL+S & 82.96 & **81.39** & **71.35** & 63.88 & 85.08 & 84.60 & 83.45 & **79.13** & 91.79 & 89.94 & 85.49 & 82.52 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative results on node classification, algorithm+I stands for the algorithm with information augmentation, and algorithm+S stands for the aligirithm with spectrum augmentation while enhancing its generalization, resulting in improved downstream performance. Similar to the result of Table 1, the improvement of \(\delta_{aug}\) on GRACE and AD-GCL are much sharper because GCA has already tried to increase positive pair difference and achieve a balance with the InfoNCE loss as discussed in Appendix B.1. ### over-smooth While reducing \(|\lambda_{i}|\), we obtain a graph with smoother spectrum, and could relieve the over-smooth. This, in turn, enables the application of relatively more complex models. We can verify this by simply stacking more layers. As shown in Figure 3, if applied spectrum augmentation, the model tends to outperform the original algorithm especially with more layer, and the best performance may come with a larger number of layers, which indicates that more complicated models could be applied and our method successfully relieve over-smooth. ## 5 Conclusion In this paper, we investigate the impact of contrastive learning on downstream tasks and propose that perfect alignment does not necessarily lead to improved performance. Instead, we find that a relatively large positive pair difference is more beneficial for generalization. Building upon this insight, we introduce two simple yet effective methods to strike a balance between the contrastive loss and positive pair difference. **Limitations and future work**. In our work, we focus on manipulating positive pair difference through augmentation. However, the model architecture and loss function can also have a significant impact. And when performing spectrum augmentation, large matrix multiplications are needed. It would be more efficient if the augmentation could be represented by simple edge perturbations. Figure 3: Accuracy on downstream tasks with different number of layers. GRACE is the original algorithm (Zhu et al., 2020), and GRACE+S stands for GRACE with spectrum augmentation. Figure 2: Positive pair difference and InfoNCE, GRACE+I stands for GRACE with information augmentation, and GRACE+S stands for GRACE with spectrum augmentation. GRACE+x_MI means mutual information between two views after training, and GRACE+x_\(\delta_{aug}\) is positive pair difference caused by the method.
2304.06990
On a Repulsion-Diffusion Equation with Immigration
We study a repulsion-diffusion equation with immigration, whose asymptotic behaviour is related to stability of long-term dynamics in spatial population models and other branching particle systems. We prove well-posedness and find sharp conditions on the repulsion under which a form of the maximum principle and a strong notion of global boundedness of solutions hold. The critical asymptotic strength of the repulsion is $|x|^{1-d}$, that of the Newtonian potential.
Peter Koepernik
2023-04-14T08:25:27Z
http://arxiv.org/abs/2304.06990v2
# On a repulsion-diffusion equation with immigration ###### Abstract. We study a repulsion-diffusion equation with immigration, whose asymptotic behaviour is related to stability of long-term dynamics in spatial population models and other branching particle systems. We prove well-posedness and find sharp conditions on the repulsion under which a form of the maximum principle and a strong notion of global boundedness of solutions hold. The critical asymptotic strength of the repulsion is \(|x|^{1-d}\), that of the Newtonian potential. Key words and phrases:Repulsion-diffusion, immigration, asymptotics, maximum principle, population dynamics 2020 Mathematics Subject Classification: 35B40, 35B50, 35Q92 \({}^{2}\)The author has benefitted while working on this article from an EPSRC grant EP/W523781/1. BPS with immigration (with or without repulsion or other interactions) appear in many contexts from biology and physics. They can be used to model air showers of particles produced by extraterrestrial cosmic rays entering the atmosphere [9], families of neutrons in subcritical nuclear reactors, which sustain the reaction with a constant stream of neutrons from an outside source [38], or biological populations. Immigration in the biological context could arise from different sources. One might consider a steady flow of individuals from a large, stable population migrating into a new, uncontested habitat. Secondly, a BPS as considered here goes extinct in finite time almost-surely, but conditioned on survival it looks exactly like a BPS with a certain kind of immigration [23]. We go into more detail on this particular example in Appendix A. In many of these examples, it is of interest whether the system has stable long-term dynamics, which in the mean-field limit is reflected in the asymptotic behaviour of \(\rho_{t}\). If we consider first the case where there is no repulsion, then the equation reduces to \(\partial_{t}\rho=\frac{1}{2}\Delta\rho+f\), and the solution (started from zero) is given by \[\rho_{t}=\int_{0}^{t}G_{s}\star f\,\mathrm{d}s,\] where \(G_{s}(x)=(2\pi s)^{-d/2}\mathrm{e}^{-x^{2}/(2s)},\,s>0\), is the heat kernel. Provided that \(f\not\equiv 0\) (see Lemma 3.10): 1. If \(d\leq 2\), then \(\rho_{t}\uparrow\infty\) locally uniformly. 2. If \(d\geq 3\), then \(\rho_{t}\) converges to a bounded stationary distribution. Indeed it is a known fact that critical branching processes in one and two dimensions tend to be unstable in the sense that, after a long time, they have either gone extinct, or they have a lot of mass that concentrates in large "clumps" [24, 30]. We elaborate on this phenomenon in Appendix A. One of the reasons this does not occur in the real world is that individuals tend to migrate away from overcrowded areas, which can be modelled by a pairwise repulsion between individuals. This motivates the question whether solutions to (1.1) remain asymptotically bounded in dimensions \(d\leq 2\) if the repulsion is sufficiently strong. ### Related Work #### SuperBrownian Motion with Immigration A lot of work has been done on superBrownian motion with immigration (and no interaction), which is a measure-valued stochastic process that formally solves (1.1) without the interaction and an additional term \(\sqrt{\rho_{t}}\,\mathrm{d}W\) for a space-time white noise \(W\). It can be obtained from the same BPS whose hydrodynamic limit is given by (1.1), except that particles don't interact and the branching rate is scaled up simultaneously with the particle density. We go into more detail on superBrownian motion in Appendix A. Amongst the known results for this process are central limit theorems [26] and large deviation principles for large [39, 40] and small times [41]. Further results on this and more general measure-valued diffusions with immigration can be found in [32] and references therein. #### Branching Brownian Motion with Interaction Some work has also been done on branching Brownian motion (BBM) with interaction. Classical BBM, without interaction, is a system of particles that move as independent Brownian motions, and branch at constant rates into exactly two offspring. The number of particles grows exponentially and there is no chance of extinction, which makes it quite different from the BPS with critical branching considered here. Questions of interest in this setting are often of extremal type, such as the structure of the process close to the furthest particle from the origin [1, 2], or large time limits of the population's empirical measure if scaled appropriately [19, 20]. Some authors have studied BBM with repulsive or attractive interactions. Englander considered the case where particles have an Ornstein-Uhlenbeck-type attraction or repulsion (that is, \(W(x)=bx^{2}\) for \(b\in\mathbb{R}\)) to or from their common centre of mass [19], and from each other [20]. Similar results have been obtained in the context of supercritical superBrownian motion [25]. This particular interaction often allows for explicit calculations because \(\nabla W(x)\propto x\) is linear. Note also that in light of our motivation, it is not the most natural choice of repulsion, since its strength grows, rather than decays with distance. In another recent paper [5], authors study a BBM in which they introduce short-range pairwise repulsion through a change of measure that penalises the total time that particles spend within close range of each other. They show that the dominant effect of the penalisation is a drastic reduction in branching rate, and that this model is well-approximated by a simplified model in which only branching events are penalised, and there is no repulsion between individuals once they are born. #### Aggregation-Diffusion Equations Equation (1.1) is also related to a well-studied class of non-local, nonlinear partial differential equations known as _aggregation-diffusion equations_, \[\partial_{t}\rho=\frac{1}{2}\Delta\rho+\nabla\cdot(\rho\nabla W\star\rho), \tag{1.2}\] which differ from (1.1) in that there is no immigration, and the interaction potential is attractive at long range rather than repulsive. A review of this class of equations is [10]. Variants of (1.2) also exist with non-linear diffusion [34, 37]; when the diffusion is linear as above, (1.2) is commonly called a _McKean Vlasov equation_. Aggregation-diffusion equations have attracted significant interest in the literature because they describe the large scale dynamics of a wide variety of interacting particle systems arising in biology, physics, social and other life sciences, which are often driven by long-range attraction and short range repulsion. Examples include chemotaxis, bacteria orientation, or motion of human crowds, see [31, 10]. Significant mathematical interest is further due to the delicate competition between aggregation and diffusion, which leads to a dichotomy between well-posedness and finite time blowup [3, 4, 8, 36, 42, 14, 15]. For sufficiently weak interaction, the diffusion dominates and the solution asymptotically simplifies to the solution of the heat equation [11], while a balance between diffusion and aggregation can lead to the existence of non-trivial steady states [7, 12, 13, 29, 33]. Even though (1.1) looks similar to (1.2), its behaviour is markedly different. Where the competition between aggregation and diffusion decides the behaviour of (1.2), that of (1.1) is decided by the competition between the immigration against the diffusion _and_ the repulsion, which both work to spread the immigrated mass. ### Summary of Results We first establish well-posedness of (1.1) under mild regularity assumptions on \(f\) and \(W\). Then we find sharp conditions on \(W\) under which the following global boundedness property holds: \[\exists M>0\colon\left\|\rho_{0}\right\|_{\infty}\leq M\implies\sup_{t\geq 0 }\left\|\rho_{t}\right\|_{\infty}\leq M. \tag{1.3}\] Here \(L^{p}=L^{p}(\mathbb{R}^{d})\) for \(p\in[1,\infty]\) denote the usual \(L^{p}\) spaces. In particular, (1.3) gives sufficient conditions on \(W\) under which \(\sup_{t\geq 0}\left\|\rho_{t}\right\|_{\infty}<\infty\) for any bounded initial condition. Under the same conditions on \(W\), a form of the maximum principle holds: \[\exists M>0\colon\left\|\rho_{0}\right\|_{\infty}\geq M\implies\max_{t\geq 0} \left\|\rho_{t}\right\|_{\infty}=\left\|\rho_{0}\right\|_{\infty}. \tag{1.4}\] Both results follow from a differential inequality of the form \[\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\leq\left\|f\right\|_{\infty} -c\left\|\rho_{t}\right\|_{\infty}^{2},\] for a sharp value \(c=c_{W}\in\mathbb{R}\), which implies (1.3) and (1.4) if \(c_{W}>0\). See Theorems 2.6 to 2.8 for precise statements. It will turn out that the critical strength of the repulsion for \(c_{W}>0\) to hold is \(|\nabla W(x)|\sim|x|^{1-d}\), and \(W\) must be singular at the origin. A natural example that satisfies both assumptions is the Newtonian potential, see (2.3) below, which has the physical interpretation of the electrodynamical repulsive potential in \(\mathbb{R}^{d}\). ### Outline In the following section we will give precise statements of our results, and Section 3 contains the proofs, followed by a short outlook in Section 4. In Appendix A we elaborate on the connection between (1.1) and stability of long-term dynamics in spatial population models, and in Appendix B we give a brief definition of fractional Sobolev spaces, and recall and proof some basic facts about them. ## 2. Results We first consider well-posedness of (1.1). Let \(d\in\mathbb{N}\), and denote by \(\mathcal{W}^{\gamma,p}(\Omega)\) the usual Sobolev spaces on a domain \(\Omega\subset\mathbb{R}^{d}\), where \(p\in[1,\infty]\), and \(\gamma\geq 0\) may not be an integer (if it is, we usually write \(k\) instead of \(\gamma\)). See Appendix B for a brief and [17, 16, 6] for a comprehensive introduction to fractional Sobolev spaces. If \(\Omega=\mathbb{R}^{d}\), we simply write \(\mathcal{W}^{\gamma,p}\). We further write, for \(\gamma\geq 0\), \[\mathcal{X}^{\gamma}\coloneqq\mathcal{W}^{\gamma,1}\cap\mathcal{W}^{\gamma, \infty},\quad\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left| \!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\! \left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\left|\! \left|\!\left|\!\left|\!\left|\!\left|\!\left|\!\!\left|\!\left|\!\!\left| \!\!\left|\!\left|\!\left|\!\!\left|\!\left|\!\!\left|\!\left|\!\!\left| \!\!\left|\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\! \!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\left|\!\! \left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\! \left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\! \!\left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\! \left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\! \left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\! \left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\! \left|\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\! \left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\!\left|\!\!\! \left|\!\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\! \left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\! \left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\! \left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\!\left|\!\!\! \left|\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\left|\!\!\!\!\left|\!\!\!\! \left|\!\!\left|\!\!\!\left 2. Assumption 1 (i) is required to ensure that the mild solution is also a classical solution. A simple sufficient criterion is that \(f\) is compactly supported and Holder continuous (with any positive exponent). For the remaining results, we make the following additional regularity assumptions. Denote by \(g_{+}=g\lor 0\) and \(g_{-}=(-g)\lor 0\) respectively the positive and negative parts of a function \(g\colon\mathbb{R}^{d}\to\mathbb{R}\). If either \(g_{+}\) or \(g_{-}\) is integrable, we write \(\int g\coloneqq\int_{\mathbb{R}^{d}}g(x)\,\mathrm{d}x\in[-\infty,\infty]\). **Assumption 2**.: 1. \(\Delta W\) is bounded on \(\mathbb{R}^{d}\setminus B(0,r)\) for all \(r>0\), and locally integrable. 2. At least one of \((\Delta W)_{-}\) and \((\Delta W)_{+}\) is integrable. In particular, \(\int\Delta W\in[-\infty,\infty]\) exists. As we will see shortly, the fact that \(W\) is eventually decreasing implies \(\int\Delta W<\infty\) and hence that in fact \((\Delta W)_{+}\) must be integrable. Examples for interaction potentials that satisfy Assumptions 1 and 2 include repulsive power laws \[W(x)=-P_{A}(x)\coloneqq-\begin{cases}\frac{|x|^{A}}{A},&A\neq 0,\\ \log|x|,&A=0,\end{cases}\] with \(1\geq A\geq 2-d\) (for Assumption 1 and therefore Theorem 2.1 it is sufficient if \(A>1-d\)), and Morse potentials [18], \[W(x)=-C_{A}\mathrm{e}^{-|x|/\ell_{A}}+C_{R}\mathrm{e}^{-|x|/\ell_{R}}, \tag{2.2}\] with \(C_{A},C_{R},\ell_{A},\ell_{B}>0\), which are repulsive at long range if \(\ell_{R}<\ell_{A}\). In our setting the most important example, which will be a crucial special case of our main results, is the _Newtonian potential_, \[W_{N}(x)=-c_{d}^{-1}P_{2-d}(x)=c_{d}^{-1}\begin{cases}\frac{1}{d-2}|x|^{2-d},& d\neq 2,\\ -\log|x|,&d=2,\end{cases} \tag{2.3}\] where \(c_{d}\) is the surface area of the unit ball in \(\mathbb{R}^{d}\). The Newtonian potential is the Green's function of the Laplace equation in \(\mathbb{R}^{d}\) (that is \(\Delta W_{N}=-\delta_{0}\) in a distributional sense), and as mentioned before, has the physical interpretation of the electrodynamical repulsive potential in \(\mathbb{R}^{d}\). Let us denote the average radial part of \(\nabla W\) at a distance \(R>0\) by \[\left\langle\nabla W\right\rangle(R)\coloneqq\frac{1}{|\partial B(0,R)|}\int \limits_{\partial B(0,R)}\nabla W\cdot\mathrm{d}\widehat{n}, \tag{2.4}\] where \(|\partial B(0,R)|=c_{d}R^{d-1}\) is the surface area of \(B(0,R)\). If \(W\) is radially symmetric, say \(W(x)=g(|x|)\), then \(\left\langle\nabla W\right\rangle(R)=g^{\prime}(R)\). **Lemma 2.3**.: _The following limits exist,_ \[\eta_{W}\coloneqq\lim_{r\to 0}\frac{\left\langle\nabla W\right\rangle(r)}{ \left\langle\nabla W_{N}\right\rangle(r)},\qquad\alpha_{W}\coloneqq\lim_{R\to \infty}\frac{\left\langle\nabla W\right\rangle(R)}{\left\langle\nabla W_{N} \right\rangle(R)}, \tag{2.5}\] _with \(\eta_{W}\in\mathbb{R}\) and \(\alpha_{W}\in[0,\infty]\), and_ \[\alpha_{W}=\eta_{W}-\int(\Delta W). \tag{2.6}\] The quantities \(\eta_{W}\) and \(\alpha_{W}\) compare the strength of the repulsion to that of the Newtonian potential at close and long range, respectively. Note that \(\left\langle\nabla W_{N}\right\rangle(r)=-c_{d}^{-1}r^{1-d}\), so equivalently to (2.5) we could write \[\eta_{W}=-c_{d}\lim_{r\to 0}r^{d-1}\left\langle\nabla W\right\rangle(r),\qquad \alpha_{W}=-c_{d}\lim_{R\to\infty}R^{d-1}\left\langle\nabla W\right\rangle(R).\] As we will see in Lemma 2.11 below, \(\eta_{W}\) also determines the singular behaviour of \(W\) at the origin, in that \(\Delta W\) contains, in a distributional sense, a multiple \(-\eta_{W}\) of the Dirac delta at zero. Note that (2.6) implies the aforementioned fact that \(\int(\Delta W)_{+}<\infty\), because \(\eta_{W}\in\mathbb{R}\) and \(\alpha_{W}\geq 0\). **Definition 2.4**.: Define \[c_{W}\coloneqq\eta_{W}-\int(\Delta W)_{+}. \tag{2.7}\] If \(\alpha_{W}<\infty\) then \(c_{W}=\alpha_{W}-\int(\Delta W)_{-}\) by (2.6), so in fact \[c_{W}=\frac{1}{2}\left(\alpha_{W}+\eta_{W}-\int|\Delta W|\right)\quad\text{ if }\quad\alpha_{W}<\infty. \tag{2.8}\] The critical condition for our main results will be that \(c_{W}>0\). Of the examples given above, the most natural is the Newtonian, for which \(\alpha_{W}=\eta_{W}=c_{W}=1\). Another family of examples is a mixture of repulsive power laws \[W(x)=W_{N}(x)-P_{A}(x),\qquad 1\geq A>2-d,\] which have \(|\nabla W(x)|\sim|x|^{1-d}\) at short and \(|\nabla W(x)|\sim|x|^{A-1}\) at long range, and \(\alpha_{W}=\infty\), \(\eta_{W}=c_{W}=1\). The fact that \(\eta_{W}\) is linear in \(W\) implies that \[c_{W+W^{\prime}}\geq c_{W}+c_{W^{\prime}},\qquad c_{aW}=ac_{W},\] for two potentials \(W\) and \(W^{\prime}\) and \(a>0\). In particular, the property \(c_{W}>0\) is preserved under positive linear combinations. Furthermore, if \(c_{W}>0\) and \(\widehat{W}\) is a perturbative potential, then \(W+\varepsilon\widehat{W}\) retains positive \(c_{W}\) for small \(\varepsilon>0\), in fact as long as \(\varepsilon<c_{W}|c_{\widehat{W}}|^{-1}\). A natural example for a perturbation would be a smooth potential that decays faster than Newtonian (so that \(\eta_{W}=\alpha_{W}=0\)), in which case \(c_{\widehat{W}}=-\frac{1}{2}\int|\Delta\widehat{W}|\) by (2.8). We now state our main results. **Proposition 2.5**.: _If \(c_{W}>0\) then \(T^{\star}=\infty\) for any \(\rho_{0}\in\mathcal{X}_{+}\)._ This is part of Corollary 2.10 below. Denote by \(C_{c+}^{\infty}(\mathbb{R}^{d})\) the set of infinitely differentiable, compactly supported, non-negative functions on \(\mathbb{R}^{d}\). **Theorem 2.6** (Maximum Principle).: _If \(c_{W}>0\) and \(M=\sqrt{\left\|f\right\|_{\infty}/c_{W}}\), then the following holds._ \[\forall\rho_{0}\in\mathcal{X}_{+}\colon\left\|\rho_{0}\right\|_{\infty}\geq M \implies\max_{0\leq t<T^{\star}}\left\|\rho_{t}\right\|_{\infty}=\left\|\rho_ {0}\right\|_{\infty}, \tag{2.9}\] _and if \(\left\|\rho_{0}\right\|_{\infty}>M\), then \(\left\|\rho_{t}\right\|_{\infty}<\left\|\rho_{0}\right\|_{\infty}\) for all \(t\in(0,T^{\star})\). If \(M<\sqrt{\left\|f\right\|_{\infty}/c_{W}}\), or \(c_{W}<0\) and \(M>0\) arbitrary, then (2.9) is false, and a counterexample can be chosen with \(\rho_{0}\in C_{c+}^{\infty}(\mathbb{R}^{d})\)._ **Theorem 2.7** (Global Boundedness).: _If \(c_{W}>0\) and \(M=\sqrt{\left\|f\right\|_{\infty}/c_{W}}\), then the following holds._ \[\forall\rho_{0}\in\mathcal{X}_{+}\colon\left\|\rho_{0}\right\|_{\infty}\leq M \implies\sup_{0\leq t<T^{\star}}\left\|\rho_{t}\right\|_{\infty}\leq M. \tag{2.10}\] _If \(M<\sqrt{\left\|f\right\|_{\infty}/c_{W}}\), or \(c_{W}<0\) and \(M>0\) arbitrary, then (2.10) is false, and a counterexample can be chosen with \(\rho_{0}\in C_{c+}^{\infty}(\mathbb{R}^{d})\)._ Both theorems are consequences of the following result. Write \(\partial_{t}^{+}g(t)\coloneqq\overline{\lim}_{h\downarrow 0}\,\frac{g(t+h)-g(t)}{h}\) for a function \(g\colon[0,T)\to[0,\infty)\) and \(t\in[0,T)\). **Theorem 2.8**.: _For any \(\rho_{0}\in\mathcal{X}_{+}\) and \(t\in[0,T^{\star})\),_ \[\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\leq\left\|f\right\|_{\infty} -c_{W}\left\|\rho_{t}\right\|_{\infty}^{2}. \tag{2.11}\] _For every \(c>c_{W}\), there exists \(\rho_{0}\in C_{c+}^{\infty}(\mathbb{R}^{d})\) such that (2.11) is false at time zero if \(c_{W}\) is replaced by \(c\)._ _Remark 2.9_.: If the immigration \(f\) is time-dependent with sufficient regularity, say \(f\in C([0,\infty),\mathcal{X}^{\gamma_{f}})\), then existence and uniqueness for solutions to (1.1) still hold, and Theorem 2.8 remains true with (2.11) replaced by \[\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\leq\left\|f_{t}\right\|_{ \infty}-c_{W}\left\|\rho_{t}\right\|_{\infty}^{2},\quad t\in[0,T^{\star}).\] In particular, \(T^{\star}=\infty\) and \(\sup_{t\geq 0}\left\|\rho_{t}\right\|_{\infty}<\infty\) for any \(\rho_{0}\in\mathcal{X}_{+}\) as long as \(\sup_{t\geq 0}\left\|f_{t}\right\|_{\infty}<\infty\) and \(c_{W}>0\). **Corollary 2.10**.: _If \(c_{W}>0\) then \(T^{\star}=\infty\) and for all \(t\geq 0\),_ \[\left\|\rho_{t}\right\|_{\infty}\leq M\begin{cases}\coth(Mc_{W}(t+t_{0})),& \left\|\rho_{0}\right\|_{\infty}>M,\\ \tanh(Mc_{W}(t+t_{0})),&\left\|\rho_{0}\right\|_{\infty}<M,\\ 1,&\left\|\rho_{0}\right\|_{\infty}=M,\end{cases} \tag{2.12}\] _where \(M=\sqrt{\left\|f\right\|_{\infty}/c_{W}}\) and \(t_{0}\in\mathbb{R}\) is such that the right-hand side at \(t=0\) is \(\left\|\rho_{0}\right\|_{\infty}\)._ An illustration of the upper bound (2.12) is in Fig. 1. We close by making precise the previously mentioned fact that \(\eta_{W}\) is connected to the singular behaviour of \(W\) at the origin. **Lemma 2.11**.: _There is \(\xi_{W}\in\mathbb{R}^{d}\) such that for any \(g\in\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\) (in particular any \(g\in\mathcal{X}^{\gamma}\) with \(\gamma>1\)),_ \[\nabla\cdot(\nabla W\star g)=(\Delta W)\star g-\eta_{W}g+\xi_{W}\cdot\nabla g.\] _If \(W(x)=W(-x)\) for all \(x\in\mathbb{R}^{d}\setminus\{0\}\), then \(\xi_{W}=0\)._ Figure 1. Evolution of the upper bound in (2.12) for different values of \(\left\|\rho_{0}\right\|_{\infty}\). That is, \(\Delta W\) can be viewed in a distributional sense as a sum of the function \(x\mapsto\Delta W(x)\) on \(\mathbb{R}^{d}\setminus\{0\}\), and \(-\eta_{W}\delta_{0}-\xi_{W}\cdot\nabla\delta_{0}\) where \(\delta_{0}\) denotes the Dirac mass at the origin. ## 3. Proofs We write \(C\) for an unimportant positive constant whose value may change from one appearance to the next. If \(g\colon\mathbb{R}^{d}\to\mathbb{R}^{m}\) for some \(m\in\mathbb{N}\), then \(\left\|g\right\|_{L^{p}}\coloneqq\sum_{i=1}^{m}\left\|g_{i}\right\|_{L^{p}}\), similarly for other norms. Recall (2.1), and note that by Lemma B.2, \(\mathcal{X}^{\gamma}\) with this norm is a Banach space which embeds continuously into \(C^{\lfloor\gamma\rfloor-1,1}(\mathbb{R}^{d})\) if \(\gamma\geq 1\), and into \(C^{\lfloor\gamma\rfloor,\gamma-\lfloor\gamma\rfloor}(\mathbb{R}^{d})\) if \(\gamma\in(0,\infty)\setminus\mathbb{N}\). Here, \(C^{k,\beta}(\mathbb{R}^{d})\) for \(k\in\mathbb{N}_{0}\) and \(\beta\in(0,1]\) is the space of functions \(g\colon\mathbb{R}^{d}\to\mathbb{R}\) which are \(k\) times continuously differentiable and for which \[\left\|g\right\|_{C^{k,\beta}}\coloneqq\sum_{\left|\alpha\right|<k}\left\| \partial^{\alpha}g\right\|_{L^{\infty}}+\sum_{\left|\alpha\right|=k}\sup_{x \neq y}\frac{\left|\partial^{\alpha}g(x)-\partial^{\alpha}g(y)\right|}{\left| x-y\right|^{\beta}}<\infty, \tag{3.1}\] with the usual notational conventions for multi-indices \(\alpha\). ### Existence and Regularity of Solutions In this section, we prove local in time well-posedness and regularity of solutions to (1.1) under Assumption 1. Many of the ideas were inspired by arguments in Section 2 of [11]. We will begin by establishing well-posedness of a weak version of (1.1). More precisely, by Duhamel's formula we can formally rewrite (1.1) with initial condition \(\rho_{0}\in\mathcal{X}\) as an integral equation \[\rho_{t}=G_{t}\star\rho_{0}+\int_{0}^{t}G_{s}\star f\,\mathrm{d}s+\int_{0}^{t }\nabla G_{t-s}\star(\rho_{s}\nabla W\star\rho_{s})\,\mathrm{d}s, \tag{3.2}\] where \((G_{s}\colon\mathbb{R}^{d}\to(0,\infty))_{s>0}\) denotes the heat kernel, and \(F\star H\coloneqq\sum_{i=1}^{d}F_{i}\star H_{i}\) for vector fields \(F\) and \(H\). **Lemma 3.1**.: _If \(\gamma\geq 0\) then there exists \(C>0\) such that for any \(g\in\mathcal{X}^{\gamma}\),_ \[\left\|\nabla W\star g\right\|_{\mathcal{W}^{\gamma,\infty}}\leq C\norm{g}_{ \gamma}.\] Proof.: Denote \(F_{1}\coloneqq\nabla W\mathds{1}_{B(0,r_{0})}\in L^{1}\) and \(F_{2}\coloneqq\nabla W\mathds{1}_{\mathbb{R}^{d}\setminus B(0,r_{0})}\). Then, using the fractional version of Young's convolution inequality (see Theorem A.1 in [11]), \[\left\|\nabla W\star g\right\|_{\mathcal{W}^{\gamma,\infty}} \leq\left\|F_{1}\star g\right\|_{\mathcal{W}^{\gamma,\infty}}+ \left\|F_{2}\star g\right\|_{\mathcal{W}^{\gamma,\infty}}\] \[\leq C(\left\|F_{1}\right\|_{L^{1}}\left\|g\right\|_{\mathcal{W}^ {\gamma,\infty}}+\left\|F_{2}\right\|_{L^{\infty}}\left\|g\right\|_{\mathcal{W }^{\gamma,1}})\] \[\leq C(\left\|F_{1}\right\|_{L^{1}}+\left\|F_{2}\right\|_{L^{ \infty}})\norm{g}_{\gamma}.\] We now prove well-posedness of (3.2) locally in time. A standard fact we will use repeatedly is that, for \(\gamma\geq 0\), \[\left\|\nabla G_{s}\right\|_{\mathcal{W}^{\gamma,1}}\leq\left\|G_{s}\right\|_{ \mathcal{W}^{1+\gamma,1}}\leq Cs^{-(1+\gamma)/2},\quad s>0. \tag{3.3}\] (See, for example, [11, p.5].) **Theorem 3.2** (Local in time well-posedness).: _Given \(\rho_{0}\in\mathcal{X}\), there is a \(T^{\star}\in(0,\infty]\) and a \(\rho\in C([0,T^{\star}),\mathcal{X})\) which solves (3.2) started at \(\rho_{0}\) and such that any solution \(\widetilde{\rho}\in C([0,T),\mathcal{X})\) of (3.2) starting at \(\rho_{0}\) satisfies \(T\leq T^{\star}\) and coincides with \(\rho\) on \([0,T)\). Furthermore, if \(T^{\star}<\infty\) then \(\norm{\rho(t)}\rightarrow\infty\) as \(t\to T^{\star}\). If \(\rho_{0}\), \(f\), and \(W\) are radially symmetric, then so is \(\rho_{t}\) for all \(t\in[0,T^{\star})\)._ Proof.: For fixed \(T>0\) define \(\mathcal{F}\colon C([0,T],\mathcal{X})\to C([0,T],\mathcal{X})\) by the right-hand side (RHS) of (3.2), that is, \[\mathcal{F}[\rho]_{t}=G_{t}\star\rho_{0}+\int_{0}^{t}G_{s}\star f\,\mathrm{d}s+ \int_{0}^{t}\nabla G_{t-s}\star(\rho_{s}\nabla W\star\rho_{s})\,\mathrm{d}s, \quad t\in[0,T].\] We write \(\left|\!\left|\!\left|\rho\right|\!\right|\!\right|\coloneqq\sup_{0\leq s\leq T }\left|\!\left|\!\left|\rho_{s}\right|\!\right|\!\right|\) for \(\rho\in C([0,T],\mathcal{X})\). For \(p\in\{1,\infty\}\), by Young's convolutional inequality and Lemma 3.1, \[\left|\!\left|\mathcal{F}[\rho]_{t}\right|\!\right|_{L^{p}} \leq\left|\!\left|G_{t}\right|\!\right|_{L^{1}}\left|\!\left|\rho _{0}\right|\!\right|_{L^{p}}+\int_{0}^{t}\left|\!\left|G_{s}\right|\!\right|_{ L^{1}}\left|\!\left|f\right|\!\right|_{L^{p}}\mathrm{d}s\] \[\qquad+\int_{0}^{t}\left|\!\left|\nabla G_{t-s}\right|\!\right|_{ L^{1}}\left|\!\left|\rho_{s}\right|\!\right|_{L^{p}}\left|\!\left|\nabla W \star\rho_{s}\right|\!\right|_{L^{\infty}}\mathrm{d}s\] \[\leq\left|\!\left|\!\left|\rho_{0}\right|\!\right|\!\right|+C \left(t+\sqrt{t}\sup_{s\leq t}\left|\!\left|\!\left|\rho_{s}\right|\!\right|^{2 }\right).\] This implies \[\left|\!\left|\!\left|\mathcal{F}[\rho]\right|\!\right|\!\right|\leq\left|\! \left|\!\left|\!\left|\rho_{0}\right|\!\right|\!\right|+C\left(T+\sqrt{T} \right|\!\left|\!\left|\rho\right|\!\right|^{2}\right). \tag{3.4}\] (In particular, \(\mathcal{F}[\rho]_{t}\in\mathcal{X}\) so \(\mathcal{F}\) is well-defined.) By a similar argument, if \(\rho^{1},\rho^{2}\in C([0,T],\mathcal{X})\), then \[\left|\!\left|\mathcal{F}[\rho^{1}]_{t}-\mathcal{F}[\rho^{2}]_{ t}\right|\!\right|_{L^{p}} \leq\int_{0}^{t}\left|\!\left|\nabla G_{t-s}\right|\!\right|_{L^{1 }}\left|\!\left|\rho_{s}^{1}\nabla W\star\rho_{s}^{1}-\rho_{s}^{2}\nabla W \star\rho_{s}^{2}\right|\!\right|_{L^{p}}\mathrm{d}s\] \[=\int_{0}^{t}\left|\!\left|\nabla G_{t-s}\right|\!\right|_{L^{1}} \left|\!\left|\rho_{s}^{1}\nabla W\star(\rho_{s}^{1}-\rho_{s}^{2})+(\rho_{s}^ {1}-\rho_{s}^{2})\nabla W\star\rho_{s}^{2}\right|\!\right|_{L^{p}}\mathrm{d}s\] \[\leq C\int_{0}^{t}(t-s)^{-1/2}\Big{[}\left|\!\left|\rho_{s}^{1} \right|\!\right|_{L^{p}}\left|\!\left|\!\left|\rho_{s}^{1}-\rho_{s}^{2}\right| \!\right|\!\right|+\left|\!\left|\rho_{s}^{1}-\rho_{s}^{2}\right|\!\right|_{L^ {p}}\left|\!\left|\!\left|\rho_{s}^{2}\right|\!\right|\!\right|\Big{]}\mathrm{ d}s\] \[\leq C\sqrt{t}\sup_{s\leq t}\left(\left|\!\left|\!\left|\rho_{s}^{ 1}\right|\!\right|\!\right|+\left|\!\left|\rho_{s}^{2}\right|\!\right|\right) \left|\!\left|\!\left|\rho_{s}^{1}-\rho_{s}^{2}\right|\!\right|\!\right|,\] so \[\left|\!\left|\!\left|\mathcal{F}[\rho^{1}]-\mathcal{F}[\rho^{2}]\right|\! \right|\!\right|\leq c\sqrt{T}\left(\left|\!\left|\!\left|\rho^{1}\right|\! \right|\!\right|+\left|\!\left|\!\left|\rho^{2}\right|\!\right|\!\right|\right) \left|\!\left|\!\left|\rho^{1}-\rho^{2}\right|\!\right|\!\right|. \tag{3.5}\] Put \(\mathcal{Y}=\{g\in\mathcal{X}\colon\left|\!\left|\!\left|g\right|\!\right|\! \right|\leq\left|\!\left|\!\left|\rho_{0}\right|\!\right|\!\right|+1\}\). Then by (3.4) and (3.5), \(T>0\) can be chosen small enough so that \(\mathcal{F}\) maps to itself and is a contraction on the space \(C([0,T],\mathcal{Y})\). This shows that \[T^{\star}\coloneqq\sup\left\{t\geq 0\colon\text{there is a solution in }C([0,T],\mathcal{X})\text{ to }\eqref{eq:C([0,T], \mathcal{X})}\right\}>0.\] Suppose now that \(\rho\) and \(\widetilde{\rho}\) are two different solutions to (3.2) on \([0,t]\) for some \(t\in(0,T^{\star}]\), starting at \(\rho_{0}\). Let \[t_{0}\coloneqq\sup\left\{s\in[0,t]\colon\rho(r)=\widetilde{\rho}(r)\,\forall r \leq s\right\}\in[0,t].\] Suppose that \(t_{0}<t\), let \(t_{1}\coloneqq t_{0}+\delta\in(t_{0},t)\) for some \(\delta\in(0,t-t_{0})\), and define \(\mathcal{F}\colon C([t_{0},t_{1}],\mathcal{X})\to C([t_{0},t_{1}],\mathcal{X})\) by \[\mathcal{F}[u]_{s}=G_{s-t_{0}}\star\rho(t_{0})+\int_{t_{0}}^{s}G_{s-r}\star f\, \mathrm{d}r+\int_{t_{0}}^{s}\nabla G_{s-r}\star(u_{r}\nabla W\star u_{r})\, \mathrm{d}r,\quad s\in[t_{0},t_{1}].\] Put \(K\coloneqq\llbracket\!\!\rrbracket\rho\rrbracket\vee\llbracket\widetilde{\rho} \rrbracket\!\!\rrbracket\). Similarly to before we can show that \(\left|\!\left|\!\left|\mathcal{F}[u]_{s}\right|\!\!\right|\!\right|\leq K+C( \delta+\sqrt{\delta}\!\left|\!\left|\!\left|u\right|\!\right|\!\right|^{2})\) for some \(C>0\) and all \(s\in[t_{0},t_{1}]\), hence for sufficiently small \(\delta>0\), \(\mathcal{F}\) maps to itself and, by an argument identical to that leading to (3.5), is a contraction on \(C([t_{0},t_{1}],\mathcal{Y})\), where \(\mathcal{Y}=\{g\in\mathcal{X}\colon\left|\!\left|\!\left|g\right|\!\right|\! \right|\leq K+1\}\), so it has a unique fixed point. Since the restrictions of both \(\rho\) and \(\widetilde{\rho}\) to \([t_{0},t_{1}]\) are fixed points of \(\mathcal{F}\), we conclude they must coincide on \([t_{0},t_{1}]\), contradicting the definition of \(t_{0}\). We have proved that all solutions must coincide at all times where both are defined, in particular there exists a \(\rho\in C([0,T^{\star}),\left|\!\left|\!\left|\!\left|\!\left|\!\left|\! \right|\!\right|\!\right|\!\right|)\) which solves (3.2), and such that any solution \(\widetilde{\rho}\in C([0,T),\left|\!\left|\!\left|\!\left|\!\left|\!\left|\! \right|\!\right|\!\right|)\) of (3.2) satisfies \(T\leq T^{\star}\) and coincides with \(\rho\) on \([0,T)\). Now suppose that \(T^{\star}<\infty\), in which case we want to show that Assume for contradiction that there is a \(K>0\) and a sequence \(0<T_{n}\uparrow T^{\star}\) such that for all \(n\in\mathbb{N}\). Then we define \(\mathcal{F}_{n}\colon C([T_{n},T_{n}+\delta],\mathcal{X})\to C([T_{n},T_{n}+ \delta],\mathcal{X})\) by \[\mathcal{F}_{n}[u]_{t}=G_{t-T_{n}}\star\rho(T_{n})+\int_{T_{n}}^{ t}G_{t-s}\star f\,\mathrm{d}s\\ +\int_{T_{n}}^{t}\nabla G_{t-s}\star(u_{s}\nabla W\star u_{s})\, \mathrm{d}s,\qquad T_{n}\leq t\leq T_{n}+\delta,\] for some \(\delta>0\). Similarly to before we show for some \(C>0\), and (3.5) with \(T\) replaced by \(\delta\). Now choose \(\delta\) small enough that \(\mathcal{F}_{n}\) (maps to itself and) is a contraction on the space \(C([T_{n},T_{n}+\delta],\mathcal{Y})\) where \(\mathcal{Y}=\{g\in\mathcal{X}\colon g\colon\left|\!\left|\!\left|g\right|\! \right|\!\right|\leq K+1\}\). This choice of \(\delta\) can be made independent of \(n\), so there exists \(n\in\mathbb{N}\) with \(T_{n}+\delta>T^{\star}\), and we can concatenate \(\rho\big{|}_{[0,T_{n}]}\) with the fixed point of \(\mathcal{F}_{n}\) to obtain a solution to (3.2) defined on \([0,T_{n}+\delta]\supsetneq[0,T^{\star}]\), a contradiction. This also implies that, if \(T^{\star}<\infty\), there cannot be a solution defined on \([0,T^{\star}]\). If \(f\), \(W\), and \(\rho_{0}\) are radially symmetric, then \(\mathcal{F}\) preserves radial symmetry, so the fixed point iteration started at the constant in time function \((\rho_{t}\equiv\rho_{0}\colon t\in[0,T))\) is radially symmetric at every step and converges to \(\rho\) uniformly on \(\mathbb{R}^{d}\), so \(\rho\) is radially symmetric. For the remainder of this section, we assume some \(\rho_{0}\in\mathcal{X}_{+}\) to be given, and denote by \(\rho\in C([0,T^{\star}),\mathcal{X})\) the unique solution to (3.2) started at \(\rho_{0}\). **Theorem 3.3** (Regularity).: _For any \(\gamma\in(0,2+\gamma_{f})\),_ \[\rho\in C([0,T^{\star}),\mathcal{X}_{+})\cap C((0,T^{\star}),\mathcal{X}^{ \gamma})\] _In particular, \(\rho\) has bounded \(C^{2,\gamma_{f}}(\mathbb{R}^{d})\)-norm on compact subsets of \((0,T^{\star})\), and solves (1.1) in the classical sense on \((0,T^{\star})\). If \(\left|\!\left|\!\left|\rho_{0}\right|\!\right|\!\right|_{\gamma}<\infty\) (in particular if \(\rho_{0}\equiv 0\)), then \(\rho\in C([0,T^{\star}),\mathcal{X}^{\gamma})\)._ Proof.: Fix \(T\in(0,T^{\star})\). Then \(\sup_{0\leq s\leq T}\left|\!\left|\!\left|\rho_{s}\right|\!\right|\!\right|<\infty\), so it suffices to show that for any \(\delta>0\), \(\gamma\in[\frac{1}{2},2+\gamma_{f})\), \[\sup_{\delta\leq s\leq T}\left|\!\left|\!\left|\rho\right|\!\right|\!\right|_{ \gamma-\frac{1}{2}}<\infty\implies\sup_{2\delta\leq s\leq T}\left|\!\left|\! \left|\rho\right|\!\right|\!\right|_{\gamma}<\infty. \tag{3.6}\] Indeed, by iteration this implies that \(\rho\in C((t_{0},t_{1}),\mathcal{X}^{\gamma})\) for all \(0<t_{0}<t_{1}<T^{\star}\) and hence \(\rho\in C((0,T^{\star}),\mathcal{X}^{\gamma})\), for all \(\gamma\in[0,2+\gamma_{f})\). So let \(\gamma\in[\frac{1}{2},2+\gamma_{f})\), \(\delta>0\), and assume that \(\sup_{\delta\leq s\leq T}\left|\!\left|\!\left|\rho_{s}\right|\!\right|\!\right|_{ \gamma-\frac{1}{2}}<\infty\). Fix \(\varepsilon\in(0,1/2)\), and let \(p\in\{1,\infty\}\) and \(t\in[2\delta,T]\). Then, by Young's fractional convolution inequality and (3.3), (3.7) Now, since \(\gamma-\gamma_{f}<2\), \[t^{-\gamma/2}+\int_{0}^{t}s^{-(\gamma-\gamma_{f})/2}\leq\delta^{-\gamma/2}+ \int_{0}^{T}s^{-(\gamma-\gamma_{f})/2}\,\mathrm{d}s<\infty \tag{3.8}\] is a finite constant, independent of \(t\) given \(\delta,\gamma,T\). By Lemmas 3.1, B.1 and B.3, putting \(\beta\coloneqq\gamma-\frac{1}{2}\), \[\left\|\rho_{s}\nabla W\star\rho_{s}\right\|_{\mathcal{W}^{\beta-s,p}}\leq \left\|\rho_{s}\right\|_{\mathcal{W}^{\beta,p}}\left\|\nabla W\star\rho_{s} \right\|_{\mathcal{W}^{\beta,\infty}}\leq C\left\|\rho_{s}\right\|_{\mathcal{W }^{\beta,p}}\left|\!\left|\!\left|\rho_{s}\right|\!\right|\!\right|_{\beta} \leq C\left|\!\left|\!\left|\rho_{s}\right|\!\right|\!\right|_{\beta}^{2}. \tag{3.9}\] Thus, (3.10) Similarly to (3.9) (but easier), \(\left\|\rho_{s}\nabla W\star\rho_{s}\right\|_{L^{p}}\leq C\left|\!\left|\!\left| \rho_{s}\right|\!\right|\!\right|^{2}\), so \[\int_{0}^{\delta}\left\|\nabla G_{t-s}\right\|_{\mathcal{W}^{\gamma,1}}\left\| \rho_{s}\nabla W\star\rho_{s}\right\|_{L^{p}}\mathrm{d}s\leq C\sup_{0\leq s \leq T}\left|\!\left|\rho_{s}\right|\!\right|^{2}\cdot\int_{\delta}^{T}s^{-(1+ \gamma)/2}\,\mathrm{d}s<\infty,\] which, given \(\delta,T,\gamma\), is a constant independent of \(t\). Combining this and (3.8) and (3.10) with (3.7) gives \[\left\|\rho_{t}\right\|_{\mathcal{W}^{\gamma,p}}\leq C\left(1+\sup_{\delta\leq s \leq T}\left|\!\left|\!\left|\rho_{s}\right|\!\right|\!\right|_{\gamma-\frac{1 }{2}}^{2}\right)<\infty,\] for any \(t\in[2\delta,T]\) and \(p\in\{1,\infty\}\), for \(C>0\) that does not depend on \(t\) given \(\delta\), \(T\), and \(\gamma\). We conclude \[\sup_{2\delta\leq s\leq T}\left|\!\left|\!\left|\rho_{s}\right|\!\right|\! \right|_{\gamma}<\infty\] as claimed. Then recall from Lemma B.2 that \(\mathcal{X}^{2+\gamma_{f}}\hookrightarrow C^{2,\gamma_{f}}(\mathbb{R}^{d})\). If \(\gamma\in[0,2+\gamma_{f})\) and \(\left|\!\left|\!\left|\rho_{0}\right|\!\right|\!\right|_{\gamma}<\infty\), we can bound \(\left|\!\left|G_{t}\star\rho_{0}\right|\!\right|\!\right|_{\gamma}\leq\left| \!\left|G_{t}\right|\!\right|_{L^{1}}\left|\!\left|\!\left|\rho_{0}\right|\! \right|\!\right|_{\gamma}=\left|\!\left|\!\left|\rho_{0}\right|\!\right|\! \right|_{\gamma}<\infty\) which is uniform in \(t\in[0,T^{\star})\), in contrast to the bound \(\left|\!\left|\!\left|G_{t}\star\rho_{0}\right|\!\right|\!\right|\leq Ct^{- \gamma/2}\) that we used in (3.7). This lets us prove (3.6) with \(\delta=0\), giving \(\rho\in C([0,T^{\star}),\mathcal{X}^{\gamma})\). We now show that \(\rho\) solves (1.1) in the classical sense on \((0,T^{\star})\). Clearly \(\rho(0,\cdot)=\rho_{0}\). It suffices to show now that (1.1) is satisfied at a fixed \(t\in(0,T^{\star})\) and \(x\in\mathbb{R}^{d}\). Because of the instant regularisation of \(\rho\), we can assume without loss of generality that already \(\rho_{0}\in\mathcal{X}^{\gamma}\) for all \(\gamma\in[0,2+\gamma_{f})\), so that \(\sup_{0\leq s\leq t}\left|\kern-1.075pt\left|\kern-1.075pt\left|\rho_{s}\right| \kern-1.075pt\right|\kern-1.075pt\right|_{\gamma}<\infty\) by the above, and thus \[\sup_{0\leq s\leq t}\left|\kern-1.075pt\left|\kern-1.075pt\left|\rho_{s} \nabla W\star\rho_{s}\right|\kern-1.075pt\right|\kern-1.075pt\right|_{\gamma}< \infty,\quad\gamma\in[0,2+\gamma_{f}) \tag{3.11}\] by Lemmas 3.1 and B.3. In particular \(\rho_{s}\nabla W\star\rho_{s}\in C^{2}(\mathbb{R}^{d})\) for all \(s\in[0,t]\), so we can rewrite (3.2) as \[\rho(t)=G_{t}\star\rho_{0}+\int_{0}^{t}G_{t-s}\star f\,\mathrm{d}s+\int_{0}^{t }G_{t-s}\star\nabla\cdot(\rho_{s}\nabla W\star\rho_{s})\,\mathrm{d}s.\] Recall that \(G\colon[0,t]\times\mathbb{R}^{d}\to\mathbb{R}\) is smooth with globally bounded derivatives of any order, and \(\partial_{t}G_{t}=\frac{1}{2}\Delta G_{t}\). To show that we can pull the time derivative into the integrals, we establish the following bounds. \[\left|\left(\partial_{t}G_{t-s}\right)\star f\right|\kern-1.075pt \right|_{\infty} =\frac{1}{2}\left|\left(\Delta G_{t-s}\right)\star f\right|\kern-1.0 75pt\right|_{\infty} \leq C\left|\kern-1.075pt\left|G_{t-s}\star f\right|\kern-1.075pt \right|_{\mathcal{W}^{2,\infty}}\] \[\leq C\left|\kern-1.075pt\left|G_{t-s}\right|\kern-1.075pt\right| _{\mathcal{W}^{2-\gamma_{f},1}}\left|\kern-1.075pt\left|f\right|\kern-1.075pt \right|_{\mathcal{W}^{\gamma_{f},\infty}}\] \[\leq C(t-s)^{-(2-\gamma_{f})/2}\left|\kern-1.075pt\left|f\right| \kern-1.075pt\right|\kern-1.075pt\right|_{\gamma_{f}},\] which is integrable over \((0,t)\). Furthermore, \[\left|\left(\partial_{t}G_{t-s}\right)\star\nabla\cdot(\rho_{s} \nabla W\star\rho_{s})\right|\kern-1.075pt\right|_{\infty} =\frac{1}{2}\left|\left(\Delta G_{t-s}\right)\star\nabla\cdot( \rho_{s}\nabla W\star\rho_{s})\right|\kern-1.075pt\right|_{\infty}\] \[\leq C\left|\kern-1.075pt\left|G_{t-s}\star\rho_{s}\nabla W\star \rho_{s}\right|\kern-1.075pt\right|_{\mathcal{W}^{3,\infty}}\] \[\leq C\left|\kern-1.075pt\left|G_{t-s}\right|\kern-1.075pt\right| _{\mathcal{W}^{1,1}}\left|\rho_{s}\nabla W\star\rho_{s}\right|\kern-1.075pt \right|_{\mathcal{W}^{2,\infty}}\] \[\leq C(t-s)^{-1/2}\left(\sup_{0\leq s\leq t}\left|\kern-1.075pt \left|\rho_{s}\right|\kern-1.075pt\right|_{2}^{2}\right),\] where we recalled (3.11) and Lemma 3.1 in the final step, which is also integrable over \((0,t)\). Hence, \(\rho(\cdot,x)\) is differentiable at \(t\) and \[\partial_{t}\rho =(\partial_{t}G_{t})\star\rho_{0}+f+\int_{0}^{t}(\partial_{t}G_{t -s})\star f\,\mathrm{d}s+\nabla\cdot(\rho\nabla W\star\rho)\] \[\qquad\qquad\qquad+\int_{0}^{t}(\partial_{t}G_{t-s})\star\nabla \cdot(\rho_{s}\nabla W\star\rho_{s})\,\mathrm{d}s\] \[=\frac{1}{2}\Delta\rho+\nabla\cdot(\rho\nabla W\star\rho)+f.\] Non-negativity will be proved in the following lemma. **Lemma 3.4**.: _For any \(t\in[0,T^{\star})\), \(\rho_{t}\in\mathcal{X}_{+}\) and \(\int_{\mathbb{R}^{d}}\rho(t,x)\,\mathrm{d}x=\int\rho_{0}\,\mathrm{d}x+t\left( \int f\,\mathrm{d}x\right)\)._ Proof.: The idea is to show that \(\int\rho(t,x)_{-}\,\mathrm{d}x=0\) for all \(t\geq 0\). For that purpose, let \((j_{\varepsilon})_{\varepsilon>0}\) be a family of smooth and convex functions such that \(j_{\varepsilon}(s)=(-s)\lor 0\) on \(\mathbb{R}\setminus[-\varepsilon,0]\), and \(0\leq j_{\varepsilon}^{\prime\prime}\leq 2/\varepsilon\) in \([-\varepsilon,0]\). Then for any \(\varepsilon>0\), \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\int j_{\varepsilon}( \rho(t,x))\,\mathrm{d}x&=\int_{\mathbb{R}^{d}}j_{\varepsilon}^{ \prime}(\rho(t,x))\left(\frac{1}{2}\Delta\rho+\nabla\cdot(\rho\nabla W\star \rho)+f\right)\mathrm{d}x\\ &\leq-\int_{\mathbb{R}^{d}}j_{\varepsilon}^{\prime\prime}(\rho(t) )\nabla\rho\cdot\left(\frac{1}{2}\nabla\rho+\rho\nabla W\star\rho\right) \mathrm{d}x\\ &\leq-\int_{\mathbb{R}^{d}}J_{\varepsilon}(\rho)\nabla\rho\cdot \nabla W\star\rho\,\mathrm{d}x\\ &\eqqcolon H_{\varepsilon}(t),\end{split} \tag{3.12}\] where in the second step we used that \(j_{\varepsilon}^{\prime}\leq 0\) and \(f\geq 0\), and in the third step we used that \(j_{\varepsilon}^{\prime\prime}(\rho)\left|\nabla\rho\right|^{2}\geq 0\) and put \(J_{\varepsilon}(s)\coloneqq j_{\varepsilon}^{\prime\prime}(s)s\). To understand why this was helpful, formally replace \(j_{\varepsilon}(s)\) by \(j(s)\coloneqq(-s)\lor 0\), so that the left-hand side (LHS) turns into \(\frac{\mathrm{d}}{\mathrm{d}t}\int\rho(t,x)_{-}\,\mathrm{d}x\), and the RHS is zero because \(j^{\prime\prime}(x)=\delta(x)\), so \(J(\rho)=\delta(\rho)\rho=0\). More precisely, (3.12) implies \(\int j_{\varepsilon}(\rho(t,x))\,\mathrm{d}x\leq j_{\varepsilon}(\rho_{0}(x)) +\int_{0}^{t}H_{\varepsilon}(s)\,\mathrm{d}s\) for all \(\varepsilon>0\) and \(t\in[0,T^{\star})\). The LHS converges by dominated convergence to \(\int\rho(t)_{-}\,\mathrm{d}x\), and the RHS converges by dominated convergence to zero. Indeed, \(J_{\varepsilon}\to 0\) pointwise on \(\mathbb{R}\setminus\{0\}\) as \(\varepsilon\to 0\), and \(0\leq J_{\varepsilon}(s)\leq 2\) for all \(s\in\mathbb{R}\) and \(\varepsilon>0\) by assumption on \(j_{\varepsilon}\). Hence, \(\int\rho(t)_{-}\,\mathrm{d}x=0\) for all \(t\in[0,T^{\star})\), so by continuity of \(\rho\), \(\rho(t)\geq 0\) for all \(t\in[0,T^{\star})\). For the second claim, \[\begin{split}\int\rho(t)\,\mathrm{d}x&=\int_{ \mathbb{R}^{d}}\int_{\mathbb{R}^{d}}G_{t}(x-y)\rho_{0}(y)\,\mathrm{d}y\,\mathrm{ d}x+\int_{0}^{t}\int_{\mathbb{R}^{d}}G_{s}(x-y)f(y)\,\mathrm{d}y\,\mathrm{d}x\, \mathrm{d}s\\ &\qquad\qquad+\int_{0}^{t}\int\int(\nabla G_{t-s})(x-y)(\rho_{s} \nabla W\star\rho_{s})(y)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}s\\ &=\int\rho_{0}(x)\,\mathrm{d}x+t\left(\int_{\mathbb{R}^{d}}f(x)\, \mathrm{d}x\right),\end{split}\] where we used that \(\int G_{s}\,\mathrm{d}x=1\) and \(\int\nabla G_{s}\,\mathrm{d}x=0\) for all \(s>0\). ### Maximum Principle and Global Boundedness In this section, we work with Assumptions 1 and 2, and the main goal is to prove Theorem 2.8. The idea is the following. Suppose \(t\in[0,T^{\star})\), and \(\rho_{t}\) has a global maximum at \(x_{0}\in\mathbb{R}^{d}\). Then \(\Delta\rho(x_{0})\leq 0\), \(\nabla\rho(x_{0})=0\), and as a consequence of Lemma 2.11, \(\nabla\cdot(\nabla W\star\rho)(x_{0})\leq-c_{W}g(x_{0})\), so \[\begin{split}\partial_{t}\rho_{t}(x_{0})\leq f(x_{0})+\rho_{t}(x _{0})\nabla\cdot(\nabla W\star\rho_{t})(x_{0})&\leq\left\|f \right\|_{\infty}-c_{W}\rho_{t}(x_{0})^{2}\\ &=\left\|f\right\|_{\infty}-c_{W}\left\|\rho_{t}\right\|_{\infty }^{2}.\end{split}\] This turns into (2.11) (see Lemma 3.8), from which (2.12) is a straightforward consequence. We begin by proving Lemmas 2.3 and 2.11. Using the same argument as in the proof of Lemma 3.1, we can show that Assumption 2 implies \[\left\|\Delta W\star g\right\|_{\mathcal{W}^{\gamma,\infty}}\leq C\left\|g \right\|\!\right\|_{\gamma} \tag{3.13}\] for any \(\gamma\geq 0\) and \(g\in\mathcal{X}^{\gamma}\) (where \(C>0\) depends on \(\gamma\) but not \(g\)). We will first prove the following version of Lemma 2.11. **Lemma 3.5**.: _There exist \(\eta_{W}^{\prime}\in\mathbb{R}\) and \(\xi_{W}\in\mathbb{R}^{d}\) such that for any \(g\in\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\) (in particular any \(g\in\mathcal{X}^{\gamma}\) with \(\gamma>1\)),_ \[\nabla\cdot(\nabla W\star g)=(\Delta W)\star g-\eta_{W}^{\prime}g+\xi_{W}\cdot \nabla g.\] _If \(W(x)=W(-x)\) for all \(x\in\mathbb{R}^{d}\setminus\{0\}\), then \(\xi_{W}=0\)._ Together with Lemma 2.3 we will then prove that \(\eta_{W}=\eta_{W}^{\prime}\). Proof of Lemma 3.5.: We prove the claim at a fixed but arbitrary \(x_{0}\in\mathbb{R}^{d}\), and assume without loss of generality that \(x_{0}=0\). Recall that \(\nabla W\star g\in\mathcal{W}^{1,\infty}\) for any \(g\in\mathcal{X}^{1}\) by Lemma 3.1, so it is bounded and globally Lipschitz continuous. This remains true if we replace \(g\) by \(g\varphi_{\varepsilon}\) for a smooth radially symmetric bump function \(\varphi_{\varepsilon}\) with \(\mathds{1}_{B(0,\varepsilon)}\leq\varphi_{\varepsilon}\leq\mathds{1}_{B(0,2 \varepsilon)}\). Put \(\psi_{\varepsilon}\coloneqq 1-\varphi_{\varepsilon}\). Then, \[\nabla\cdot(\nabla W\star g)=\nabla\cdot(\nabla W\star g\psi_{\varepsilon})+ \nabla\cdot(\nabla W\star g\varphi_{\varepsilon}).\] Now, \[\nabla\cdot(\nabla W\star g\psi_{\varepsilon})(0)=\nabla\cdot\left(\int\nabla W (\cdot-y)g(y)\psi_{\varepsilon}(y)\,\mathrm{d}y\right)(0)=(\Delta W\star g \psi_{\varepsilon})(0),\] because \(\Delta W\) is bounded away from zero. More precisely, as long as \(x\in B(0,\varepsilon/2)\), say, then \(|x-y|\geq|y|-|x|\geq\varepsilon/2\) for any \(y\in\mathbb{R}^{d}\) with \(\psi_{\varepsilon}(y)>0\), and \(\sup_{B(0,\varepsilon/2)^{c}}\Delta W<\infty\) by Assumption 2. By (3.13), \(\Delta W\star g\) is well-defined, hence by dominated convergence, \((\Delta W\star g\psi_{\varepsilon})(0)\to(\Delta W\star g)(0)\) as \(\varepsilon\to 0\). Thus, \[\mathcal{F}[g]\coloneqq\lim_{\varepsilon\to 0}\nabla\cdot(\nabla W\star g \varphi_{\varepsilon})(0)=\nabla\cdot(\nabla W\star g)(0)-(\Delta W\star g)(0)\] exists and is finite for any \(g\in\mathcal{X}^{1}\). Inspecting the RHS above we see that \[\mathcal{F}\text{ is linear and }\left|\mathcal{F}[g]\right|\leq C\mathopen{| \mskip-1.5mu |\mskip-1.5mu |}g\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{1}.\] Indeed, \(|\mathcal{F}[g]|\leq\|\nabla\cdot(\nabla W\star g)\|_{L^{\infty}}+\|\Delta W \star g\|_{L^{\infty}}\), and by Lemma 3.1, and \(\|\Delta W\star g\|_{L^{\infty}}\leq C\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}g \mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{1}\) by (3.13). Furthermore, from the definition it is immediate that \(\mathcal{F}[g]\) depends on \(g\) only locally around \(0\), in the sense that \(\mathcal{F}[g_{1}]=\mathcal{F}[g_{2}]\) if \(g_{1},g_{2}\in\mathcal{X}^{1}\) coincide in a neighbourhood of \(0\). Now suppose that \(g\in\mathcal{X}^{1}\) with \(g(0)=0\). Let \(\varepsilon\in(0,1)\), then by Lemma 3.6 below there is \(\widetilde{g}\in\mathcal{X}^{1}\) such that \(\widetilde{g}=g\) on \(B(0,\varepsilon)\), \(\widetilde{g}\equiv 0\) on \(\mathbb{R}^{d}\setminus B(0,2\varepsilon)\) and \(\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}\widetilde{g}\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{1}\leq C\, \mathopen{|\mskip-1.5mu |\mskip-1.5mu |}\widetilde{g}\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{\mathcal{W}^{1,\infty}} \leq C\,\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}g\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{\mathcal{W}^{1,\infty}} (B(0,\varepsilon))\) with constants independent of \(g\) and \(\varepsilon\) (the first inequality holds because \(\widetilde{\rho}\) is supported in \(B(0,\varepsilon)\subset B(0,1)\)). This implies that \(|\mathcal{F}[g]|=|\mathcal{F}[\widetilde{g}]|\leq C\mathopen{|\mskip-1.5mu | \mskip-1.5mu |}\widetilde{g}\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{1}\leq C\, \mathopen{|\mskip-1.5mu |\mskip-1.5mu |}g\mathopen{|\mskip-1.5mu |\mskip-1.5mu |}_{\mathcal{W}^{1,\infty}} (B(0,\varepsilon))\). Letting \(\varepsilon\to 0\) implies \[\forall g\in\mathcal{X}^{1},g(0)=0\colon\,|\mathcal{F}[g]|\leq C\varliminf_{ \varepsilon\to 0}\|g\|_{\mathcal{W}^{1,\infty}}(B(0,\varepsilon))\,.\] In particular, if \(g\in\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\) and \(g(0)=\nabla g(0)=0\) then \(\mathcal{F}[g]=0\). By linearity, there must be \(\eta_{W}^{\prime}\in\mathbb{R}\) and \(\xi_{W}\in\mathbb{R}^{d}\) with \[\mathcal{F}[g]=-\eta_{W}^{\prime}g(0)+\xi+W\cdot\nabla g(0)\] for all \(g\in\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\). Now suppose that \(W(x)=W(-x)\) for all \(x\in\mathbb{R}^{d}\). To show that \(\xi_{W}=0\) it suffices to prove that whenever \(g\in\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\) with \(g(0)=0\), then \(\mathcal{F}[g]=0\). Since we already know that \(\mathcal{F}[g]\) does not change when we replace \(g\) by a function in \(\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\) that has the same value and gradient at \(0\), we may assume that \(g(x)=\beta\cdot x\) for all \(x\in B(0,1)\), where \(\beta=\nabla g(0)\). Then, \[\nabla\cdot(\nabla W\star g\varphi_{\varepsilon})(0)=\beta\cdot( \nabla W\star\varphi_{\varepsilon})(0)+(\nabla W\star g\nabla\varphi_{ \varepsilon})(0)\\ =\beta\cdot\int\nabla W(x)\varphi_{\varepsilon}(x)\,\mathrm{d}x +\beta\cdot\int x(\nabla W(x)\cdot\nabla\varphi_{\varepsilon}(x))\,\mathrm{d}x =0,\] because both integrands are antisymmetric, hence \(\mathcal{F}[g]=0\). **Lemma 3.6**.: _There is \(C>0\) such that for any \(\rho\in\mathcal{W}^{1,\infty}\) with \(\rho(0)=0\) and any \(\varepsilon>0\), there exists \(\widetilde{\rho}\in\mathcal{W}^{1,\infty}\) with \(\widetilde{\rho}=\rho\) on \(B(0,\varepsilon)\), \(\widetilde{\rho}\equiv 0\) on \(\mathbb{R}^{d}\setminus B(0,2\varepsilon)\), and \(\|\widetilde{\rho}\|_{\mathcal{W}^{1,\infty}}\leq C\left\|\rho\right\|_{ \mathcal{W}^{1,\infty}(B(0,\varepsilon))}\)._ Proof.: Define \(\widetilde{\rho}\) in radial coordinates by \[\widetilde{\rho}(r,\varphi)\coloneqq\begin{cases}\rho(r,\varphi),&r< \varepsilon,\\ \rho(2\varepsilon-r,\varphi),&\varepsilon\leq r<2\varepsilon,\\ 0,&2\varepsilon\leq r,\end{cases}\] where \(\varphi\) stands collectively for all \(d-1\) angular variables. Then \(\widetilde{\rho}\) is continuous, \(\|\widetilde{\rho}\|_{L^{\infty}}\leq\|\rho\|_{L^{\infty}(B(0,\varepsilon))}\) and \(\|\widetilde{\rho}\|_{\mathrm{Lip}}\leq\|\rho\|_{\mathrm{Lip}(B(0,\varepsilon))}\), where \[\|h\|_{\mathrm{Lip}(\Omega)}\coloneqq\sup_{\begin{subarray}{c}x,y\in\Omega\\ x\neq y\end{subarray}}\frac{|h(x)-h(y)|}{|x-y|}\] for \(h\colon\mathbb{R}^{d}\to\mathbb{R}\) and \(\Omega\subset\mathbb{R}^{d}\). Then recall the well-known result that \(\mathcal{W}^{1,\infty}=C^{0,1}(\mathbb{R}^{d})\), the space of bounded Lipschitz functions. Proof of Lemma 2.3.: Let \((g_{\varepsilon})_{\varepsilon>0}\) be a smooth approximation to unity with \(g_{\varepsilon}\) supported in \(B(0,\varepsilon)\) for all \(\varepsilon>0\). Then by Lemma 3.5 and (2.4), and recalling that \(c_{d}>0\) denotes the surface area of the unit ball in \(\mathbb{R}^{d}\), \[\begin{split}\int\limits_{\partial B(0,R)}(\nabla W\star g_{ \varepsilon})\cdot\mathrm{d}\widehat{n}&=\int_{B(0,R)}\nabla \cdot(\nabla W\star g_{\varepsilon})(x)\,\mathrm{d}x\\ &=-\eta^{\prime}_{W}\int_{B(0,R)}g_{\varepsilon}(x)\,\mathrm{d}x +\xi_{W}\cdot\int_{B(0,R)}\nabla g_{\varepsilon}(x)\,\mathrm{d}x\\ &\qquad\qquad+\int_{B(0,R)}(\Delta W\star g_{\varepsilon})(x)\, \mathrm{d}x\\ &=-\eta^{\prime}_{W}+\int_{B(0,R)}(\Delta W\star g_{\varepsilon}) (x)\,\mathrm{d}x,\end{split} \tag{3.14}\] where we used that \(\int_{B(0,R)}\nabla g_{\varepsilon}(x)\,\mathrm{d}x=0\) by integration by parts. We want to take \(\varepsilon\to 0\) for fixed \(R>0\). On the LHS we obtain \[\int\limits_{\partial B(0,R)}(\nabla W\star g)\cdot\mathrm{d}\widehat{n} \longrightarrow\int\limits_{\partial B(0,R)}\nabla W\cdot\mathrm{d}\widehat{ n}=c_{d}R^{d-1}\left\langle\nabla W\right\rangle(R),\] by dominated convergence, which is applicable because \(\nabla W\) is bounded away from the origin by Assumption 1. For the RHS, we cannot directly apply dominated or monotone convergence without further assumptions on \(\Delta W\). Instead, we write \(\Delta W\star g_{\varepsilon}=(\Delta W)_{+}\star g_{\varepsilon}-(\Delta W)_{-} \star g_{\varepsilon}\), and \[\begin{split}\int_{B(0,R)}[(\Delta W)_{+}\star g_{\varepsilon}](x )\,\mathrm{d}x&=\int_{B(0,R)}\int_{B(0,\varepsilon)}(\Delta W)_{+ }(x-y)g_{\varepsilon}(y)\,\mathrm{d}y\,\mathrm{d}x\\ &=\int_{B(0,\varepsilon)}g_{\varepsilon}(y)\left(\int_{B(y,R)}( \Delta W)_{+}(x)\,\mathrm{d}x\right)\mathrm{d}y.\end{split} \tag{3.15}\] Now, for any \(y\in B(0,\varepsilon)\), if say \(\varepsilon<R/2\), \[\begin{split}\left|\int_{B(0,R)}\Delta W(x)_{+}\,\mathrm{d}x- \int_{B(y,R)}\Delta W(x)_{+}\,\mathrm{d}x\right|&\leq\int\limits _{B(0,R)\Delta B(y,R)}|\Delta W(x)|\,\mathrm{d}x\\ &\leq C\varepsilon R^{d-1}\sup_{|z|\geq R/2}|\Delta W(z)|,\end{split}\] where \(A\Delta B=(A\setminus B)\cup(B\setminus A)\) for \(A,B\subset\mathbb{R}^{d}\). This goes to zero as \(\varepsilon\to 0\), so continuing in (3.15), \[\begin{split}\int_{B(0,\varepsilon)}g_{\varepsilon}(y)& \left(\int_{B(y,R)}(\Delta W)_{+}(x)\,\mathrm{d}x\right)\mathrm{d}y\\ &=\int_{\mathbb{R}^{d}}g_{\varepsilon}(y)\left(\int_{B(0,R)}( \Delta W)_{+}(x)\,\mathrm{d}x+O(\varepsilon)\right)\mathrm{d}y\\ &\longrightarrow\int_{B(0,R)}(\Delta W)_{+}(x)\,\mathrm{d}x, \end{split}\] as \(\varepsilon\to 0\). We can proceed similarly with \((\Delta W)_{-}\) and combine the results to obtain \[\int_{B(0,R)}(\Delta W\star g_{\varepsilon})(x)\,\mathrm{d}x\longrightarrow \int_{B(0,R)}\Delta W(x)\,\mathrm{d}x\] as \(\varepsilon\to 0\). We have now shown that letting \(\varepsilon\to 0\) in (3.14) yields \[-\eta^{\prime}_{W}+\int_{B(0,R)}\Delta W(x)\,\mathrm{d}x=c_{d}R^{d-1}\left\langle \nabla W\right\rangle(R)=-\frac{\left\langle\nabla W\right\rangle(R)}{\left \langle\nabla W_{N}\right\rangle(R)},\] where we recalled \(\left\langle\nabla W_{N}\right\rangle(R)=-c_{d}^{-1}R^{1-d}\). If we let \(R\to 0\), then the LHS tends to \(-\eta^{\prime}_{W}\) because \(\Delta W\) is locally integrable. Thus the limit on the RHS exists, so \(\eta_{W}\) in (2.5) is well-defined and equals \(\eta^{\prime}_{W}\). If we let \(R\to\infty\), then the LHS tends to \(-\eta_{W}+\int\Delta W\) by Assumption 2 (ii), which implies that \(\alpha_{W}\) in (2.5) is well-defined and that (2.6) holds. **Lemma 3.7**.: _If \(g\in\mathcal{X}^{1}\cap C^{1}(\mathbb{R}^{d})\) is non-negative and has a global maximum at \(x_{0}\in\mathbb{R}^{d}\), then_ \[\nabla\cdot(\nabla W\star g)(x_{0})\leq-c_{W}g(x_{0}).\] _For any \(c>c_{W}\) and \(x_{0}\in\mathbb{R}^{d}\), there exists \(g\in C^{\infty}_{c+}(\mathbb{R}^{d})\) such that \(g\) has a global maximum of any given height at \(x_{0}\), \(\Delta g(x_{0})=0\), and_ \[\nabla\cdot(\nabla W\star g)(x_{0})>-cg(x_{0}).\] Proof of Lemma 3.7.: By Lemma 2.11, and because \(\nabla g(x_{0})=0\), \[\nabla\cdot(\nabla W\star g)(x_{0}) =(\Delta W\star g)(x_{0})-\eta_{W}g(x_{0})+\xi_{W}\cdot\nabla g(x_{ 0})\] \[\leq\left\|g\right\|_{\infty}\left(\int(\Delta W)_{+}\right)- \left(c_{W}+\int(\Delta W)_{+}\right)g(x_{0})\] \[=-c_{W}g(x_{0}).\] Now let \(c<c_{W}\), \(x_{0}\in\mathbb{R}^{d}\) and \(M>0\). We will construct a function \(g\in C^{\infty}_{c+}(\mathbb{R}^{d})\) such that \(g\) has a global maximum of height \(M\) at \(x_{0}\) and \(\nabla\cdot(\nabla W\star g)(x_{0})>-cg(x_{0})\). Because this inequality is linear in \(g\), we may assume \(M=1\). Put \(U\coloneqq\left\{x\in\mathbb{R}^{d}\setminus\{0\}:\Delta W(x)>0\right\}\), which is an open, possibly empty subset of \(\mathbb{R}^{d}\setminus\{0\}\). For \(\varepsilon>0\) let \[U_{\varepsilon}\coloneqq\left\{x\in U\colon d(x,U^{c})>\varepsilon\right\},\] which is also open and \(\bigcup_{\varepsilon>0}U_{\varepsilon}=U\) (note that we shrink, not grow \(U\) by \(\varepsilon\)). Put \(h_{\varepsilon}\coloneqq\varphi_{\varepsilon/2}\star\mathds{1}_{U_{\varepsilon }\cap B(0,\varepsilon^{-1})}\) where \(\varphi\in C^{\infty}_{c+}(B(0,1))\) with \(\int\varphi=1\) and \(\varphi_{\varepsilon}(x)\coloneqq\varepsilon^{-d}\varphi(\varepsilon^{-1}x)\). Then \(h_{\varepsilon}\in C^{\infty}_{c+}(\mathbb{R}^{d})\) and \[\mathds{1}_{U_{2\varepsilon}\cap B(0,(2\varepsilon)^{-1})}\leq h_{ \varepsilon}\leq\mathds{1}_{U_{\varepsilon/2}}. \tag{3.16}\] In particular \(0\leq h_{\varepsilon}\uparrow\mathds{1}_{U}\) as \(\varepsilon\to 0\), and, since \(h_{\varepsilon}\) is supported on \(U\) for all \(\varepsilon>0\), and by monotone convergence, \[\int_{\mathbb{R}^{d}}\Delta W(x)h_{\varepsilon}(x)\,\mathrm{d}x=\int_{ \mathbb{R}^{d}}(\Delta W)_{+}(x)h_{\varepsilon}(x)\,\mathrm{d}x\uparrow\int( \Delta W)_{+},\quad\varepsilon\to 0. \tag{3.17}\] Furthermore, by (3.16), \(h_{\varepsilon}\equiv 0\) on \(B(0,\varepsilon/2)\), in particular \(h_{\varepsilon}(0)=\Delta h_{\varepsilon}(0)=0\). Now let \(f_{\varepsilon}\in C^{\infty}_{c+}(B(x_{0},\varepsilon/2))\) with \(f_{\varepsilon}(x_{0})=1\), \(\Delta f_{\varepsilon}(x_{0})=0\), \(0\leq f_{\varepsilon}\leq 1\). In particular \(f_{\varepsilon}\to 0\) a.e. as \(\varepsilon\to 0\), and by dominated convergence (recall that \(\Delta W\) is locally integrable by Assumption 2), \[(\Delta W\star f_{\varepsilon})(x)\to 0,\quad\varepsilon\to 0, \tag{3.18}\] for any \(x\in\mathbb{R}^{d}\). Now put \[g_{\varepsilon}\coloneqq h_{\varepsilon}(x_{0}-\cdot)+f_{\varepsilon}\in C^{ \infty}_{c+}(\mathbb{R}^{d}),\qquad\varepsilon>0.\] Recall that \(h_{\varepsilon}(x_{0}-\cdot)\) is supported in \(B(x_{0},\varepsilon/2)^{c}\) and \(f_{\varepsilon}\) is supported in \(B(x_{0},\varepsilon/2)\), and both are upper bounded by \(1\), so \(0\leq g_{\varepsilon}\leq 1\). Furthermore, \[g_{\varepsilon}(x_{0})=h_{\varepsilon}(0)+f_{\varepsilon}(x_{0})=1,\qquad \Delta g_{\varepsilon}(x_{0})=\Delta h_{\varepsilon}(0)+\Delta f_{\varepsilon} (x_{0})=0,\] so \(g_{\varepsilon}\) attains a global maximum at \(x_{0}\), and by (3.17) and (3.18), \[(\Delta W\star g_{\varepsilon})(x_{0})=(\Delta W\star h_{\varepsilon})(0)+( \Delta W\star f_{\varepsilon})(x_{0})\to\int(\Delta W)_{+},\quad\varepsilon \to 0.\] Now choose \(\varepsilon>0\) so small that \((\Delta W\star g_{\varepsilon})(x_{0})>\int(\Delta W)_{+}-(c-c_{W})\), then \[\nabla\cdot(\nabla W\star g_{\varepsilon})=(\Delta W\star g_{ \varepsilon})(x_{0})-\eta_{W}g_{\varepsilon}(x_{0}) =(\Delta W\star g_{\varepsilon})(x_{0})-\int(\Delta W)_{+}-c_{W}\] \[>-c\] \[=-cg_{\varepsilon}(x_{0}),\] where we recalled \(\eta_{W}=c_{W}+\int(\Delta W)_{+}\) from (2.7). Now if \(t>0\) and \(\rho_{t}\) has a global maximum at some \(x_{0}\in\mathbb{R}^{d}\), then \(\Delta\rho_{t}(x)\leq 0\), so by Lemma 3.7, \[\partial_{t}\rho(t,x_{0})\leq\left\|f\right\|_{\infty}-c_{W}\left\|\rho_{t} \right\|_{\infty}^{2}. \tag{3.19}\] To conclude Theorem 2.8, we need the following lemmas. **Lemma 3.8**.: _Suppose \(T>0\) and \(g\in C([0,T],\mathcal{X}_{+})\) such that \(g(t,\cdot)\) is Lipschitz continuous for all \(t\in(0,T]\), and \(g\) is differentiable in time on \((0,T]\) with \(\partial_{t}g\colon(0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) jointly continuous. Suppose further that there is a continuous function \(C\colon[0,T]\to[0,\infty)\) such that for all \(t\in(0,T]\), if \(g_{t}\) has a global maximum at \(x\),_ \[\partial_{t}g(t,x)\leq C(t).\] _Then for all \(t\in[0,T)\),_ \[\partial_{t}^{+}\left\|g_{t}\right\|_{\infty}\leq C(t).\] **Lemma 3.9**.: _For any \(T\in(0,T^{\star})\), \(\rho\colon[0,T]\times\mathbb{R}^{d}\to[0,\infty)\) satisfies the assumptions of Lemma 3.8._ We can now prove Theorem 2.8. Proof of Theorem 2.8 and Corollary 2.10.: Eq. (2.11) follows from (3.19) and Lemmas 3.8 and 3.9. Now let \(M\coloneqq\sqrt{\left\|f\right\|_{\infty}/c_{W}}\). Then the unique solution to the ODE \[\begin{cases}G^{\prime}(t)=\left\|f\right\|_{\infty}-c_{W}G(t)^{2},\quad t\geq 0,\\ G(0)=\left\|\rho_{0}\right\|_{\infty},\end{cases}\] is given by \[G(t)=M\begin{cases}\coth\left(c_{W}M(t+t_{0})\right),&G(0)>M,\\ \tanh\left(c_{W}M(t+t_{0})\right),&G(0)<M,\\ 1,&G(0)=M,\end{cases}\] where \(t_{0}\in\mathbb{R}\) is such that \(G(0)=\left\|\rho_{0}\right\|_{\infty}\). Then (2.11) implies \(\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\leq G^{\prime}(t)\) for all \(t\in[0,T^{\star})\) and thus \(\left\|\rho_{t}\right\|_{\infty}\leq G(t)\) for all \(t\in[0,T^{\star})\). It remains to prove sharpness in Theorem 2.8. Let \(c>c_{W}\) and \(h>0\), put \(c^{\prime}\coloneqq(c+c_{W})/2>c_{W}\), and choose \(x_{0}\in\mathbb{R}^{d}\) with \(f(x_{0})\geq\left\|f\right\|_{\infty}-(c-c_{W})h^{2}/2\). Then by Lemma 3.7, there is \(\rho_{0}\in C_{c}^{\infty}(\mathbb{R}^{d})\) such that \(\rho_{0}\) has a global maximum of height \(h\) at \(x_{0}\), \(\Delta\rho_{0}(x_{0})=0\), \(\rho_{0}(x_{0})=\left\|\rho_{0}\right\|_{\infty}=h\), and \(\nabla\cdot(\nabla W\star\rho_{0})>-c^{\prime}\rho_{0}(x_{0})=-c^{\prime}h\). Then, since \(\nabla\rho(x_{0})=0\), \[\partial_{t}\rho(t,x_{0})\Big{|}_{t=0} =\rho(x_{0})\nabla\cdot(\nabla W\star\rho_{0})(x_{0})+f(x_{0})\] \[>-c^{\prime}h^{2}+\left\|f\right\|_{\infty}-(c-c_{W})h^{2}/2\] \[=\left\|f\right\|_{\infty}-c\left\|\rho_{0}\right\|_{\infty}^{2}.\] In particular, \[\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\Big{|}_{t=0}= \overline{\lim_{t\downarrow 0}\frac{\left\|\rho_{t}\right\|_{\infty}-\left\|\rho_{0} \right\|_{\infty}}{t}} \geq\overline{\lim_{t\downarrow 0}\frac{\rho(t,x_{0})-\rho_{0}(x_{0})}{t}}\] \[=\partial_{t}\rho(t,x_{0})\Big{|}_{t=0}\] \[>\left\|f\right\|_{\infty}-c\left\|\rho_{0}\right\|_{\infty}^{2}.\] Before proving Lemmas 3.8 and 3.9, we show how Theorem 2.8 implies Theorems 2.6 and 2.7. Proof of Theorems 2.6 and 2.7.: Suppose that \(c_{W}>0\) and \(M=\sqrt{\left\|f\right\|_{\infty}/c_{W}}\). If \(\left\|\rho_{0}\right\|_{\infty}=M\), then (2.12) implies \(\left\|\rho_{t}\right\|_{\infty}\leq M=\left\|\rho_{0}\right\|_{\infty}\) for all \(t\in[0,T^{\star})\). If \(\left\|\rho_{0}\right\|_{\infty}<M\), then (2.12) implies \[\left\|\rho_{t}\right\|_{\infty}\leq M\tanh(c_{W}M(t+t_{0}))\leq M,\] for all \(t\in[0,T^{\star})\). If \(\left\|\rho_{0}\right\|_{\infty}>M\), then \[\left\|\rho_{t}\right\|_{\infty}\leq M\coth(c_{W}(t+t_{0}))=:G(t),\] for all \(t\in[0,T^{\star})\). Since \(G(0)>M\), we must have \(t_{0}>0\) and so \(G\) is strictly decreasing on \([0,\infty)\), so \[\left\|\rho_{t}\right\|_{\infty}\leq G(t)<G(0)=\left\|\rho_{0}\right\|_{\infty },\quad t\in(0,T^{\star}).\] For the sharpness statements, suppose first that \(c_{W}>0\) and \(M<\sqrt{\left\|f\right\|_{\infty}/c_{W}}\). Then it suffices to show that there exists \(\rho_{0}\in\mathcal{X}_{+}\) with \(\left\|\rho_{0}\right\|_{\infty}=M\) and \(\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\Big{|}_{t=0}>0\). For that purpose note that \(\left\|f\right\|_{\infty}-c_{W}M^{2}>0\), so there is \(c>c_{W}\) such that still \(\left\|f\right\|_{\infty}-cM^{2}>0\). Then there exists by Theorem 2.8 a \(\rho_{0}\in C_{c+}^{\infty}(\mathbb{R}^{d})\) with \(\left\|\rho_{0}\right\|_{\infty}=M\) and \[\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\Big{|}_{t=0}>\left\|f\right\| _{\infty}-c\left\|\rho_{0}\right\|_{\infty}^{2}=\left\|f\right\|_{\infty}-cM^{ 2}>0.\] Now suppose that \(c_{W}<0\) and \(M>0\). Then again it suffices to show that there exists \(\rho_{0}\in\mathcal{X}_{+}\) with \(\left\|\rho_{0}\right\|_{\infty}=M\) and \(\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}>0\). Let \(c_{W}<c<0\), then by Theorem 2.8 there exists \(\rho_{0}\in C_{c+}^{\infty}(\mathbb{R}^{d})\) with \(\left\|\rho_{0}\right\|_{\infty}=M\) and \[\partial_{t}^{+}\left\|\rho_{t}\right\|_{\infty}\Big{|}_{t=0}>\left\|f\right\| _{\infty}-c\left\|\rho_{0}\right\|_{\infty}^{2}\geq\left\|f\right\|_{\infty} \geq 0.\] Proof of Lemma 3.8.: It suffices to show that for any fixed \(t_{0}\in[0,T)\), and every \(\varepsilon,\varepsilon^{\prime}>0\), \[\left\|g_{t}\right\|_{\infty}\leq\left\|g_{t_{0}}\right\|_{\infty}+\int_{t_{0} }^{t}\left(C(s)+\varepsilon\right)\mathrm{d}s+\varepsilon^{\prime},\quad t\in[ t_{0},T]. \tag{3.20}\] Indeed, taking \(\varepsilon^{\prime}\downarrow 0\), this implies that (3.20) holds for all \(t\in[t_{0},T]\), \(\varepsilon>0\), and \(\varepsilon^{\prime}=0\). Thus, for every \(\varepsilon>0\), \[\partial_{t}^{+}\left\|g_{t}\right\|_{\infty}\Big{|}_{t=t_{0}}=\varlimsup_{t \downarrow t_{0}}\frac{\left\|g_{t}\right\|_{\infty}-\left\|g_{t_{0}}\right\|_ {\infty}}{t-t_{0}}\leq\varlimsup_{t\downarrow t_{0}}\frac{1}{t-t_{0}}\int_{t_{ 0}}^{t}\left(C(s)+\varepsilon\right)\mathrm{d}s=C(t_{0})+\varepsilon,\] which implies the claim at \(t_{0}\) by taking \(\varepsilon\downarrow 0\). Now fix \(t_{0}\in[0,T)\), \(\varepsilon,\varepsilon^{\prime}>0\), and we show (3.20). We may assume that \(t_{0}=0\). We want to show that \[F(t)\coloneqq\left\|g_{t}\right\|_{\infty}-\left\|g_{0}\right\|_{\infty}-\int_ {0}^{t}\left(C(s)-\varepsilon\right)\mathrm{d}s-\varepsilon^{\prime}\leq 0 \tag{3.21}\] for all \(t\in[0,T]\). Define \[t_{1}\coloneqq\sup\left\{t\in[0,T]\colon F(t)\leq 0\right\}.\] \(F\) is continuous because \(g\in C([0,T],L^{\infty})\), and \(F(0)=-\varepsilon^{\prime}<0\), so \(t_{1}>0\). Now if \(F(t)\leq 0\) does not hold for all \(t\in[0,T]\), then \(t_{1}<T\), so there would be \(T>t_{n}\downarrow t_{1}>0\) such that \(F(t)\leq 0\) for all \(0\leq t\leq t_{1}\) and \(F(t_{n})>0\) for all \(n\in\mathbb{N}\). Since \(g_{t_{n}}\) is Lipschitz continuous and integrable it must vanish at infinity, so it attains its maximum at some \(x_{n}\in\mathbb{R}^{d}\) and we have \[F(t_{n})=g(t_{n},x_{n})-\left\|g_{0}\right\|_{\infty}-\int_{0}^{t_{n}}\Big{(}C(s )+\varepsilon\Big{)}\,\mathrm{d}s-\varepsilon^{\prime}>0,\quad n\in\mathbb{N}. \tag{3.22}\] In particular, \(g(t_{n},x_{n})\geq\varepsilon^{\prime}\) for all \(n\in\mathbb{N}\), so \(g(t_{1},x_{n})\geq\varepsilon^{\prime}/2\) for infinitely many \(n\in\mathbb{N}\), say \(n\geq n_{0}\), because \(\rho\in C([0,T],L^{\infty})\). Then by Lipschitz continuity of \(g(t_{1},\cdot)\) in space, \(\inf_{n\in\mathbb{N}}\int_{B(x_{n},1)}g(t_{1},y)\,\mathrm{d}y>0\), which were to contradict integrability of \(g(t_{1},\cdot)\) if \((x_{n})\) were unbounded. Hence \((x_{n})\) must be bounded, without loss of generality already convergent to some \(x_{1}\in\mathbb{R}^{d}\). Then, using (3.21) and (3.22), \[F(t_{1}) =\left\|g_{t_{1}}\right\|_{\infty}-\left\|g_{0}\right\|_{\infty} -\int_{0}^{t_{1}}\Big{(}C(s)+\varepsilon\Big{)}\,\mathrm{d}s-\varepsilon^{ \prime},\] \[\text{and}\] \[F(t_{1}) =\lim_{n\to\infty}F(t_{n})=g(t_{1},x_{1})-\left\|g_{0}\right\|_{ \infty}-\int_{0}^{t_{1}}\Big{(}C(s)+\varepsilon\Big{)}\,\mathrm{d}s- \varepsilon^{\prime},\] so \(\left\|g_{t_{1}}\right\|_{\infty}=g(t_{1},x_{1})\). Thus by assumption \(\partial_{t}g(t_{1},x_{1})\leq C(t_{1})\), so \[\forall x\in\mathbb{R}^{d}\colon g(t_{1},x)-\left\|g_{0}\right\|_{\infty}-\int_ {0}^{t_{1}}\Big{(}C(s)+\varepsilon\Big{)}\,\mathrm{d}s-\varepsilon^{\prime} \leq F(t_{1})=0,\] \[\partial_{t}\Big{[}g(\cdot,x_{1})-\left\|g_{0}\right\|_{\infty}-\int_{0}^{t} \Big{(}C(s)+\varepsilon\Big{)}\,\mathrm{d}s-\varepsilon^{\prime}\Big{]}_{t=t _{1}}\leq C(t_{1})-\Big{(}C(t_{1})+\varepsilon\Big{)}=-\varepsilon<0.\] But this implies, since \(\partial_{t}g\) is jointly continuous, that the second inequality holds in \([t_{1},t_{1}+\delta)\times B(x_{1},\delta)\) for some \(\delta>0\), so in fact \(g(t,x)<\left\|g_{0}\right\|_{\infty}+\int_{0}^{t}\Big{(}C(s)+\varepsilon \Big{)}\,\mathrm{d}s+\varepsilon^{\prime}\) for all \((t,x)\in(t_{1},t_{1}+\delta)\times B(x_{1},\delta)\), which contradicts the fact (3.22) that \(g(t_{n},x_{n})>\left\|g_{0}\right\|_{\infty}+\int_{0}^{t}\Big{(}C(s)+ \varepsilon\Big{)}\,\mathrm{d}s+\varepsilon^{\prime}\) for all \(n\in\mathbb{N}\). Proof of Lemma 3.9.: Let \(T\in(0,T^{\star})\) and fix \(\gamma\in(2,2+\gamma_{f})\). Then by Theorem 3.3, \(\rho\in C([0,T],\mathcal{X}_{+})\cap C((0,T],\mathcal{X}^{\gamma})\), in particular \(\rho_{t}\) is Lipschitz continuous for \(t\in(0,T]\). It remains to prove that \(\partial_{t}\rho\) is jointly continuous on \([\delta,T]\times\mathbb{R}^{d}\) for any \(\delta>0\). By the instant regularisation proved in Theorem 3.3, \(\rho_{\delta}\in\mathcal{X}^{\gamma}\), so we may as well assume that \(\rho_{0}\in\mathcal{X}^{\gamma}\) and \(\delta=0\). Then we know that \(\rho\) solves (1.1) on \([0,1]\), so \[\partial_{t}\rho_{t}=\Delta\rho_{t}+\nabla\cdot(\rho_{t}(\nabla W\star\rho_{t }))+f. \tag{3.23}\] Using \(\rho\in C([0,1],\mathcal{X}^{\gamma})\) and Lemmas 2.11 and 3.1 and (3.13), it is straight-forward to confirm through direct calculation that the RHS in (3.23) and therefore \(\partial_{t}\rho\in C([0,1],\mathcal{W}^{\gamma-2,\infty})\), where \(\mathcal{W}^{\gamma-2,\infty}=C^{0,\gamma-2}(\mathbb{R}^{d})\) (see (B.2) in Appendix B) with \(\gamma-2\in(0,1)\). In particular, \(\partial_{t}\rho\) is continuous in time, and \((\gamma-2)\)-Holder continuous in space uniformly on \([0,T]\). This implies joint continuity We close by presenting a proof for a simple claim made in the introduction regarding the asymptotics of (1.1) with \(W\equiv 0\). **Lemma 3.10**.: _Let \(f\in\mathcal{X}_{+}\) not a.e. zero, and put \(\rho_{t}\coloneqq\int_{0}^{t}G_{s}\star f\,\mathrm{d}s\) for \(t\geq 0\)._ 1. _If_ \(d\leq 2\)_, then_ \(\rho_{t}\uparrow\infty\) _locally uniformly._ _._ 2. _If_ \(d\geq 3\)_, then_ \(\rho_{t}\uparrow\rho\) _where_ \(\rho\in L^{\infty}\)_._ Proof.: 1. We may assume \(\int_{B(0,1)}f(x)\,\mathrm{d}x\geq 1\). Then, for any \(x\in\mathbb{R}^{d}\), \[\rho_{t}(x)=\int_{0}^{t}G_{s}\star f(x)\,\mathrm{d}s \geq\int_{0}^{t}\int_{B(0,1)}f(y)G_{s}(x-y)\,\mathrm{d}y\,\mathrm{d}s\] \[\geq\int_{0}^{t}(2\pi s)^{-d/2}\mathrm{e}^{-(|x|+1)^{2}/(2s)}\, \mathrm{d}s\] \[\geq\mathrm{e}^{-(|x|+1)^{2}/2}\int_{1}^{t}(2\pi s)^{-d/2}\, \mathrm{d}s,\] which if \(d\leq 2\) goes to \(\infty\) locally uniformly in \(x\) as \(t\to\infty\). 2. We have \(\rho_{t}\uparrow\rho\coloneqq\int_{0}^{\infty}G_{s}\star f\,\mathrm{d}s=G \star f\) as \(t\to\infty\), where \[G(x)\coloneqq\int_{0}^{\infty}G_{s}(x)\,\mathrm{d}s=c(d)\left|x\right|^{2-d}, \quad x\in\mathbb{R}^{d},\] for some \(c(d)>0\) is the well-known Green's function of the Laplace equation in \(d\geq 3\). Then \(G\) is integrable at the origin and bounded away from the origin, and \(f\) is both bounded and integrable, so \(G\star f\) is bounded. ## 4. Conclusion and Outlook We established sharp conditions on the repulsive potential for a form of the maximum principle (1.4) and a strong notion of global boundedness (1.3) to hold. The latter is especially interesting in light of the motivation from population biology - see the introduction and Appendix A - because it gives a sufficient condition on \(W\) for global boundedness of solutions in the sense that \[\forall\rho_{0}\in\mathcal{X}_{+}\colon\sup_{t\geq 0}\left\|\rho_{t}\right\|_{ \infty}<\infty. \tag{4.1}\] Note however that (1.3) is a much stronger property than (4.1), which we would expect to hold under weaker assumptions: \(c_{W}>0\) necessitates both \(\eta_{W}>0\) and \(\alpha_{W}>0\), that is a singular repulsion and \(|\nabla W(x)|\gtrsim|x|^{1-d}\) for large \(|x|\). However, we would expect that (4.1) should also hold for a sufficiently strong smooth repulsion, and it seems unlikely that \(|\nabla W(x)|\gtrsim|x|^{1-d}\) is sharp; for example if \(d\geq 3\), then (4.1) already holds with \(W=0\). Ongoing work tentatively suggests that the critical strength of the repulsion for (4.1) is \(|\nabla W(x)|\gtrsim|x|^{-1}\) if \(d=1\) and \(|\nabla W(x)|\gtrsim|x|^{-3}\) if \(d=2\) (and none if \(d\geq 3\)). Another direction for future research would be to study non-negative steady states associated with (1.1), whose existence we would generally expect to be related to (4.1). ## Acknowledgements I would like to thank my supervisor Alison Etheridge for many lively discussions, guidance, encouragement, and helpful feedback. I would further like to thank Jose Carrillo for many insightful conversations, valuable feedback, for pointing out useful literature, and for advice regarding publication of the project. ## Appendix A Connection with Stability of Population Dynamics We present here in some more detail the connection between the asymptotic behaviour of solutions to (1.1) and long-term (in-)stability of branching particle systems (BPS). We will explain how the dichotomy observed in the introduction is related to the fact that an ordinary BPS (without immigration or interaction) started from infinite mass is unstable in dimensions \(d\leq 2\) in the sense that the process' mass concentrates, as time goes on, in increasingly large "clumps", with space in between growing increasingly empty. ### SuperBrownian Motion Long-term instabilities of a BPS, for now without immigration or interaction, are due to random fluctuations in the branching mechanism, so if we want to study them using a scaling limit, then the scaling needs to retain stochasticity. Indeed, the hydrodynamic rescaling just leads to the heat equation, which has stable long-term behaviour in any dimension. A well-studied approach to retain stochasticity is to scale up the branching rate at the same time as the particle density, leading to a measure-valued process called _superBrownian motion_[21, 35]. Formally, it solves the stochastic partial differential equation (SPDE) \[\mathrm{d}X_{t}=\frac{1}{2}\Delta X_{t}\,\mathrm{d}t+\sqrt{\gamma X_{t}}\, \mathrm{d}\mathcal{W}_{t},\] (A.1) for a space-time white noise \(\mathcal{W}\), and a parameter \(\gamma>0\) called the _branching variance_; note that setting \(\gamma=0\) recovers the hydrodynamic limit (the heat equation). If \(d\geq 2\), then \(X_{t}\) is singular w.r.t. Lebesgue measure, and (A.1) is ill-posed and has to be replaced with a martingale problem. ### The Pain in the Torus We will now explain heuristically why an SBM started from infinite mass, or from finite mass but conditioned on survival, is unstable in dimensions \(d\leq 2\), and how this is related to (1.1). In short, for large times \(t\), an increasingly small number of individuals that were alive at time zero will be ancestral to the entire population at time \(t\), and if \(d\leq 2\) then the diffusion is not "fast enough" to disperse and spread these large families, and they form well-separated clumps. It is this that underpins the problem famously dubbed "the pain in the torus" by Felsenstein [24]; see also Kallenberg [30] (esp. Cor. 6.5) for similar observations in the context of cluster fields. Let us now consider an SBM started from Lebesgue measure, and make this idea a bit more precise. We cut \(\mathbb{R}^{d}\) into a grid of unit sized cubes, and regard the initial mass in each of them as one family. Due to the independent branching, we can let each of the families evolve independently from each other, and obtain the process started from Lebesgue measure as their superposition (this is called the branching property, see e.g. [21, p. 2]). Each family is a critical branching process started from finite mass, so the probability that it is still alive at time \(n\) is proportional to \(1/n\), and, if alive, its expected size is proportional to \(n\)[35, Thm. II.1.1]. Hence, in expectation, after \(n\) units of time all but every \(n\)'th family has gone extinct, and each of the living families consists of order \(n\) individuals. Due to their diffusive movement, each family will have spread over an area of radius \(\sim\sqrt{n}\), hence the population density of any surviving family is \(\sim n^{1-d/2}\). In the critical case \(d=2\), this crude heuristic misses a factor \(\log n\) (c.f. (A.2) below), so the density of the surviving families diverges as \(n\to\infty\) if and only if \(d\leq 2\), in which case they form separated clumps. This means that the dichotomy between stable long-term dynamics and clumping can really be understood as the dichotomy between unbounded and bounded asymptotic population density of a single surviving family, that is, a finite mass SBM conditioned on survival. It is a classical result due to Evans [23] that the distribution of this process is that of a single "immortal particle" that follows the path of a Brownian motion and throws off mass at a constant rate, which then evolves like an ordinary SBM, independent of the immortal particle. This is not unexpected: In the unconditioned process, at large times the entire population will have descended from increasingly few ancestors that were alive at time zero, until eventually none remain and the process goes extinct; the conditioning imposes that one of those ancestors--the immortal particle--will never perish. If \((Z_{t})\) is a Brownian motion that denotes the path of the immortal particle, then the SBM \((X_{t})\) conditioned on survival, which we may now as well start from zero, formally satisfies the SPDE \[\mathrm{d}X_{t}=\left(\frac{1}{2}\Delta X_{t}+\gamma\delta_{Z_{t}}\right) \mathrm{d}t+\sqrt{\gamma X_{t}}\,\mathrm{d}\mathcal{W}_{t},\quad X_{0}=0,\] where \(\delta_{z}\) is the Dirac delta at \(z\in\mathbb{R}^{d}\). Then the mean measure conditional on \((Z_{t})\) has a density \(\rho_{t}\) that solves \(\partial_{t}\rho=\frac{1}{2}\Delta\rho+\gamma\delta_{Z_{t}}\), so \(\rho_{t}=\gamma\int_{0}^{t}G_{t-s}(\cdot-Z_{s})\,\mathrm{d}s\). If we centre the process around the immortal particle by putting \[\widetilde{\rho}_{t}(\cdot)\coloneqq\rho_{t}(\cdot+Z_{t})=\gamma\int_{0}^{t}G _{t-s}(\cdot+Z_{t}-Z_{s})\,\mathrm{d}s,\] which has expectation \(\gamma\int_{0}^{t}G_{2s}(\cdot)\,\mathrm{d}s\), then we can find \[\mathbb{E}\left[X_{t}(B(Z_{t},1))\right]=\int\limits_{B(0,1)}\mathbb{E}\left[ \widetilde{\rho}(x)\right]\mathrm{d}x=\gamma\int_{0}^{t}\int\limits_{B(0,1)}G _{2s}(x)\,\mathrm{d}x\,\mathrm{d}s\propto\begin{cases}\sqrt{t},&d=1,\\ \log t,&d=2,\\ 1,&d\geq 3.\end{cases}\] (A.2) Therefore, at least in expectation, mass accumulates in the vicinity of the immortal particle for large times if \(d\leq 2\), and remains bounded if \(d\geq 3\). This recovers the picture painted in the beginning of the section: In dimensions \(d\leq 2\), a SBM started from infinite mass concentrates in increasingly few large clumps (centred around the ancestors of the surviving families), with space in between growing increasingly empty. ### Introducing Repulsion As a model for a spatially evolving population (for which \(d=2\) is the most natural dimension), this is very unrealistic, and a better model should reflect stable long-term population dynamics. One of the most obvious reasons this clumping phenomenon does not occur in nature is that real individuals do not behave independently from surrounding individuals, as is assumed in the model underlying SBM. Indeed, a high population density leads to resource scarcity and decreases the average number of offspring, and causes migration away from the overcrowded area. The former effect has already been successfully integrated into the SBM model and been shown to lead to stable long-term dynamics [22]. The latter however, has not yet been studied in this context. A natural way to implement this is to introduce a pairwise repulsion between individuals, which corresponds to the term involving \(W\) in (1.1). If we again consider a single surviving family, that is a finite mass superprocess conditioned on survival, then again this will be described by an immortal particle that constantly immigrates mass into the system, which will now be repulsed from the mass it throws off. Formally, we arrive at \[\mathrm{d}X_{t} =\left(\frac{1}{2}\Delta X_{t}+\nabla\cdot(X_{t}\nabla W\star X_{t}) +\gamma\delta_{Z_{t}}\right)\mathrm{d}t+\sqrt{\gamma X_{t}}\,\mathrm{d}\mathcal{ W}_{t},\] \[\mathrm{d}Z_{t} =\mathrm{d}B_{t}-\nabla W\star X_{t}(Z_{t})\,\mathrm{d}t,\] for independent space-time white noise \(\mathcal{W}\) and Brownian motion \(B\). This turns out to be a very complicated process, and a natural first step is to study it without the noise; if the equation were linear, this would be the same as taking expectations conditional on \((Z_{t})\). If we also replace the Dirac immigration with a bounded function centred on \(Z_{t}\)--which should not change the behaviour of the system with regards to clumping behaviour, but makes the equation more regular--then we arrive exactly at (1.1) with a time-dependent immigration (recall Remark 2.9), and the question we want to answer is under what assumptions on the repulsion does its solution exhibit bounded long-term behaviour in dimensions one and two. ## Appendix B Fractional Sobolev Spaces We give a minimal definition of fractional Sobolev spaces, and refer the reader to [17, 16, 6] for detailed introductions. For \(p\in[1,\infty)\), \(s\in(0,1)\), and \(u\colon\mathbb{R}^{d}\to\mathbb{R}\) measurable let \[[u]_{\mathcal{W}^{s,p}}\coloneqq\left(s(1-s)\int_{\mathbb{R}^{d}}\int_{ \mathbb{R}^{d}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{d+sp}}\,\mathrm{d}x\,\mathrm{d}y \right)^{1/p},\] (B.1) and \[[u]_{\mathcal{W}^{s,\infty}}=\sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|^{s}}.\] (B.2) For \(k\in\mathbb{N}\), \(s\in(k,k+1)\), and \(p\in[1,\infty]\), let \[[u]_{\mathcal{W}^{s,p}}\coloneqq\sum_{|\alpha|=k}[\partial^{\alpha}u]_{ \mathcal{W}^{s-k,p}},\] with the usual notation for multi-indices \(\alpha\). Then \[\left\|u\right\|_{\mathcal{W}^{s,p}}=\left(\left\|u\right\|_{\mathcal{W}^{[s ],p}}^{p}+\left[u\right]_{\mathcal{W}^{s,p}}^{p}\right)^{1/p}\] (B.3) for \(p\in[1,\infty)\), and \[\left\|u\right\|_{\mathcal{W}^{s,\infty}}=\left\|u\right\|_{\mathcal{W}^{[s ],\infty}}+\left[u\right]_{\mathcal{W}^{s,\infty}},\] (B.4) define the fractional Sobolev norms. For two normed spaces write \(A\hookrightarrow B\) if \(A\subset B\) with continuous inclusion. **Lemma B.1**.: _If \(0\leq\gamma\leq\beta\) and \(p\in[1,\infty]\), then \(\mathcal{W}^{\beta,p}\hookrightarrow\mathcal{W}^{\gamma,p}\)._ Proof.: Assume \(0<\gamma<\beta\), otherwise there is nothing to show. Then the claim follows because \(\mathcal{W}^{\gamma,p}\) can be written as interpolation space between \(L^{p}\) and \(\mathcal{W}^{\beta,p}\), so \(\left\|\cdot\right\|_{\mathcal{W}^{\gamma,p}}\leq C(\left\|\cdot\right\|_{L^{p} }+\left\|\cdot\right\|_{\mathcal{W}^{\beta,p}})\leq C\left\|\cdot\right\|_{ \mathcal{W}^{\beta,p}}\). See [11, Appendix A] for details on interpolation spaces in the context of Sobolev norms. **Lemma B.2**.: 1. _If_ \(\gamma>0\)_,_ \(\gamma\not\in\mathbb{N}\)_, then_ \(\mathcal{X}^{\gamma}\hookrightarrow\mathcal{W}^{\gamma,\infty}=C^{\left\lfloor \gamma\right\rfloor,\gamma-\left\lfloor\gamma\right\rfloor}(\mathbb{R}^{d})\)_._ 2. _If_ \(\gamma\geq 1\)_, then_ \(\mathcal{X}^{\gamma}\hookrightarrow\mathcal{W}^{\gamma,\infty}\hookrightarrow C ^{\left\lfloor\gamma\right\rfloor-1,1}(\mathbb{R}^{d})\)_._ Proof.: First note that \(\mathcal{X}^{\gamma}\hookrightarrow\mathcal{W}^{\gamma,\infty}\) by definition of \(\mathcal{X}^{\gamma}\), see (2.1). 1. For any \(k\in\mathbb{N}_{0}\) and \(\gamma\in(k,k+1)\), if \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is \(k\)-times differentiable then \(\left\|f\right\|_{\mathcal{W}^{\gamma,\infty}}=\left\|f\right\|_{C^{k,\gamma-k} (\mathbb{R}^{d})}\) by definition, see (3.1), (B.2) and (B.4). Hence we only have to prove that any \(f\in\mathcal{W}^{\gamma,\infty}\) for \(\gamma>0\) is \(\left\lfloor\gamma\right\rfloor\)-times differentiable. There is nothing to prove for \(\gamma\in(0,1)\), so suppose the claim is true for \(\gamma\in(0,k)\setminus\mathbb{N}\) for some \(k\in\mathbb{N}_{0}\) and let \(f\in\mathcal{W}^{\gamma,\infty}\) for some \(\gamma\in(k,k+1)\). Put \(\beta\coloneqq\gamma-\left\lfloor\gamma\right\rfloor\). Then \(f\) is continuous because \(\mathcal{W}^{\gamma,\infty}\subset\mathcal{W}^{1,\infty}=C^{0,1}(\mathbb{R}^{d})\) (the last equality is a well-known theorem). Furthermore \(f\in\mathcal{W}^{\gamma,\infty}\subset\mathcal{W}^{1+\beta,\infty}\), so \(f\) has partial weak derivatives of first order that are \(\beta\)-Holder continuous, so they are in fact proper derivatives 1 Footnote 1: Let \(d=1\) and \(\varphi_{\varepsilon}\) an approximation to unity in \(\mathbb{R}\), \(x>0\), and put \(\psi_{\varepsilon}\coloneqq\varphi_{\varepsilon}\star\mathds{1}_{[0,x]}\), then \[\int_{0}^{x}f^{\prime}(y)\,\mathrm{d}y\leftarrow\int_{\mathbb{R}}\psi_{ \varepsilon}(y)f^{\prime}(y)\,\mathrm{d}y=-\int_{\mathbb{R}}\psi_{\varepsilon }^{\prime}(y)f(y)\,\mathrm{d}y=f(x)-f(0),\] so \(f(x)=f(0)+\int_{0}^{x}f^{\prime}(y)\,\mathrm{d}y\). Analogously for \(x<0\). Since \(f^{\prime}\) is continuous, \(f\) is in fact differentiable in the classical sense with derivative \(f^{\prime}\). This also works in \(d\geq 2\). By the induction hypothesis, all first partial derivatives are themselves \((\left\lfloor\gamma\right\rfloor-1)\) times differentiable. 2. Since \(\mathcal{W}^{\gamma,\infty}\hookrightarrow\mathcal{W}^{[\gamma],\infty}\), it suffices to show that \(\mathcal{W}^{k,\infty}\hookrightarrow C^{k-1,1}(\mathbb{R}^{d})\) for all \(k\in\mathbb{N}\). This is a well-known theorem for \(k=1\). Suppose now it is proved for some \(k\in\mathbb{N}\), and let \(f\in\mathcal{W}^{k+1,\infty}\). Then by (i), \(f\) has proper first partial derivatives, and they are in \(\mathcal{W}^{k,\infty}\hookrightarrow C^{k-1,1}(\mathbb{R}^{d})\), so in fact \(f\) is \(k\)-times differentiable and, by the induction hypothesis, \[\left\|f\right\|_{\mathcal{W}^{k+1,\infty}} =\left\|f\right\|_{\infty}+\sum_{i=1}^{d}\left\|\partial_{i}f \right\|_{\mathcal{W}^{k,\infty}}\] \[\leq\left\|f\right\|_{\infty}+C\sum_{i=1}^{d}\left\|\partial_{i} f\right\|_{C^{k-1,1}}\] \[\leq C\left\|f\right\|_{C^{k,1}}.\] **Lemma B.3**.: _Let \(0\leq\gamma<\beta\). Then there is \(C>0\) such that the following hold._ 1. _If_ \(f,g\in\mathcal{W}^{\gamma,\infty}\)_, then_ \(fg\in\mathcal{W}^{\gamma,\infty}\) _and_ \(\left\|fg\right\|_{\mathcal{W}^{\gamma,\infty}}\leq C\left\|f\right\|_{ \mathcal{W}^{\gamma,\infty}}\left\|g\right\|_{\mathcal{W}^{\gamma,\infty}}\)_,_ 2. _If_ \(f\in\mathcal{W}^{\gamma,1},g\in\mathcal{W}^{\beta,\infty}\)_, then_ \(fg\in\mathcal{W}^{\gamma,1}\) _and_ \(\left\|fg\right\|_{\mathcal{W}^{\gamma,1}}\leq C\left\|f\right\|_{\mathcal{W}^ {\gamma,1}}\left\|g\right\|_{\mathcal{W}^{\beta,\infty}}.\)__ Proof.: If \(\gamma\in\mathbb{N}_{0}\) and \(p\in\{1,\infty\}\), then for any multi-index \(\left|\alpha\right|\leq\gamma\), \[\left\|\partial^{\alpha}(fg)\right\|_{L^{p}} =\left\|\sum_{\alpha_{1}+\alpha_{2}=\alpha}(\partial^{\alpha_{1} }f)(\partial^{\alpha_{2}}g)\right\|_{L^{p}}\leq\sum_{\alpha_{1}+\alpha_{2}= \alpha}\left\|\partial^{\alpha_{1}}f\right\|_{L^{p}}\left\|\partial^{\alpha_{2} }g\right\|_{L^{\infty}}\] \[\leq C\left\|f\right\|_{\mathcal{W}^{\gamma,p}}\left\|g\right\|_{ \mathcal{W}^{\gamma,\infty}}.\] and both claims follow. If \(\gamma\not\in\mathbb{N}_{0}\), we need to show the claim with the LHS replaced by \(\left[fg\right]_{\mathcal{W}^{\gamma,p}}\). By a similar application of the product rule it suffices to consider \(\gamma\in(0,1)\). If \(p=\infty\), then \[\left[fg\right]_{\mathcal{W}^{\gamma,\infty}} =\sup_{x\neq y}\frac{\left|f(x)g(x)-f(y)g(y)\right|}{\left|x-y\right| ^{\gamma}}\] \[\leq\left\|f\right\|_{L^{\infty}}\left[g\right]_{\mathcal{W}^{ \gamma,\infty}}+\left\|g\right\|_{L^{\infty}}\left[f\right]_{\mathcal{W}^{ \gamma,\infty}}\] \[\leq 2\left\|f\right\|_{\mathcal{W}^{\gamma,\infty}}\left\|g \right\|_{\mathcal{W}^{\gamma,\infty}}.\] If \(p=1\), then \[\left[fg\right]_{\mathcal{W}^{\gamma,1}} =\gamma(1-\gamma)\int\int\frac{\left|f(x)g(x)-f(y)g(y)\right|}{ \left|x-y\right|^{d+\gamma}}\,\mathrm{d}x\,\mathrm{d}y\] \[\leq C\left\|g\right\|_{L^{\infty}}\left[f\right]_{\mathcal{W}^{ \gamma,1}}+\int\int\left|f(y)\right|\frac{\left|g(x)-g(y)\right|}{\left|x-y \right|^{d+\gamma}}\,\mathrm{d}x\,\mathrm{d}y.\] We split the integral according to \(\left|x-y\right|\leq 1\) or \(>1\). The former contribution can be bounded by \[\left[g\right]_{\mathcal{W}^{\beta,\infty}}\iint\limits_{\left|x-y\right| \leq 1}\frac{\left|f(y)\right|}{\left|x-y\right|^{d+\gamma-\beta}}\, \mathrm{d}x\,\mathrm{d}y\leq C\left[g\right]_{\mathcal{W}^{\gamma,\infty}} \left\|f\right\|_{L^{1}},\] where we used that \(\int_{B(0,1)}\left|x\right|^{-d+\beta-\gamma}\,\mathrm{d}x\leq C\int_{0}^{1}r ^{-1+\beta-\gamma}\,\mathrm{d}r<\infty\). The contribution with \(\left|x-y\right|>1\) can be bounded by \[2\left\|g\right\|_{L^{\infty}}\iint_{\left|x-y\right|>1}\frac{\left|f(y) \right|}{\left|x-y\right|^{d+\gamma}}\,\mathrm{d}x\,\mathrm{d}y\leq C\left\| g\right\|_{L^{\infty}}\int\left|f(y)\right|\mathrm{d}y=\left\|g\right\|_{L^{ \infty}}\left\|f\right\|_{L^{1}},\] where we used that \(\int_{\mathbb{R}^{d}\setminus B(0,1)}\left|x\right|^{-\gamma-d}\,\mathrm{d}x \leq C\int_{1}^{\infty}r^{-1-\gamma}\,\mathrm{d}r<\infty\).
2306.16322
Taqyim: Evaluating Arabic NLP Tasks Using ChatGPT Models
Large language models (LLMs) have demonstrated impressive performance on various downstream tasks without requiring fine-tuning, including ChatGPT, a chat-based model built on top of LLMs such as GPT-3.5 and GPT-4. Despite having a lower training proportion compared to English, these models also exhibit remarkable capabilities in other languages. In this study, we assess the performance of GPT-3.5 and GPT-4 models on seven distinct Arabic NLP tasks: sentiment analysis, translation, transliteration, paraphrasing, part of speech tagging, summarization, and diacritization. Our findings reveal that GPT-4 outperforms GPT-3.5 on five out of the seven tasks. Furthermore, we conduct an extensive analysis of the sentiment analysis task, providing insights into how LLMs achieve exceptional results on a challenging dialectal dataset. Additionally, we introduce a new Python interface https://github.com/ARBML/Taqyim that facilitates the evaluation of these tasks effortlessly.
Zaid Alyafeai, Maged S. Alshaibani, Badr AlKhamissi, Hamzah Luqman, Ebrahim Alareqi, Ali Fadel
2023-06-28T15:54:29Z
http://arxiv.org/abs/2306.16322v1
# Taqym: Evaluating Arabic NLP Tasks Using ChatGPT Models ###### Abstract Large language models (LLMs) have demonstrated impressive performance on various downstream tasks without requiring fine-tuning, including ChatGPT, a chat-based model built on top of LLMs such as GPT-3.5 and GPT-4. Despite having a lower training proportion compared to English, these models also exhibit remarkable capabilities in other languages. In this study, we assess the performance of GPT-3.5 and GPT-4 models on seven distinct Arabic NLP tasks: sentiment analysis, translation, transliteration, paraphrasing, part of speech tagging, summarization, and diacritization. Our findings reveal that GPT-4 outperforms GPT-3.5 on five out of the seven tasks. Furthermore, we conduct an extensive analysis of the sentiment analysis task, providing insights into how LLMs achieve exceptional results on a challenging dialectal dataset. Additionally, we introduce a new Python interface1 that facilitates the evaluation of these tasks effortlessly. Footnote 1: [https://github.com/ARBML/Taqym](https://github.com/ARBML/Taqym) ## 1 Introduction The emergence of foundation models (Bommasani et al., 2021) in recent years has instigated a transformative shift within the field of Natural Language Processing (NLP). The conventional practice of pre-training and subsequently fine-tuning a model specifically for a given task has been shown to no longer be necessary on some tasks. Instead, research has shown that a sufficiently large model trained on vast amounts of data is capable of achieving comparable, and sometimes better, performance compared to task-specific models. However, despite its success across numerous NLP tasks, fine-tuned models remain superior in a variety of domains. For instance, its proficiency in solving elementary mathematical operations has been found to be lacking (Davis, 2023; Frieder et al., 2023; Gilson et al., 2022), while its performance in tasks involving commonsense reasoning has demonstrated much room for improvement (Davis, 2023; Guo et al., 2023). Moreover, concerns have been raised regarding the language coverage of this purportedly general-purpose language model (Bang et al., 2023; Jiao et al., 2023; Lu et al., 2022) leading to worse performance on non-English languages. In this paper, we present a comprehensive evaluation of the performance of two ChatGPT-based models, namely GPT-3.5 and GPT-4, across seven crucial Arabic NLP tasks. The evaluation is conducted with the aim of assessing the capabilities of these emerging foundation models and comparing their performance against state-of-the-art (SoTA) counterparts. The selected tasks for this study encompass a diverse range of NLP applica tions, including summarization, diacritization, part of speech tagging, sentiment analysis, transliteration, machine translation, and paraphrasing. The findings of our investigation reveal a notable disparity between the performance of the ChatGPT models and that of their Arabic-specific counterparts across most tasks, with the exception of summarization, where both ChatGPT models exhibit superior performance compared to existing SoTA approaches. Furthermore, the evaluation results indicate that GPT-3.5 outperforms GPT-4 on two out of the seven tasks, specifically summarization and diacritization. Furthermore, to gain deeper insights into the performance of the ChatGPT-based models, we conduct a comprehensive case study focusing on the Sentiment Analysis task. This investigation encompasses an examination of the impact of various factors, including temperature tuning, prompt engineering, and the effect of different numbers of few-shot demonstrations within the context, on the overall performance of the task, in addition to closely analyzing the outputs generated by the models. Moreover, in the context of the diacritization task, we provide fine-grained results across seven distinct domains, allowing for a more granular evaluation of the models' diacritization capabilities. This overall analysis sheds light on the current state of Arabic NLP in relation to the foundation models such as ChatGPT, highlighting both the potential and the existing gaps that warrant further exploration and improvement in Arabic NLP. In conclusion, we introduce a novel Python library, named Tayim, derived from the Arabic word for "evaluation." This library is designed to enhance the evaluation process and is developed as an extension of the OpenAI evals library, incorporating three fundamental principles: (1) Ease of Use, (2) Robustness, and (3) Debugging. By prioritizing user-friendly functionalities, Tayim aims to streamline the evaluation workflow, facilitating seamless integration and efficient utilization of the library. Tayim is released as an open-source library, allowing the wider community to benefit from its capabilities and contribute to its ongoing development and improvement. ## 2 Related Work Large Language Models.Several language models have been proposed recently. One of the earliest pre-trained language models is ELMo which was proposed to model the word context Peters et al. (2018). ELMo learns the word context by pre-training a two-layer bidirectional LSTM network on large data and fine-tuning it on downstream tasks. BERT followed this learning strategy with a Transformer model pre-trained on large datasets Devlin et al. (2019). The performance of BERT outperformed other models on several downstream tasks. This learning paradigm motivated researchers to propose either new architectures (e.g., BART Lewis et al. (2019) and GPT-2 Radford et al. (2019)) or enhanced pre-training techniques Liu et al. (2019); Sanh et al. (2021); Wang et al. (2022). Scaling language models in terms of model size or data used for model pre-training has shown its effectiveness in several downstream tasks Zhao et al. (2023). This led to the introduction of the "large language models (LLM)" term. These models are trained on large datasets and usually have billions of parameters. Such LLMs showed a better performance compared with the smaller models with similar architectures and pre-training tasks (e.g., GPT-3 Brown et al. (2020) vs GPT-2). Recently, a significant number of LLMs have been introduced, such as GPT-3, LLaMA Touvron et al. (2023), PaLM Chowdhery et al. (2022), BLOOM Muennighoff et al. (2022), and Chinchilla Hoffmann et al. (2022). ChatGPT2 is one of these LLMs that was developed based on the GPT model series (GPT-3.5 and GPT-4) and showed a powerful performance with dialogue tasks. Footnote 2: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) ChatGPT EvaluationFollowing the introduction of ChatGPT, numerous studies have emerged assessing its performance across diverse tasks, encompassing machine translation Jiao et al. (2023); Hendy et al. (2023), reasoning Bang et al. (2023); Qin et al. (2023), the health care domain Slapeta (2023); Cascella et al. (2023), among others Liu et al. (2023). In one investigation, Bang et al. (2023) comprehensively assessed the performance of ChatGPT on eight distinct NLP tasks, employing a diverse set of 23 datasets. These tasks primarily revolved around the English language, with the exception of machine translation. The findings of this study unveiled that ChatGPT outperformed several state-of-the-art models across various NLP tasks. However, certain limitations were observed in specific scenarios, such as summarization, machine translation, particularly for low-resource languages, and reasoning capabilities. Conversely, Qin et al. (2023) conducted an investigation revealing ChatGPT's robust performance in numerous NLP tasks, particularly highlighting its proficiency in reasoning tasks. However, the model exhibited limitations in certain tasks, such as sequence tagging. Furthermore, Chan et al. (2023) evaluated ChatGPT's capabilities in capturing inter-sentential relations, including discourse, causal, and temporal relationships. The reported outcomes demonstrated ChatGPT's adeptness in identifying and reasoning causal relationships. Conversely, the model's performance proved to be sub-optimal in tasks related to dialogue discourse parsing and detecting temporal relationships between two events. Notably, ChatGPT exhibited satisfactory detection of explicit discourse relations, yet encountered difficulties in handling implicit discourse relations. Concurrent WorkDuring the course of our study, two papers have been published that evaluate ChatGPT models across multiple Arabic NLP tasks. Namely, Tawkat Islam Khandaker et al. (2023) evaluated GPT-3.5 on a variety of Arabic NLU Elmadany et al. (2022) and NLG tasks, and compared it against the multilingual BLOOMZ (\(7.1\)B) model Muennighoff et al. (2022) and the much smaller monolingual AraT5 Elmadany et al. (2022) fine-tuned on each respective task. Their evaluation encompassed varying numbers of few-shot demonstrations within the context, with n-shot\(=\{0,3,5,10\}\). Their results demonstrated that while GPT-3.5 exhibited superior performance compared to BLOOMZ on Arabic tasks, it still significantly trailed behind the smaller-scale, Arabic-specific finetuned model, AraT5. In another related study, Abdelali et al. (2023) conducted an evaluation of GPT-3.5 on a range of Arabic NLP and Speech processing tasks. However, their investigation lacked explicit disclosure to some of their evaluation methodology (e.g. on the diacritization task), hindering the reproducibility of their findings. In contrast, our research provides comprehensive and transparent documentation of all pertinent details. Additionally, we extend the evaluation to include GPT-4 and provide additional analysis of the models' outputs in Section 6. ## 3 Pipeline In Figure 1, we highlight the pipeline for Taqyim. Given a set of tasks, we pass that to a Python interface that contacts the OpenAI API back and forth and gets the evaluation results. Our Python interface is built on top of a forked version of OpenAI's evals3 library. It has four main advantages over the evals library, in the following paragraphs we illustrate each feature. Footnote 3: [https://github.com/openai/evals](https://github.com/openai/evals) Ease of useThe evals library does not have a Python interface, which makes evaluations a bit more complex. To tackle this problem, we created a Python interface, that could be used to run the evals codebase. In addition to that, we use the datasets4 library to provide a single hub for loading and downloading any dataset to run evaluation on by just providing the name of the dataset. The Figure 1: Taqyim Pipeline. following code snippet gives an example of how to run an evaluation on a sentiment analysis dataset (AJGT). ``` 1importtaqymasstq 2 3#createthepipeline 4pipeline=tq.Pipeline( 5eval_name="ajgt-test", 6dataset_name="arbml/ajgt_ubc_split", 7task_class="classification", 8task_description="Sentiment Analysis", 9input_column_name="content", 10target_column_name="label", 11prompt="Predictthesentiment", 12api_key="copenai-key>", 13train_split="train", 14test_split="test", 15model_name="gt-3.5-turbo-0301", 16max_samples=1,) 17 18#runtheevaluation 19pipeline.run() ``` Respond only positive or negative sentiment in English ({tweet}) RobustnessThe OpenAI's API counts the number of tokens as the total of the tokens required to compute the input and completion. It will return an error if the input size is greater than the model context size. To calculate that efficiently, we use the tiktoken5 library to calculate the number of tokens of the input which is subtracted from the model max context size. In addition to that, to make our library more robust, we allow resuming a run after an error due to stopping execution. The resume_from_record flag is used to resume any given run which hugely reduces cost. Footnote 5: [https://github.com/openai/tiktoken](https://github.com/openai/tiktoken) DebuggingThe evals library shuffles the samples before sending and fetching the API results. This approach makes debugging difficult because the'sample_id' key in the results is not in sync with the row number in the original test dataset. We force the library to send the requests sequentially which makes it easy to debug and visualize the results in a sequential manner. AnalysisGiven the output from TaqMan, we can represent our output as a data frame that can be easily used for analyzing and visualizing the output to get some useful insights about the ChatGPT completions. ## 4 Tasks In this section, we illustrate the datasets used for evaluating the ChatGPT models. The prompts used in evaluation are summarized in Figure 2. We also summarize the datasets used in Table 1. ### Summarization In this task, we aim at predicting the summary of a given article. The summary could vary in length Figure 2: Prompts used for each task. The double curly braces {{}} indicate placeholders that are taken from the dataset to apply the prompt on. depending on the article. In a prompted fashion, given an article, we want to prompt ChatGPT to predict that summary. EascFor this task, we use Essex Arabic Summary Corpus (EASC). The dataset contains 153 Arabic articles with their associated summaries (ElHaj et al., 2010). For the sake of the evaluation, we use the Rougel score which calculates the similarity between the true summary and the predicted summary. EASC has been used a lot in the literature to evaluate LLMs. (Elmadany et al., 2022) trained T5-based models (AraT5) on multiple tasks including summarization. They reported the results on EASC after fine-tuning on the train split of WikiLingua (Ladhak et al., 2020). Similarly, AT5B (Ghaddar et al., 2022) released T5-style models (AT5S and AT5B) that outperform AraT5 by pretraining on more cleaned data. AraMUS (Alghamdi et al., 2023) an 11B parameter language model achieves better results compared to the previous results on EASC and gets a result of 13.3 using the RougeL score. PreprocessingThe dataset contains some long articles which might exceed the model context size an hence can't be consumed as an API request. To avoid that, we truncate all the articles to a max size of 4,290 characters before sending the request. ### Diacritization The automatic restoration of diacritics to Arabic text is arguably one of the most important NLP tasks for the Arabic language. Diacritics play a crucial role in determining accurate word pronunciation and meaning, as they indicate vowel sounds and grammatical information. However, diacritics are often omitted in various written Arabic texts, such as social media posts, news articles, and formal documents, due to reasons like typing convenience, space limitations, or lack of standardization. Consequently, the task of Arabic diacritization aims to address this issue by automatically adding the missing diacritics to the text. This process facilitates precise interpretation and analysis of Arabic data, supporting a wide range of NLP applications, including machine translation, text-to-speech systems, and named entity recognition, among others (Zitouni and Sarikaya, 2009). WikiNewsWe leverage the WikiNews test set introduced by (Darwish et al., 2017) to evaluate the ChatGPT models on Arabic diacritization. It comprises 70 WikiNews articles written in Modern Standard Arabic (MSA), primarily sourced from 2013 and 2014, encompassing 7 domains: politics, economics, health, science and technology, sports, arts, and culture. To ensure a balanced representation, an equal number of articles (i.e., 10 articles) were allocated to each domain. In total, the WikiNews test set encompasses approximately \(18,300\) words, providing a substantial corpus for assessing the performance of ChatGPT models across a diverse range of domains. The SoTA model on this test set leverages a sequence-to-sequence transformer-based model that is fine-tuned on a large diacritized corpus of 4.5 million tokens and employs an overlapping sliding window and a voting mechanism during inference (see next paragraph) to predict the final diacritic (Mubarak et al., 2019). Evaluation SetupFollowing the approach introduced by (Mubarak et al., 2019), we adopt the overlapping context window methodology coupled with a voting mechanism to facilitate diacritic prediction for each character, as opposed to the naive approach of passing the entire sentence to the model at once. This technique involves dividing a given sentence into multiple overlapping segments, each individually presented to the model for inference. This approach proves effective, as local context often provides sufficient information for accurate inference. Consequently, identical character sequences may appear in different contexts (i.e., various segments within a single sentence), potentially resulting in different diacritized forms. To determine the definitive diacritic, we employ a popularity voting mechanism, and in cases where a tie occurs, we randomly select one of the outputs. Our implementation employs a sliding window of 20 words with a stride of 2, similar to (AlKhamissi et al., 2020). \begin{table} \begin{tabular}{l c} \hline \hline **Dataset** & **Tokens** \\ \hline EASC (El-Haj et al., 2010) & 238,993 \\ AJGT (Adomari et al., 2017) & 14,465 \\ PADT (Smrz et al., 2008) & 91,122 \\ APB (Alian et al., 2021) & 17,707 \\ UNv1 (Ziemski et al., 2016) & 407,523 \\ BOLT (Bies et al., 2014) & 66,059 \\ WikiNews (Darwish et al., 2017) & 68,418 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of input tokens of each dataset. We use the tiktoken library to calculate them. Post ProcessingSince ChatGPT models are not constrained during the generation process, they have the potential to produce invalid outputs, which may involve the addition or omission of characters or words in the generated text. Therefore, in order to effectively evaluate the model's performance on generation tasks like diacritization, we employ the following heuristic approach. For each word in the input sentence, we verify if it is present in the generated output. If the word is found, we incorporate the corresponding generated diacritics for that word. Conversely, if the word is not found, we include it in the output without diacritics. This methodology ensures that the output sentence maintains the same content as the input sentence while incorporating the appropriate diacritics. ### Part of Speech Tagging The part of speech tagging (POS) task is responsible for predicting the part of speech tags for a given list of tokens/words. PadFor this task, we use ar_padt split that is offered by universal_dependencies and created by (Zeman et al., 2020). The subset contains \(6,080\) samples for training, \(909\) for validation, and \(680\) for testing. The dataset contains 17 tags that can be used in multilingual settings. For prompting purposes, we feed the model by joining the tokens using space and predicting the tags in the following format token:tag separated by the new line character \(\vspace{.5ex}\). Encoder-based language models like BERT seem to achieve decent results for part of speech tasks (Kondratyuk and Straka, 2019). They used a multilingual BERT language model that was pre-trained in 104 languages. Then they fine-tuned it on 75 languages by concatenating all of them together with simple softmax classifiers for the POS task. Post ProcessingWe tested with a lot of prompts for GPT-3.5 model before getting descent completions that followed our required output format. The POS task is unique in this regard because we constrain the output to be in the format token:tag. For a given output completion, we first match the output tokens against the gold tokens and then extract their associated tags. We remove extra spaces or quotations that might result in some wrong evaluations. ### Sentiment Analysis In this task, the model is prompted to predict the label given the text. We consider this task as a binary classification task where the model is supposed to predict only one of two classes which are positive or negative. AjgtWe use the Arabic Jordanian General Tweets (AJGT) Corpus which consists of \(1,800\) tweets from the Jordanian dialect (Adomari et al., 2017). Since the dataset doesn't have train and test splits, we use the splits suggested by (Elmadany et al., 2022) which consists of \(1,440\) samples for training and \(360\) samples for testing. On this task, masked language models seem to achieve much better results, especially after fine-tuning on the train split of the dataset. More specifically the MARBERT (Abdul-Mageed et al., 2020) model which was pretrained on a social media dataset achieved a score of \(96.11\) on the test split. ### Transliteration Transliteration is the process of converting text from one writing system to another while maintaining the phonetic value of the original text. It enables approximating the pronunciation of words or names in a different writing system, allowing users to understand and vocalize them more easily. It allows non-Arabic speakers to approximate the pronunciation of Arabic words and names by using familiar Latin characters. BoltWe used BOLT Egyptian Arabic Treebank dataset with a test set of size \(6,653\). (Shazal et al., 2020) achieved the best results on the test set of that dataset with a score of \(65.88\) on the BLEU score metric as reported by (Elmadany et al., 2022). ### Machine Translation In this task, the model is prompted to predict a given translation using the source and target languages, for example, from ar to en. In this task, we aim to translate from English to Arabic as a sample study. Unv1We use the united nations version 1 (UNv1) (Ziemski et al., 2016) with its test split that contains around \(4,000\) Arabic-English pairs. On this test split (Elmadany et al., 2022) achieves the best results on the BLEU metric with a score of \(53.29\). ### Paraphrasing Paraphrasing in NLP refers to the process of rephrasing or restating a given text or sentence while preserving its original meaning. In a prompted fashion, given a prompt in a specific language, we want to predict the paraphrased version in the same language. ApbWe used the Arabic Paraphrasing Benchmark (APB) with a test set of \(1,010\) sentences. As a generative task, AraT5 achieves the best result with a score of \(17.52\) on the blind test split. ## 5 Results In this section, we go over the results of the evaluated datasets in a comprehensive fashion. In all the experiments, we use gpt-3.5-turbo-0301 and gpt-4-0314 versions for GPT-3.5 and GPT-4 respectively. ### Zero-shot Results In Table 2, we summarize the results for all the tasks used in the study by applying the prompts in Figure 2. It is also worth mentioning that those prompts are designed to serve the best results for GPT-3.5 model then used for evaluating GPT-4.0. We perform zero-shot evaluation where the model is only assumed to predict the input given prompt+input. For each task, we show the dataset used, the test size, the metric, and a comparison between the ChatGPT models and the SoTA results. Our results show that GPT-4 outperforms GPT-3.5 in all the tasks except the summarization and diacritization tasks. Regarding summarization, the EASC dataset contains large summaries, while GPT-4 predicts conscience and compact summaries. We calculated the average length of summaries generated from GPT-3.5 and we got 429 compared to 348 characters from GPT-4. We discuss diacritization in more detail in Section 5.2. For the other tasks, GPT-4 achieves the largest improvement margin over GPT-3.5 in the POS task because the model can predict the tokens in a more natural manner compared to GPT-3.5. We had to do a lot of prompt engineering to force GPT-3.5 to predict the tokens and tags in our constrained format. Regardless, both models still lack behind compared to fine-tuned models, especially for complex tasks like paraphrasing which contains a lot of dialectal examples. ### Fine-grained Diacritization Results Tables 3 and 4 present the performance outcomes of GPT-3.5 and GPT-4, respectively, on the WikiNews diacritization benchmark in relation to each domain. The obtained results indicate that the _culture_ domain exhibits the most favorable performance with the lowest error rate, whereas the _arts_ domain demonstrates the least satisfactory performance across both models. For a further breakdown of the diacritization results, we refer the reader to Appendix B. It is worth noting that the ChatGPT models occasionally fail to generate diacritics for all characters in the input (as exemplified by the first word in Table 7), resulting in a significant increase in the error rate, particularly for Word Error Rate (WER). To improve the performance of diacritization, future research should consider incorporating multiple instructions and sampling multiple outputs for each input, followed by aggregating the results through a majority voting scheme. This approach is expected to enhance the accuracy of the models on this task and represents a promising direction for further investigation. ## 6 Sentiment Analysis: Case Study In this section, we conduct a case study on the AJGT dataset, where we analyze it from different perspectives. While this approach can be implemented for all the tasks and models, we only use it for the classification task due to budget constraints. ### Temperature Tuning Temperature is a hyperparameter used to control the creativity of a language model. Technically, it normalizes the probabilities of the softmax layer Figure 3: Temperature tuning results for the test set of AJGT. We vary the temperature from 0.0 to 2.0 and evaluate on the test set of AJGT. giving a chance to lower probability tokens to be selected while generating output. In both GPT models, 0 temperature is an aggressive value that takes into account only the highest probable token while a value of 2 gives the highest chance for less probable tokens to be selected while generating the output. In Figure 3, we show the results for different temperatures. For all the different values GPT-4 achieves better results compared to GPT-3.5. Further, zero temperature shows a noticeable performance gap between GPT-3.5 and GPT-4.0 compared to other values. ### Few-shot results Fewshot prompting defines the approach of prompting the model with multiple examples from the training corpus. In this subsection, we study the effect of few-shot size on both models. We set the temperature to be 1 as a middle-ground between creativity and aggressiveness. Figure 4 shows the results of different few-shot examples applied to GPT-3.5 and GPT-4 models. As can be seen from the Figure, increasing the few-shot examples degraded the results for GPT-3.5 while improving the results for GPT-4. We analyzed the model outputs for GPT-3.5 and found out that this model refused to give predictions on many samples for various reasons discussed in more detail in Section 6.4. In \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Domain**} & **DER** & **WER** & **DER** & **WER** \\ \cline{2-5} & \multicolumn{2}{c}{w/ CE} & \multicolumn{2}{c}{w/o CE} \\ \hline **Culture** & 9.37 & 34.15 & 8.10 & 24.52 \\ **Health** & 10.74 & 34.80 & 9.10 & 23.82 \\ **Science** & 11.06 & 37.81 & 9.39 & 27.20 \\ **Politics** & 11.45 & 36.09 & 10.51 & 27.81 \\ **Economics** & 11.66 & 39.99 & 10.55 & 30.48 \\ **Sports** & 14.04 & 42.64 & 12.73 & 32.23 \\ **Arts** & 14.38 & 42.52 & 12.14 & 30.33 \\ \hline **Combined** & 11.64 & 38.06 & 10.18 & 27.88 \\ \hline \hline \end{tabular} \end{table} Table 4: Fine-grained results of the WikiNews diacritization benchmark, showcasing the performance of **GPT-4** across different domains. The results are presented in ascending order of the Diacritic Error Rate (DER) with case-ending (CE). Figure 4: Fewshot results on the test set of AJGT. We evaluate both GPT-4 and GPT-3.5 using different numbers of a few shot samples [0, 3, 5, 10]. \begin{table} \begin{tabular}{l l l l l l||c} \hline \hline \multirow{2}{*}{**Task**} & **Dataset** & **Test Size** & **Metric** & **GPT-3.5** & **GPT-4** & **SoTA** \\ \hline **Summarization** & EASC & 153 & (\(\uparrow\)) RougeL & **23.5** & 18.25 & 13.3 \\ **Sentiment Analysis** & AJGT & 360 & (\(\uparrow\)) Accuracy & 86.94 & 90.30 & **96.11** \\ **PoS Tagging** & PADT & 680 & (\(\uparrow\)) Accuracy & 75.91 & 86.29 & **96.83** \\ **Paraphrasing** & APB & 1,010 & (\(\uparrow\)) BLEU & 4.295 & 6.104 & **17.52** \\ **Translation** & UNv1 & 4,000 & (\(\uparrow\)) BLEU & 35.05 & 38.83 & **53.29** \\ **Transliteration** & BOLT & 6,653 & (\(\uparrow\)) BLEU & 13.76 & 27.66 & **65.88** \\ **Diacritization** & WikiNews & 393 & (\(\downarrow\)) DER & 10.29 & 11.64 & **1.21** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparing results of GPT-{3.5, 4} with SoTA. The test size reflects the number of samples used for evaluating each dataset. The best GPT-based model is underlined, and the best result is highlighted in **bold**. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Domain**} & **DER** & **WER** & **DER** & **WER** \\ \cline{2-5} & \multicolumn{2}{c}{w/ CE} & \multicolumn{2}{c}{w/o CE} \\ \hline **Culture** & 9.13 & 30.94 & 8.22 & 22.87 \\ **Politics** & 9.99 & 31.15 & 9.44 & 24.60 \\ **Economics** & 10.08 & 33.62 & 9.49 & 26.73 \\ **Health** & 10.25 & 31.67 & 9.40 & 23.41 \\ **Sports** & 10.68 & 33.77 & 9.68 & 25.41 \\ **Science** & 10.70 & 32.95 & 9.71 & 25.03 \\ **Arts** & 11.55 & 35.64 & 10.08 & 25.59 \\ \hline **Combined** & 10.29 & 32.74 & 9.39 & 24.77 \\ \hline \hline \end{tabular} \end{table} Table 3: Fine-grained results of the WikiNews diacritization benchmark, showcasing the performance of **GPT-3.5** across different domains. The results are presented in ascending order of the Diacritic Error Rate (DER) with case-ending (CE). contrast, more few-shot examples improved the results for GPT-4 allowing the model to reach close to the SoTA results with 5 examples. However, adding more few-shots may degrade the model's performance as in the case with 10 few-shot examples. ### Prompt Engineering In Figure 5, we show the results of the evaluation for multiple prompts shown in Table 5. We observe that in general, GPT-3.5 achieves a wider error range, compared to a shorter range for GPT-4. Hence, we can predict that GPT-4 is more robust against different prompts compared to GPT-3.5. ### Responses Analysis This section provides further details on GPT-3.5 and GPT-4.0 responses on the classification task. We studied the confusion matrices for both models and noticed that GPT-3.5 only classifies 342, 304, 317, and 312 samples for 0, 3, 5, and 10 few-shot examples respectively out of the 360 samples. We also found out that, in some cases, it may respond with a different template than the one instructed to respond with. For example, it responds with "Negative sentiment" instead of "Negative". We asked it only to respond either by "Positive" or "Negative". We went over the unique responses of GPT-3.5 to study the responses that do not contain either "Positive" or "Negative" tokens and found the following: * It cannot understand either part or whole of the sentence in Arabic. It asked the user to translate the sample to English or provide more context. * It cannot determine the sentiment of the sentence by asking the user to rephrase, provide more context, or provide the text in English. * Instead of providing a classification, it provides an Arabic explanation. This happens more when adding more few-shot examples. * It did not respond because it thinks the provided sample is written in an inappropriate language, i.e. it is an offensive sample. However, sometimes it provides a classification with further clarification indicating that the sample is written in inappropriate or offensive language. On the other hand, this behavior was not noticed for GPT-4. In fact, GPT-4 classifies all samples with the given two classes, either "Positive" or "Negative" in all few-shot settings except for 3 few-shot samples where it generated a new label "Neutral". As a result of the above analysis, GPT-3.5 has lower comprehension abilities for Arabic text compared to GPT-4. It is, also, more vulnerable to offensive samples than GPT-4. Figures 6 and 7 show the confusion matrix in zero-shot settings for GPT-3.5 and GPT-4, respectively. Responses counts are removed from GPT-3.5 confusion matrix as most of them are just zeros. As can be seen Figure 5: Prompt engineering results for AJGT. For each model, we run both models on five different prompts. \begin{table} \begin{tabular}{p{227.6pt}} \hline \hline **Prompt** \\ \hline \hline Respond only positive or negative sentiment in English \\ \hline \hline Predict the sentiment of the following statement in English: choose an option: Positive, Negative \\ \hline Is the sentiment of the following statement Positive or Negative? \\ \hline \hline \end{tabular} \end{table} Table 5: The five prompts used for evaluating the AJGT dataset. For each prompt, we vary the text. The fourth prompt is written as a mixture between Arabic and English. in the confusion matrix, the number of unique responses for GPT-3.5 is more than 20. This number further explodes when adding more few-shot examples. This can be seen in the confusion matrices of the other few-shot examples of GPT-3.5 in the Appendix. GPT-4 confusion matrices for the other few-shot examples are also added there. ## 7 Conclusion In summary, this paper evaluated the performance of large language models (LLMs), specifically GPT-3.5 and GPT-4, on seven different Arabic NLP tasks which are sentiment analysis, translation, transliteration, paraphrasing, part of speech tagging, summarization, and diacritization. The study highlighted the impressive abilities of chat-based models like ChatGPT, which are built on LLMs, in performing these tasks in a zero-shot setting. The results demonstrated that GPT-4 outperformed GPT-3.5 on five out of the seven evaluated tasks, indicating continuous advancements in LLM technology. Additionally, we developed a new Python interface publicly available which facilitates the evaluation of these tasks with ease. One significant aspect of this research is the exploration of sentiment analysis in a comprehensive manner. We provide valuable insights into how LLMs achieve remarkable results in dialectal tasks in a zero-shot fashion, shedding light on the underlying mechanisms of these models. Overall, this paper contributes to the growing body of knowledge regarding the capabilities of LLMs, specifically in the context of Arabic language processing. The findings not only showcase the superior performance of GPT-4 compared to its predecessor but also provide a useful tool for future evaluations in Arabic natural language processing tasks. This work paves the way for further advancements in language models and their applications across diverse languages and tasks. ## Limitations Model Selection BiasThis study focuses exclusively on the evaluation of ChatGPT-based models, reflecting their increasing popularity and relevance in the field. However, it is important to acknowledge the presence of other language models (LLMs) that warrant exploration, particularly those explicitly designed with multilinguality in mind, such as BLOOMZ Muennighoff et al. (2022). Including a wider range of LLMs in future research would provide a more comprehensive evaluation and facilitate a more informed comparison. Limited Exploration of Fewshot DemonstrationsMoreover, while this work briefly explores the inclusion of few-shot demonstrations in one task, the main emphasis remains on the zero-shot scenario. Given the contextual learning capabilities of LLMs, it is reasonable to expect that incorporating few-shot demonstrations could potentially enhance model performance. However, a deeper investigation into the impact of few-shot demonstrations across multiple tasks is warranted, as this aspect remains an avenue for future research and calls for more extensive analysis. ## Acknowledgement The authors would like to acknowledge the support received from the Saudi Data and AI Authority Figure 6: GPT-3.5 zero-shot responses to AJGT as a confusion matrix. Although there are only two classes in this dataset, “Positive” and “Negative”, GPT3.5 responds with many variations. Numbers are removed as most of them, except the highlighted areas, are zeros. Also, responses are removed for clarity purposes. Figure 7: GPT-4 zero-shot responses to AJGT as a confusion matrix. (SDAIA) and King Fahd University of Petroleum and Minerals (KFUPM) under the SDAIA-KFUPM Joint Research Center for Artificial Intelligence Grant JRC-AI-RFP-05. We would like to also thank Maqsam for providing the compute to run some of our experiments.
2307.06218
Ashaar: Automatic Analysis and Generation of Arabic Poetry Using Deep Learning Approaches
Poetry holds immense significance within the cultural and traditional fabric of any nation. It serves as a vehicle for poets to articulate their emotions, preserve customs, and convey the essence of their culture. Arabic poetry is no exception, having played a cherished role in the heritage of the Arabic community throughout history and maintaining its relevance in the present era. Typically, comprehending Arabic poetry necessitates the expertise of a linguist who can analyze its content and assess its quality. This paper presents the introduction of a framework called \textit{Ashaar} https://github.com/ARBML/Ashaar, which encompasses a collection of datasets and pre-trained models designed specifically for the analysis and generation of Arabic poetry. The pipeline established within our proposed approach encompasses various aspects of poetry, such as meter, theme, and era classification. It also incorporates automatic poetry diacritization, enabling more intricate analyses like automated extraction of the \textit{Arudi} style. Additionally, we explore the feasibility of generating conditional poetry through the pre-training of a character-based GPT model. Furthermore, as part of this endeavor, we provide four datasets: one for poetry generation, another for diacritization, and two for Arudi-style prediction. These datasets aim to facilitate research and development in the field of Arabic poetry by enabling researchers and enthusiasts to delve into the nuances of this rich literary tradition.
Zaid Alyafeai, Maged S. Al-Shaibani, Moataz Ahmed
2023-07-12T15:07:16Z
http://arxiv.org/abs/2307.06218v1
# Ashaar: Automatic Analysis and Generation of Arabic Poetry Using Deep Learning Approaches ###### Abstract Poetry holds immense significance within the cultural and traditional fabric of any nation. It serves as a vehicle for poets to articulate their emotions, preserve customs, and convey the essence of their culture. Arabic poetry is no exception, having played a cherished role in the heritage of the Arabic community throughout history and maintaining its relevance in the present era. Typically, comprehending Arabic poetry necessitates the expertise of a linguist who can analyze its content and assess its quality. This paper presents the introduction of a framework called _Ashaar_1, which encompasses a collection of datasets and pre-trained models designed specifically for the analysis and generation of Arabic poetry. The pipeline established within our proposed approach encompasses various aspects of poetry, such as meter, theme, and era classification. It also incorporates automatic poetry diacritization, enabling more intricate analyses like automated extraction of the _Arudi_ style. Additionally, we explore the feasibility of generating conditional poetry through the pre-training of a character-based GPT model. Furthermore, as part of this endeavor, we provide three datasets: one for poetry generation, another for diacritization, and a third for Audi-style prediction. These datasets aim to facilitate research and development in the field of Arabic poetry by enabling researchers and enthusiasts to delve into the nuances of this rich literary tradition. Footnote 1: [https://github.com/ARBML/Ashaar](https://github.com/ARBML/Ashaar) ## 1 Introduction In a general setting, Arabic poetry could be divided into two forms: rhymed or measured and prose. Rhylmed poetry was first introduced and theorized by al-Farahidi (711 - 786 A. D.) who categorized every poem into one of 15 different classes, later extended to 16, called meters or _Buhur_ as pronounced in Arabic. These meters govern how each poem should be constructed with specific rules called _Arud_ or _Arudi Style_. The main constructs of Arud could be represented using _Tafeelat_ as plural or _Tafeelah_ as singular for easier memorization. Such constructs could be used to define how to create each meter using a finite set of rules. Another important part of Arabic poetry is _Qafiyah_ which refers to the end rhyme pattern or the rhyme scheme used in the poem. The construction of meters depends on diacritics which are special symbols assigned to each letter in the poem. These diacritics are categorized as either harakah or sukun. Analyzing poems usually needs expertise in the field to figure out the consistent meter and find out issues if there are any. Poets, nevertheless, have an intrinsic ability to construct poems from a specific meter without the need to consult experts. Recently, in the modern era, many poets were influenced by western culture resulting in a new form of poetry called prose poetry. Prose poetry is loose in terms of rules but has some structure and rhythm although not in a strict format. Modern poets used poetry as a medium to express various emotions and feelings. Prose poetry is similar to English poetry in the way it is constructed but, due to its long history, Arabic poetry is richer in terms of metaphors and symbolism. In this paper, we utilize deep learning approaches to analyze and generate poetry. A high-level pipeline is shown in Figure 1. We summarize our contributions as the following: 1. We create four public datasets: _Ashaar dataset2_ is a labeled dataset with meter, theme, era, etc. that could be used for conditional poetry generation. _Ashaar diacritized3_ is a cleaned dataset with diacritized poems. _Ashaar arudi4_ is a dataset that gives gold Arudi representations for a given set of verses. _Ashaar tafeelab5_ which contains all the possible tafeelabt for a given meter. Footnote 2: [https://huggingface.co/datasets/arbml/Ashaar_dataset](https://huggingface.co/datasets/arbml/Ashaar_dataset) 2. We provide five pre-trained models. Three classification models for era, theme, and meter. One pre-trained model for diacritization. And, a pre-trained model for conditional poetry generation. 3. We introduce a framework named _Ashaar_ for poetry analysis and generation. The analysis part uses the meter and diacritization models to predict the Arudi form. While, the generation part uses the meter, qafiyah, and theme to generate poem completion. Footnote 3: [https://huggingface.co/datasets/arbml/Ashaar_dataset](https://huggingface.co/datasets/arbml/Ashaar_dataset) Footnote 4: [https://huggingface.co/datasets/arbml/Ashaar_factorial](https://huggingface.co/datasets/arbml/Ashaar_factorial) Footnote 5: [https://huggingface.co/datasets/arbml/Ashaar_tafeelah](https://huggingface.co/datasets/arbml/Ashaar_tafeelah) ## 2 Literature Review Many studies have been proposed to analyze and study the Arabic poetry metric system. Most of such efforts are directed towards linguistic libraries. (Ziyovuddinova, 2021)(Manna and Arifin, 2021), (Paoli, 2001), and (Maling, 1973) are just examples of the literary work advocated to the subject. Below is a list of the tasks we found in the literature that deals with Arabic poetry from various aspects. These tasks include Authorship Attribution, meter classification, emotion and era classification, poetry identification from textual sources, poetry generation, and other miscellaneous tasks. ### Authorship Attribution In Arabic literature, there are many studies that dealt with authorship attribution in general text. (Al-Sarem et al., 2020), (Altakrori et al., 2018), (Altheneyan and Menai, 2014), and (Hajja et al., 2019) are instances of various methods used to approach this problem for general Arabic text. For a special format of Arabic text like poetry, limited work has been proposed. (Ahmed et al., 2016) used machine learning methods such as Support Vector Machines (SVM) and Sequential Minimal Optimization (SMO) to study the problem of the Authorship Attribution of Arabic poetry. The features they extracted from poetry cover characters, lexical syntactic, and se Figure 1: General pipeline for Ashaar. mantic features. They applied their methods to a corpus of poems belonging to 54 poets. They achieved 98% precision for SMO as the best score. (Al-Falahi et al., 2015) attempted to approach this problem using Markov chains. They conducted their experiments on characters and other syntactically crafted features. The experiments were conducted on a dataset of 33 samples from 33 authors for training and another different 33 unknown samples for testing. They achieved more than 96% accuracy score on the test set. (Albadawi and Abandah, 2021) developed a deep-learning model to identify poetry authors. The features they used are a fusion of the character embeddings and an LSTM-based pre-trained meter classification model. This architecture was evaluated on a dataset of more than 100k verses from 10 famous Arabic poets. They achieved around 81% accuracy. On a different direction, (Omer and Oakes, 2017) utilized Arud words encoding as binary features for prose Authorship Attribution. They compare this set of features to another baseline of only considering the most frequent 100 words. They showed that their method is superior compared to this baseline. They tested their method on two different sets of Arabic and English texts. ### Meter Classification The work on textual Arabic meter classification can be divided into two main categories based on the techniques used. The first category covers the techniques that are rule-based while the second category approached this problem using deep learning methods. The prominent drawback of the first approach is that it requires the poetry text to be diacritized either fully or partially. Another characteristic of this category is that it has been evaluated on relatively small datasets as compared to the second category. The largest evaluation study is reported by (Alabbas et al., 2014) consisting of less than 7k verses. Below, we survey the literature for both approaches. **Traditional Machine Learning** Several methods have been proposed to classify Arabic poetry meters. (Mohammad, 2009) proposed a Naive Bayes while (Ismail et al., 2010) proposed a knowledge-based framework. (Saleh and Elshafei, 2012) filed a patent for a system that classifies poetry from acoustic as well as textual input. (Alnagdawi et al., 2013), (Alabbas et al., 2014), and (Abuata and Al-Omari, 2018) proposed rule-based systems. (Ahmed and Trausan-Matu, 2017) introduced a rule-based system to analyze the rhyme of the poem. (Berkani et al., 2020) created a matching pattern approach where the verse is matched against a curated set of meter patterns. (Zeyada et al., 2020) suggested a system that extends the meter classification task for modern Arabic poetry, albeit that modern Arabic poetry does not need to follow a meter, unlike classical poetry. (Alqasemi et al., 2021) evaluated traditional machine learning techniques on a partial dataset proposed by (Al-Shaibani et al., 2020). **Deep Learning**(Yousef et al., 2019) is the first work that utilizes deep learning for this task for all the 16-meter classes. They also tried to approach this task in English and Arabic languages. It is worth noting that Arabic poetry classes are 16 as compared to only 4 meters in English. This makes the task more complex to approach for Arabic. In their research, they introduced, APCD, a large dataset of 1.5M verses of Arabic poetry. The model they proposed is RNN-based. The results they achieved are 96.38% and 82.31% for Arabic and English respectively. (Al-Shaibani et al., 2020) proposed a GRU-based model to classify Arabic poetry meters. The model is a 5-layer stacked GRU followed by two dense layers. The dataset introduced in this research is MetRec (Al-Shaibani et al., 2020) constituting more than 55.4K verses of 14-meter classes. The result they achieved is 94.3% on the accuracy score. (Abandah et al., 2020) extended the work done by (Al-Shaibani et al., 2020) and (Yousef et al., 2019) on this task. They introduced a larger RNN-based model evaluated on a dataset of poetry and prose, 17 classes in total. They introduced the APCD2 dataset which is an extended version of APCD with the prose class. In their research, they mark the use of diacritics as optional in contrast to (Al-Shaibani et al., 2020) where these characters are removed from the input stream. The results they reported crossed 97% accuracy on this task. ### Emotion and Era classification In the literature, there is a lot of focus to work on era classification as compared to theme classification. **Theme Classification**(Alsharif et al., 2013) investigated the promise of machine learning methods to address the task of Arabic poetry emotion classification. The dataset they collected consists of 1,231 Arabic poems variable in length with four major emotional classes: Retha (Elegiac), Ghazal (love or romance), Fakhr (pride or honor), and Heja (Satire). They experimented with Naive Bayes classification, SVM, Voting Features Intervals (VFI), and Hyperpipes. They reported the results of their experiments in precision, recall, and F-measure. They showed that VFI outperforms others in terms of F-Measure with a result of around 73%. **Era Classification** Depending on a set of literary features, Arabic scholars divided the Arabic poetry timeline into a couple of time segments based on either political status or literary features specific to that period of time or location. These segments are called eras. (Abbas et al., 2019) tried to classify Arabic poetry into its recitation era. The era classes they worked on are 5 ranging from _pre-Islamic_ era to _Andalusian_ era. The dataset comprised a set of more than 30k poems belonging to these different classes. Various machine learning methods have been experimented with this dataset. They showed that Multinomial Naive Bayes achieved the best performance with an F1-score of 68.8% and a kappa score that is very close to 0.4. (Orabi et al., 2020) proposed a deep learning-based approach to address era classification. The dataset they used is scraped from the web. It consists of 60,377 poems in classical Arabic recited by 739 poets. They developed two deep learning-based models and compared their performance. The first is a classification model with FastText (Bojanowski et al., 2017) embeddings while the second is a CNN-based model. They showed that the CNN model was superior achieving more than 91% result on F1-score in terms of binary classification into modern and non-modern poetry. (Khorsheed and Al-Thubaity, 2013) proposed a comprehensive study on Arabic text classification with different textual styles including poetry. The poetry dataset they used comprised 1.95K documents with different 6 classes. They tried different features selections methods along with different machine learning classifiers. The best classification results they achieved were for the C5.0 classifier with 80% on average for all styles and 50% accuracy for poetry only. They attributed this low results to the difficulty of the classification task on creative materials like poetry. (Gharbat et al., 2019) evaluated various classification models for classify poetry from _Abbasid_ and _Andalusian_ eras. The evaluated models are logistic regression, random forest, decision trees, and SVM. They evaluated these models on a curated dataset from the web. The dataset contains around 10,895 hemistiches (half a verse) of 15 random poems by 15 poets. Their experiments showed that SVM achieved the best performance. ### Poetry Generation With the recent advancement of deep learning approaches, there were many attempts in the literature to generate Arabic poetry. (Talafha and Rekabdar, 2019) proposed a GRU-based approach to generate Arabic poetry. They trained their model on a dataset comprising more than 20.1k poems with 80.5K verses collected from the web. For evaluation, they conducted two types of evaluations: quantitative and qualitative. For quantitative analysis, the BLEU score was used. For qualitative, they involved human subjects to evaluate the generated poetry. (Beheitt and Hmida, 2022) proposed a GPT-based model to generate poetry. The model was trained from scratch. The methodology they followed is first training the GPT model on a newswire dataset to develop language understanding and then fine-tuning the model on a poetry dataset. The model was evaluated on BLEU as well as human evaluation. They showed that their approach outperformed other approaches that are based on elementary architectures like RNNs and GRUs. (Elkaref et al., 2022) evaluated the poetry generation task on two transformer-based models with two different promoting settings. The evaluated models are BERTShared (Rothe et al., 2020) and GPT-j (Wang and Komatsuzaki, 2021) and the prompting methods are rhythm or topic based. The dataset used for this research is a fused collection of an earlier version of Ashaar and a public dataset published in GitHub (Ahm). They found out that GPT-J is better at capturing the rhyme while BERTShared is better at generating fluent poems. (Abboushi and Azzeh, 2023) fine-tuned AraGPT2 (Antoun et al., 2020) to generate poems. The dataset they used to fine-tune the pre-trained model is APCD. In one of the proposed experiments, the model was constrained to generate poetry from a specific meter. For evaluation, they used the BLUE score as well as human evaluation where they showed that this fine-tuning procedure outperformed all proposed approaches in the literature. They also showed another study with fake-generated poetry presented to subjects with limited poetic knowledge. They showed that the generated poetry was able to fool at least 61% of the population. ### Poetry Identification from the Web (Almuhareb et al., 2013) proposed a system to identify poetry from a text document. The proposed system relies extensively on the structural patterns of textual poetry. The system is evaluated on collected data from the web. The dataset has 23K lines with 161 classical poem instances. The method achieved an F-measure of 95%. (Almuhareb et al., 2015) extended their work by considering modern poetry that is different in style than classical poetry. The method is similar to the one with classical poetry in the sense that it focuses on the structural patterns of modern poetry. The method was evaluated on a dataset of 2,067 plain text documents containing 513 modern Arabic poems. The method achieves an accuracy of more than 99.81%. (Almuhareb, 2016) developed a system for recalling Arabic poetry material from the web. The system consists of two main components, a classifier, and a distiller. The classifier classifies whether a page contains poetic material while the distiller absorbs the poetic material from the selected page. The system achieves a precision of 94% on an initially selected 14 domains as a seed list. ### Miscellaneous Tasks (Khan, 2014) applied the Arud meter system as a stenography tool. The idea is that the poem will be used as a cover message. Its binary representation is used to hide the secret message with the help of some special Arabic characters like diacritics. They compared their approach with other methods in the literature and they showed that their method outperforms others in the literature in the capacity score. (Abandah et al., 2020) investigated the model architecture proposed by (Abandah and Abdel-Karim, 2020) which was designed for prose text to automatically diacritize Arabic poetry. They evaluated the model on an extended version of the dataset proposed by (Yousef et al., 2019). They selected samples where the diacritization ratio is 0.5 or higher resulting in 368.6K verses. The results they showed are 6% and 20% for DER and WER respectively where it was 1.9% and 5.1% respectively for prose. ## 3 Datasets The release of large Arabic poetry datasets did not happen until recently with the surge of deep learning. The first sufficiently large dataset published were, MetRec (Al-Shaibani et al., 2020), APCD (Yousef et al., 2019), and APCD 2.0 (Abandah et al., 2020). MetRec is the smallest among the three of these datasets. It contains verses from the most frequent 14 meters of Arabic poetry with a total of 55.4K verses. APCD is a massive dataset compared to MetRec with more than 1.8M verses containing samples from all 16 meters. APCD was extended by (Abandah et al., 2020) introducing APCD2.0. They added another class for prose to distinguish poetry from prose in their proposed classification model. Ashaar dataset extends APCD by adding more poetry while considering more sources. We also added a column for the poem theme which was not available in APCD. Table 1 compares APCD with Ashaar. As can be seen from this table, Ashaar is almost an order of magnitude larger than APCD in terms of verses and poets. This plenty of poetic data along with poets is useful for many tasks concerning poetry generation such as language modeling and authorship attribution. It can be noted from the table that Ashaar is also larger in terms of diacritized verses. In this comparison, we considered verses where diacritics constitute more than 25% of its characters. This is helpful for tasks that involve diacritics predictions. ## 4 Poetry Classification In this section, we mainly discuss three types of classifications which are era, theme, and meter clas \begin{table} \begin{tabular}{l c c} & **APCD** & **Ashaar** \\ \hline \hline Poems & - & 254,630 \\ \hline Poets & 3,569 & **7,167** \\ \hline Verses & 1,831,770 & **3,857,429** \\ \hline Verses with meter & 1,739,436 & **1,947,648** \\ \hline Verses with theme & - & 1,757,639 \\ \hline Verses with era & 1,831,770 & **1,899,567** \\ \hline Diacritized verses & 817,756 & **1,389,564** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between Ashaar dataset and APCD on different aspects. sification. In each subsection, we illustrate the dataset used and the architecture for training. ### Meter Classification As discussed in the literature, there are mainly 16 meters that govern how each poem should be constructed. In this subsection, we discuss our approach to generating a system that can predict a meter for a given poem. Preprocessing and AugmentationWe first remove duplicates from the training corpus that exist in the testing corpus. Then for each verse in the training corpus, we split the two parts using a special symbol #. We then remove all special characters except for the hashtag and diacritics. After that, we augment the corpus by randomly splitting each bait using # then randomly swap the first and second parts. Also, to make the corpus more robust against partial diacritization at each step of training we randomly remove diacritics. We end up with 1,717,948 verses for training. We use a 15% subset for validation. For testing, we use a dataset of size 362,798 verses. Training and ResultsWe use a transformer base model with multi-head attention. We start with an embedding of size 64. We use a transformer block with two dense layers at the end with ReLU activation functions. The transformer block contains, multi-head attention followed by dropout and layer normalization with a skip connection. We then add 3 Bidirectional GRU layers followed by one dense layer. The last block contains the same skip connections as in the previous block. In Figure 2, we show the main architecture of the model. We train the model for 15 epochs and we save the model that achieves the best validation accuracy. In Table 2, we compare the results of our transformer base model to the work of (Abandah et al., 2020). We mainly compare training on the smaller dataset and larger dataset with and without diacritics. Our base model trained on comparable corpus compared to the state of the art achieves better results with and without diacritics on the test set. Also, our models are 4 times faster in terms of inference when evaluated on a Tesla T4 machine with around 16GB of memory. \begin{table} \begin{tabular}{l|l|l|l|l} **Model/Metric** & **Diacritics** & **Training Corpus size** & **Prediction Time / 1024** & **Accuracy** \\ \hline \hline (Abandah et al., 2020) & ✓ & 1,493,086 & 388 ms & 96.18 \% \\ Transformer Model & ✓ & 806,062 & 84 ms & 95.51 \% \\ Transformer Model & ✓ & 1,460,255 & 84 ms & **96.24** \% \\ \hline (Abandah et al., 2020) & & 1,493,086 & 388 ms & 94.18 \% \\ Transformer Model & & 806,062 & 84 ms & 93.90 \% \\ Transformer Model & & 1,460,255 & 84 ms & **95.22** \% \\ \end{tabular} \end{table} Table 2: Comparison between our model and (Abandah et al., 2020). We compare our models as a combination of dataset size and diacritics training. Also, we show the speed of running inference per 1024 batch size. Figure 2: Model architecture for the meter classification model. ### Era Classification We group the classes of poems into four main eras in Hijri corresponding to 1) before Islam - 132, 2) 132-232, 3) 232 - 784, and 4) 784-now. We use a max length of 64 verses for each poem. We then fix the max size of each class into 50,000 poems in order to avoid bias towards classes with many poems. For tokenization, we use a sentence-piece tokenizer and we create a model with a 10,000 vocabulary size with 128 max number of tokens for each poem. We train a model with 3 bidirectional layers and two dense layers with a dropout of size 30% for 5 epochs with batch size 128. Figure 3, shows the confusion matrix on the test set. We can notice that in general, we see confusion, especially in consecutive eras. ### Theme Classification We group the classes of poems into four main categories that are, elegy (sad) poems, lampoon (sarcasm) poems, boosting (praise) poems, and romantic poems. We use a similar training setup as in era classification. Figure 4, shows the confusion matrix on the test set. Generally, we observe that the model finds it much more difficult to predict the correct classes as compared to the era classification. We think the reason is the contamination of the dataset which might contain a lot of incorrect labels. ## 5 Poetry Diacritization In this section, we try to tackle the problem of diacritizing Arabic poetry. Usually, poetry contains many classical words and metaphors which makes assigning diacritics to sentences more challenging. ### Training datasets We use the Tashkeela dataset for pre-training the model (Zerrouki and Balla, 2017). Since the dataset doesn't contain any splits, we utilize the splits suggested by (Fadel et al., 2019) which contains 50k training, 2.5k validation, and 2.5k testing. For Ashaar, since there are many sentences that are not diacritized we filter by percentages of diacritics. If the verse has more than 5% missing diacritics we discard it from training the model. We end up with 26,091 poems after also discarding short poems. We use 23,481, 1,305, and 1,305 for training, validation, and testing respectively. We utilize the word error rate (WER), diacritics error rate (DER), WER without case ending called WER*, and DER without case ending DER*. Figure 4: Confusion matrix for theme classification. Figure 3: Confusion matrix for era classification. \begin{table} \begin{tabular}{l|l} **Era** & **Date (Hijri)** \\ \hline \hline Umayyad & 041 - 132 \\ \hline Abbasid & 132 - 232 \\ \hline Al-Andalus & 113 - 172 \\ \hline Fatimid & 358 - 567 \\ \hline Ayyubid & 569 - 626 \\ \hline Mamluk & 648 - 784 \\ \hline Ottoman & 699 - 1200 \\ \hline Modern & 1200 - now \\ \end{tabular} \end{table} Table 3: Time distribution of Arabic Poetry. The dates are in Hijri which is calculated using the Lunar calendar. ### Results We use a 1-D convolution bank, highway network, and bidirectional GRUs from [1] as our main model for pre-training. We pre-train two models, one on Tashkeela and another on the diacritized version of Ashaar. We train each model for 10,000 steps and evaluate both on the test subset of Ashaar. In Table 4, we compare the two training strategies for diacritization. We observe that pre-training and then evaluating on Ashaar provide better results. ## 6 Predicting Arudi Style Each given meter has a closed set of tafeelat that represent how the meter should be constructed. For example, the Taweel meter has this sequence where 1 represents harakah and 0 represents a sukun: \[11010\,1101010\,11010\,110110\] When a verse or hemistich is created it should follow one of the permissible representations. If the verse doesn't follow the meter, we can map it to the original sequence by addition, deletion, or flipping. As an example, the following sequence could be mapped to the previous sequence using that coloring scheme: \[110100\,1101010\,11011\,110110\] Using that representation, we can predict whether a given poem has any problems as the following. 1. We first created a dataset of all permissible changes of a given meter. 2. For a given poem we diacritize it using the approach mentioned in section 5. We then map every harakah to 1 and sukun to 0. 3. Then we use our collected dataset to find the sequence with the largest cosine similarity match. We utilize the built-in function in python SequenceMatcher which gives a similarity score between input patterns. We can use our meter classification model to also reduce the cost of the search. This makes our algorithm robust because even if the verse doesn't follow any given Tafeelah if the diacritized form is not accurate we will still predict the Tafeelah with high confidence. Furthermore, using our color coding representation we will be able to predict if a given character needs to be added, deleted or flipped. In order to assess the ability of our system to predict correctly a given Arudi style, we created manually an independent test set containing 100 hemstitches. We use our system to predict the patterns and then compare the gold patterns to the output. Using that we get an average score of 93.41% which indicates a high similarity score. We get 43% with an exact match i.e. similarity score of 100% which indicates a precise approach. ## 7 Poetry Generation In this section, we consider training a poetry model from scratch rather than fine-tuning. Our early experiments show that usually poetry doesn't work well with word pieces (see Appendix A) so instead we retrain the whole model on characters. ### Data Preparation **Representation** Our main objective is to train a model that can generate poetry that preserves the meter, theme, and structure of classical poetry. To do that, we introduce new types of tokens to the model as in Table 5. Below we show a simple example of how to encode a given poem that contains two verses. We use an HTML-like prompting approach to be applied for a given input poem. Note that n is in the range(0, 15) and k in the range range(0, 17). \begin{table} \begin{tabular}{l|c|c|c|c} **pre-training** & **DER** & **WER** & **DER*** & **WER*** \\ \hline Tashkeela & 20.40 \% & 62.3 \% & 18.22 \% & 50.42 \% \\ Ashaar & **14.03** \% & **47.97** \% & **12.09** \% & **36.09** \% \\ \end{tabular} \end{table} Table 4: Diacritization metrics on the test set of **Ashaar**. * <|meter_n|>qafyiah<|theme_k|> <|psep|> <|bsep|> <|bsep|> verse_1<|vsep|>verse_2 </|bsep|> </|psep|> Note that, for poems that don't have a meter we use our pre-trained meter classification to predict that meter. To make the prediction more robust, we use a majority vote over the poem to be more accurate. We filter out poems that don't match our meter classifier label. For the theme, we reserve a token for unknown theme. **Data Cleaning and Filtration** We apply the following cleaning procedures for each poem 1. We map characters using their Unicode representation. 2. Remove poems that don't have an even number of verses. 3. Remove poems that have very small verses i.e number of characters less than 5. 4. Remove poems with meters that are not one of the 16 classes we have. We release our dataset pre-processed in that format in HuggingFace 6. Footnote 6: [https://huggingface.co/datasets/arbml/Ashaar_dataset](https://huggingface.co/datasets/arbml/Ashaar_dataset) ### Training For this task, we don't remove any diacritics and we consider this as an approach to generate poetry with diacritics as well. Training a GPT-based model using BPE tokenization will be expensive because the frequency of word pieces will be much less, especially with partially diacritized text. So, we use a character-based tokenization approach. We train the model for 450,000 steps with batch size 16 and context size 512. The max vocabulary size is 121 which equals the number of characters plus diacritics in the corpus in addition to the reserved tokens in Table 5. We use the default GPT-2 transformer-based architecture7 with 10 layers. Footnote 7: [https://huggingface.co/docs/transformers/model_doc/gpt2](https://huggingface.co/docs/transformers/model_doc/gpt2) ### Evaluation Evaluating language models is a difficult task, let alone poetry generation which is a creative challenging task. For this purpose, we use a set of novel evaluation metrics, to evaluate the generative power of our pre-trained models. Figure 5: Confusion matrix for rhythm evaluation. \begin{table} \begin{tabular}{l|l} **Functionality** & **Tokens** \\ \hline \hline Poem separators & [”<|psep|>", "</|psep|>"] \\ \hline Bait separators & [”<|bsep|>", "</|bsep|>"] \\ \hline Verse separator & [”<|vsep|>"] \\ \hline Themes & [”<|theme_0|>",..., "<|theme_17|>"] \\ \hline Meters & [”<|meter_0|>",..., "<|meter_15|>"] \\ \hline Unused tokens & [”<|res_0|>",..., "<|res_9|>"] \\ \hline Other tokens & [”<|pad|>", "<|endoftext|>"] \\ \hline \end{tabular} \end{table} Table 5: Descriptions of the special tokens used in the tokenizer. Rhyth EvaluationIn order to evaluate how much rhythm is encoded in our generated poetry we use meter classification for such a task. Given a generated poetry output we can evaluate how much the model can generate poetry that belongs to the same meter with high confidence. We use the same meter classification model that we created in section 4.1. Because we can not force the model to generate certain poetry, we use the model which gets a high accuracy to evaluate how much rhythm is able to generate. At each step, we generate 10 verses for the 15 meters used in Abboushi and Azzeh (2023). We repeat the process 100 times for each meter resulting in 1,500 generated poems. Then, we pass the generated poems to our classification model to predict the meter. We use majority voting to decide if the poem meter is correct. For Top-3 and Top-5 accuracy, we predict correctly if one of the top 3 and 5 predicted meters contains the true meter. In Table 7, we show the results and compare them to the work done by Abboushi and Azzeh (2023). Even though, our model is much smaller it, still achieves better results on the poem level. In Figure 5, we show the confusion matrix for meter classification on the generated poetry. We mostly observe that the more popular the meter the better results. Still, for 50% of the meters, we achieve more than 90%. Zero-shot AnalysisZero-shot evaluation is used to evaluate how much pre-trained models can incorporate or generalize to new tasks without explicit pre-training or fine-tuning. The model was not pre-trained explicitly to predict diacritics for a given text in a supervised way. In Table 8, we evaluate the correctness of our model in predicting diacritics. We evaluate the model against our pre-trained diacritization model in Section 5. We consider our model as the gold prediction. We pre-train an unconditional character-based model on Ashaar and evaluate its diacritization ability. We sample with different probabilities and evaluate the DER and WER metrics. We observe that the model is able to predict diacritics with at most a 50 % error rate. ## 8 Conclusion To summarize, our paper introduces a system called _Ashaar_ capable of analyzing and generating con \begin{table} \begin{tabular}{l|c|c|c|c|} **Model** & **Tokenizer** & **Accuracy** & **Top-3 Accuracy** & **Top-5 Accuracy** \\ \hline \hline (Abboushi and Azzeh, 2023) & BPE & 56.70 & - & - \\ \hline **Ashaar** & Char & **64.40** & **69.40 \%** & **71.13** \\ \end{tabular} \end{table} Table 7: Comparison to State of the Art in rhythm evaluation results. We compare the tokenizer, number of layers Top-n accuracy where n is varied across 1, 3, and, 5. \begin{table} \begin{tabular}{l|c|c} **Sampling Probability** & **WER** & **DER** \\ \hline \hline sampling with 2 & 88.77 \% & 43.30 \% \\ sampling with 3 & 88.57 \% & 44.62 \% \\ sampling with 5 & 90.71 \% & 47.85 \% \\ sampling with 7 & 91.18 \% & 50.28 \% \\ \end{tabular} \end{table} Table 8: Zero shot evaluation on diacritics. We compare an unconditional pre-trained character-based GPT in zero-shot diacritization abilities with different sampling rates. ditional poetry. Additionally, we curate multiple datasets and assess their effectiveness in various tasks such as classification, diacritization, Arud prediction, and conditional poetry generation. Furthermore, we leverage this dataset to generate poetry and evaluate the performance of our character-based model in diacritization, where we observe a satisfactory level of proficiency. ## 9 Acknowledgement We would like to thank the colleague of Computing and Mathematics at KFUPM for providing the compute to train some of the models. Also, we would like to thank ML Collective for providing compute part which was used to train the generative models. We also would like to thank Omar Hemmad, with whom we discuss some of the ideas during the early phases of the project. In addition, we would like to thank Kyle McDonald for providing the compute to pre-train some of the earlier models.
2306.09202
An Optimal Algorithm for the Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit
We study the real-valued combinatorial pure exploration problem in the stochastic multi-armed bandit (R-CPE-MAB). We study the case where the size of the action set is polynomial with respect to the number of arms. In such a case, the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits. Existing methods in the R-CPE-MAB and transductive linear bandits have a gap of problem-dependent constant terms and logarithmic terms between the upper and lower bounds of the sample complexity, respectively. We close these gaps by proposing an algorithm named the combinatorial gap-based exploration (CombGapE) algorithm, whose sample complexity upper bound matches the lower bound. Finally, we numerically show that the CombGapE algorithm outperforms existing methods significantly.
Shintaro Nakamura, Masashi Sugiyama
2023-06-15T15:37:31Z
http://arxiv.org/abs/2306.09202v2
# Combinatorial Pure Exploration of Multi-Armed Bandit with a Real Number Action Class ###### Abstract The combinatorial pure exploration (CPE) in the stochastic multi-armed bandit setting (MAB) is a well-studied online decision-making problem: A player wants to find the optimal _action_\(\mathbf{\pi}^{*}\) from _action class_\(\mathcal{A}\), which is a collection of subsets of arms with certain combinatorial structures. Though CPE can represent many combinatorial structures such as paths, matching, and spanning trees, most existing works focus only on binary action class \(\mathcal{A}\subseteq\{0,1\}^{d}\) for some positive integer \(d\). This binary formulation excludes important problems such as the optimal transport, knapsack, and production planning problems. To overcome this limitation, we extend the binary formulation to real, \(\mathcal{A}\subseteq\mathbb{R}^{d}\), and propose a new algorithm. The only assumption we make is that the number of actions in \(\mathcal{A}\) is polynomial in \(d\). We show an upper bound of the sample complexity for our algorithm and the action class-dependent lower bound for R-CPE-MAB, by introducing a quantity that characterizes the problem's difficulty, which is a generalization of the notion _width_ introduced in Chen et al. (2014). ## 1 Introduction The stochastic multi-armed bandit (MAB) model is one of the most popular models for action-making problems where we investigate trade-offs between exploration and exploitation in a stochastic environment. In this model, we are given a set of stochastic arms associated with unknown distributions. Whenever an arm is pulled, it generates a reward sampled from the corresponding distribution. The MAB model is mainly used for two objectives. One is regret minimization, or cumulative reward maximization (Auer et al., 2002b; Bubeck and Cesa-Bianchi, 2012; Auer et al., 2002a), where the player tries to maximize its average rewards by sequentially pulling arms by balancing _exploring_ all the distributions and _exploiting_ the most rewarding ones. The other is pure exploration (Audibert et al., 2010; Gabillon et al., 2012), where the player tries to identify the optimal arm with high probability with as few samples as possible. On the other hand, it is well known that many real-world problems can be modeled as linear optimization problems, such as the shortest path problem, the optimal transport problem, the minimum spanning tree problems, the traveling salesman problem, and many more (Sniedovich, 2006; Villani, 2008; Pettie and Ramachandran, 2002; Gutin et al., 2001). Abstractly, these linear optimization problems can be formulated as follows: \[\begin{array}{ll}\text{maximize}_{\mathbf{\pi}}&\mathbf{\mu}^{\top}\mathbf{\pi}\\ \text{subject to}&\mathbf{\pi}\in\mathcal{A}\subset\mathbb{R}^{d},\end{array}\] where \(\mathbf{\mu}\in\mathbb{R}^{d}\) is a given vector that specifies the cost, \(d(\geq 2)\) is a positive integer, \(\top\) denotes the transpose, and \(\mathcal{A}\) is a set of feasible solutions. For instance, if we see the shortest path problem in Figure 1, each edge \(i\in\{1,\dots,7\}\) has a cost \(\mu_{i}\) and \(\mathcal{A}=\{(1,0,1,0,0,1,0),(0,1,0,1,0,1,0,1),(0,1,0,0,1,0,1)\}\). The optimal transport problem shown in Figure 2 can also be formulated similarly to the above. We have five suppliers and four demanders. Each supplier \(i\) has \(s_{i}\) goods to supply. Each demander \(j\) wants \(d_{j}\) goods. Each edge \(\mu_{ij}\) is the cost to transport goods from supplier \(i\) to demander \(j\). Our goal is to minimize \(\sum_{i=1}^{5}\sum_{j=1}^{4}\pi_{ij}\mu_{ij}\) where \(\pi_{ij}(\geq 0)\) is the number of goods transported to demander \(j\) from supplier \(i\). In real-world applications, the cost can often be random variables. For instance, in the shortest path problem, each edge (road) can be congested, and therefore the cost stochastically changes. Here, our main interest is to identify the best action \(\mathbf{\pi}^{*}\in\mathcal{A}\) which maximizes the expectation of the objective \(\mathbf{\mu}^{\top}\mathbf{\pi}\) in the MAB framework. There are many MAB frameworks where the goal is to identify the best action for a linear objective in a stochastic environment (Soare et al., 2014; Xu et al., 2018; Kuroki et al., 2020; Rejwan and Mansour, 2020; Jourdan et al., 2021; Chen et al., 2014) (Table 1). Our study focuses on the combinatorial pure exploration of the MAB (CPE-MAB) framework (Chen et al., 2014; Wang and Zhu, 2022; Gabillon et al., 2016). In CPE-MAB, the player pulls a certain arm \(i\) in each round and observes its reward. The player's goal is to identify the best action \(\mathbf{\pi}\in\mathcal{A}\) with the smallest number of rounds. Most of the existing works in CPE-MAB (Chen et al., 2014; Wang and Zhu, 2022; Gabillon et al., 2016; Chen et al., 2017; Du et al., 2021b; Chen et al., 2016) assume \(\mathcal{A}\subset\{0,1\}^{d}\), which means the player's objective is to identify the best action which maximizes the sum of the expected rewards. Although we can apply this model to the shortest path problem (Sniedovich, 2006), top-\(K\) arms identification (Kalyanakrishnan and Stone, 2010), matching (Gibbons, 1985), and spanning trees (Pettie and Ramachandran, 2002), we cannot apply it to problems where \(\mathcal{A}\subset\mathbb{R}^{d}\) such as the optimal transport problem (Villani, 2008), the knapsack problem (Dantzig and Mazur, 2007), and the production planning problem (Pochet and Wolsey, 2010). In this paper, we expand the action space to real vectors, i.e., \(\mathcal{A}\subset\mathbb{R}^{d}\), and call it real CPE-MAB (R-CPE-MAB). With this extension, we can apply the CPE-MAB framework to a broader class of linear objective optimization problems. To the best of our knowledge, the only existing work which can be applied when \(\mathcal{A}\subset\mathbb{R}^{d}\) is Huang et al. (2018). However, they need an assumption named _bi-monotonicity_ for the implementation of their algorithm, which the optimal transport problem, the knapsack problem, and other problems having complex combinatorial constraints do not necessarily satisfy. In our study, we assume that the size of the action space \(\mathcal{A}\) is polynomial in \(d\) so that we can run a search algorithm in the action space to choose which arm to pull in each time step. For instance, if we think of the shortest path problem, this assumption holds when the graph is sparse (Philip et al., 2009; Fomin et al., 2015), and only a reasonably small number of actions (path) have to be compared. Alternatively, even though it is nearly impossible to identify the best action \(\mathbf{\pi}\) which maximizes \(\mathbf{\mu}^{\top}\mathbf{\pi}\) from \(\mathbb{R}^{d}\) due to the uncertainty of \(\mathbf{\mu}\), it may be sufficient to identify the best action from a set of candidate \(\mathcal{A}\). In order to construct \(\mathcal{A}\), one may use some prior knowledge of each arm, which is sometimes obtainable in the real world (Labille et al., 2021; Yang and Gao, 2021). For instance, if we model the real world with the optimal transport problem, we may know in advance that some edges are clearly more costly than others. For example, the distance from New York to San Francisco is clearly farther than from New York to Boston. Also, nowadays, the approximate time required to travel between two points on the globe can be easily obtained from databases. Based on this prior knowledge, a realistic transportation method can be narrowed down from all the solutions to the optimal transport problem. The remainder of this paper is structured as follows. In Section 2, we formally introduce our model. In Sections 3, we explain why naively modifying existing methods for CPE-MAB may not be suited for R-CPE-MAB, and give a new arm selection strategy for R-CPE-MAB. In Section 4, we show our algorithm inspired by Xu et al. (2018) and theoretically analyze the sample complexity of our algorithm. We introduce an interesting connection to Chen et al. (2014) and show that there is a certain quantity that characterizes the difficulty of the R-CPE-MAB problem, which is a generalization of _width_ introduced in Chen et al. (2014). Finally, we experimentally compare our algorithm to some existing works in Section 5. #### Related Works and the Location of this Work Here, we introduce models that investigate the best action identification in linear objective optimization with a MAB framework (Table 1). Then, we briefly explain the location of this work in the literature. #### Pure Exploration in Linear Bandits (PE-LB) In PE-LB (Soare et al., 2014; Xu et al., 2018), we are given a number of actions in advance, and polynomial-time algorithms of the following form have been explored: In each time step \(t\), the player chooses an action \(\mathbf{\pi}_{t}\) from the action set and observes a reward with noise, i.e., \(r_{t}=\mathbf{\mu}^{\top}\mathbf{\pi}_{t}+\epsilon_{t}\), where \(\epsilon_{t}\) is a random noise from a certain distribution. Since \(\mathbf{\mu}\) is typically treated as an unknown vector, many algorithms estimate \(\mathbf{\mu}\) by leveraging the sequence of action selections. For instance, Soare et al. (2014) used the least-square estimator to estimate \(\mathbf{\mu}\). #### Combinatorial Pure Exploration with Full-Bandit linear Feedback (CPE-F/S-BL) In CPE-F-BL (Kuroki et al., 2020; Rejwan and Mansour, 2020), arms are often called _base arms_, and the actions are often called _super arms_. Super arms are subsets of base arms, therefore \(\mathcal{A}\subset\{0,1\}^{d}\). In each time step, the player chooses an action and observes the sum of rewards from the base arms involved in the chosen action. Therefore, CPE-F-BL can be seen as an instance of PE-LB. However, since the running time of existing methods for PE-LB has polynomial dependence on the size of action space, they are not suitable for CPE-F-BL, where the size of the action space \(\mathcal{A}\) can be exponentially large with respect to \(d\). To cope with this problem, algorithms specifically designed for CPE-F-BL have been explored in the literature (Kuroki et al., 2020; Rejwan and Mansour, 2020; Du et al., 2021). #### Combinatorial Pure Exploration with Semi-Bandit linear Feedback (CPE-F/S-BL) CPE-S-BL Jourdan et al. (2021) is a framework similar to CPE-F-BL: In each round, the player chooses an action from the set of actions (super arms) \(\mathcal{A}\subset\{0,1\}^{d}\), which is a subset of base arms. Yet, the observation is different. The player observes rewards \(r_{i}\) for every base arm \(i\) where \(\pi_{i}=1\). For instance, in Figure 1, if the player chooses \(\mathbf{\pi}=(1,0,1,0,0,1,0)\), she can observe the rewards of base arms 1, 3, and 6. #### Location of this Work We can see that our work is different from PE-LB in terms of the player's behavior and observation (Table 1). As discussed above, our work is different from CPE-MAB since R-CPE-MAB can be applied when \(\mathcal{A}\subset\mathbb{R}^{d}\). Our work implies that CPE-F-LB and CPE-S-LB can also be extended to _R-CPE-FAB_ and _R-CPE-S-LB_, where the action class is extended to real vectors (matrices). This remains an interesting future work. ## 2 Problem Formulation In this section, we formally define our R-CPE-MAB model similar to Chen et al. (2014). Suppose we have \(d\) arms, numbered \(1,\ldots,d\). Assume that each arm \(s\in[d]\) is associated with a reward distribution \(\phi_{s}\), where \([d]=\{1,\ldots,d\}\). We assume all reward distributions have \(R\)-sub-Gaussian tails for some known constant \(R>0\). Formally, if \(X\) is a random variable drawn from \(\phi_{s}\) for some \(s\in[d]\), then, for all \(t\in\mathbb{R}\), we have \(\mathbb{E}[\exp(tX-t\mathbb{E}[X])]\leq\exp(R^{2}t^{2}/2)\). It is known that the family of \(R\)-sub-Gaussian tail distributions includes all distributions that are supported on \([0,R]\) and also many unbounded distributions such as Gaussian distributions with variance \(R^{2}\)(Rivasplata, 2012). Let \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{d})^{\top}\) denote the vector of expected rewards, where each element \(\mu_{s}=\mathbb{E}_{X\sim\phi_{s}}[X]\) denotes the expected reward of arm \(s\) and \(\top\) denotes the transpose. We denote the number of times arm \(s\) pulled before round \(t\) by \(T_{s}(t)\), and by \(\mathbf{\hat{\mu}}(t)=(\hat{\mu}_{1}(t),\ldots,\hat{\mu}_{d}(t))^{\top}\) the vector of sample means of each arm in round \(t\). We define the action class \(\mathcal{A}=\{\mathbf{\pi}^{1},\ldots,\mathbf{\pi}^{K}\ |\mathbf{\pi}^{1},\ldots,\mathbf{\pi}^{K} \in\mathbb{R}^{d}\}\) as the set of all actions whose size is \(K\). We assume \(K\) is polynomial in \(d\). Let \(\mathbf{\pi}^{*}=\operatorname*{arg\,max}_{\mathbf{\pi}\in\mathcal{A}}\mathbf{\mu}^{\top} \mathbf{\pi}\) denote the optimal member in the action class \(\mathcal{A}\) which maximizes \(\mathbf{\mu}^{\top}\mathbf{\pi}\). Let \(a^{*}\) be the index of \(\mathbf{\pi}^{*}\), i.e., \(\mathbf{\pi}^{*}=\mathbf{\pi}^{a^{*}}\). We denote the true gap and the estimated gap in time step \(t\) by \(\Delta(i,j)=\mathbf{\mu}^{\top}(\mathbf{\pi}^{i}-\mathbf{\pi}^{j})\) and \(\hat{\Delta}_{t}(i,j)=\mathbf{\hat{\mu}}^{\top}(\mathbf{\pi}^{i}-\mathbf{\pi}^{j})\), respectively. The player's objective is to identify \(\mathbf{\pi}^{*}\) from \(\mathcal{A}\) by playing the following game. At the beginning of the game, the action class \(\mathcal{A}\) is revealed while the reward distributions \(\{\phi_{s}\}_{s\in[d]}\) are unknown to her. Then, the player pulls an arm over a sequence of rounds; in each round \(t\), she pulls an arm \(p_{t}\in[d]\) and observes a reward \(r_{p_{t}}\) sampled from the associated reward distribution \(\phi_{p_{t}}\). The game continues until a certain stopping condition is satisfied. After the game finishes, the player outputs an action \(\mathbf{\pi}_{\mathrm{out}}\in\mathcal{A}\). Let \(a_{\mathrm{out}}\) be the index of \(\mathbf{\pi}_{\mathrm{out}}\), i.e., \(\mathbf{\pi}_{\mathrm{out}}=\mathbf{\pi}^{a_{\mathrm{out}}}\). We consider the \((\epsilon,\delta)\)-best action identification problem (Xu et al., 2018). The problem is to design an algorithm to find an action \(\mathbf{\pi}_{\mathrm{out}}\) which satisfies \[\Pr\left[\mathbf{\mu}^{\top}(\mathbf{\pi}^{*}-\mathbf{\pi}_{\mathrm{out}})\leq\epsilon \right]\geq 1-\delta,\] as fast as possible. We denote by \(\tau\) the round an algorithm terminated. We use the term sample complexity as the round an algorithm terminated. ## 3 The Arm Selection Strategy In this section, we discuss the arm selection strategy qualitatively and quantitatively. We first qualitatively show that applying or naively modifying existing works in CPE-MAB for R-CPE-MAB may not be a good choice. Then, we discuss the arm selection strategy quantitatively by looking at the confidence bound of the estimated gap between actions and propose a new arm selection strategy for R-CPE-MAB. ### Limitation of Existing Works in CPE-MAB Here, we first briefly explain what some of the existing algorithms in CPE-MAB (Chen et al., 2014; Gabillon et al., 2016; Wang and Zhu, 2022) are doing at a higher level. First, in each round \(t\), they output the action \(\mathbf{\tilde{\pi}}(t)\) which maximizes \(\mathbf{\hat{\mu}}^{\top}\mathbf{\pi}\). As we have seen in the example in Figure 1, we denote by \(\hat{S}\) the set that contains all the arms "used" in \(\mathbf{\tilde{\pi}}(t)\). Next, they output another action \(\mathbf{\tilde{\pi}}(t)\), which potentially be the best action by considering a confidence bound on arms. Similarly, we denote by \(\tilde{S}\) the set of arms that contains all the arms "used" in \(\mathbf{\tilde{\pi}}(t)\). Finally, they choose the arm \(s\) that is in \(\hat{S}\oplus\tilde{S}=(\hat{S}\setminus\tilde{S})\cup(\tilde{S}\setminus \tilde{S})\) with the least number of pulls. This means they are choosing an arm \(s\) where \(\tilde{\pi}_{s}(t)\neq\tilde{\pi}_{s}(t)\). They repeat this until some stopping condition for the algorithm is satisfied. \begin{table} \begin{tabular}{c c c c c} \hline \hline & Player’s behavior & Observation & The size of \(\mathcal{A}\) & Characteristic of \(\mathcal{A}\) \\ \hline PE-LB & Play an action \(\mathbf{\pi}^{t}\) & \(\mathbf{\mu}^{\top}\mathbf{\pi}^{t}+\epsilon_{t}\) & Poly(\(d\)) & \(\mathcal{A}\subset\mathbb{R}^{d}\) \\ CPE-F-LB & Play an action \(\mathbf{\pi}^{t}\) & \(\sum_{i:\pi_{i}^{t}=1}r_{i}\) & Exp(\(d\)) & \(\mathcal{A}\subset\{0,1\}^{d}\) \\ CPE-S-LB & Play an action \(\mathbf{\pi}^{t}\) & \(\begin{array}{c}\{r_{i}\ |\ \forall i\in[d],\\ \mathbf{\pi}_{i}^{t}=1\}\end{array}\) & Exp(\(d\)) & \(\mathcal{A}\subset\{0,1\}^{d}\) \\ CPE-MAB & Pull an arm \(i\) & \(r_{i}\) & Exp(\(d\)) & \(\mathcal{A}\subset\{0,1\}^{d}\) \\ R-CPE-MAB & Pull an arm \(i\) & \(r_{i}\) & Poly(\(d\)) & \(\mathcal{A}\subset\mathbb{R}^{d}\) \\ (this work) & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Taxonomy of MAB problems. Poly(\(d\)) and Exp(\(d\)) denotes that \(|\mathcal{A}|\) are polynomial in \(d\) and exponentially large in \(d\), respectively. In R-CPE-MAB, we can no longer think \(\tilde{S}\oplus\tilde{S}=(\tilde{S}\setminus\tilde{S})\cup(\tilde{S}\setminus \tilde{S})\) since we are thinking of \(\mathcal{A}\subset\mathbb{R}^{d}\), and therefore have to think of other arm selection strategies. One naive way to modify their methods to R-CPE-MAB is to choose the arm \(s\) with the least number of times it was pulled among the set of arms \(\{s\in[d]\mid\hat{\pi}_{s}(t)\neq\hat{\pi}_{s}(t)\}\). However, this may not be an efficient strategy to pull arms in R-CPE-MAB. To explain this, suppose we have two actions \(\mathbf{\pi}^{1}=(100,0,0.1)\) and \(\mathbf{\pi}^{2}=(0,100,0.2)\), and the stopping condition for an algorithm is not satisfied. Also, let us assume that we are pretty sure that \(\hat{\mu}_{1}(t)\in(0.9-0.01,0.9+0.01)\), \(\hat{\mu}_{2}(t)\in(0.9-0.01,0.9+0.01)\), and \(\hat{\mu}_{3}(t)\in(0.9-0.01,0.9+0.01)\) using some concentration inequality. Here, although the confidence interval is all the same when we estimate the gap between actions 1 and 2, i.e., \(\mathbf{\mu}^{\top}(\mathbf{\pi}^{1}-\mathbf{\pi}^{2})\), the uncertainty of arms 1 and 2 will be amplified by 100 times where the uncertainty of arm 3 will be amplified by only 0.1 times. This example suggests that we must consider how _important_ a certain arm is to estimate the gaps between actions in R-CPE-MAB, and not simply pull the arm with the least number of times it was pulled among the set of arms \(\{s\in[d]\mid\hat{\pi}_{s}(t)\neq\hat{\pi}_{s}(t)\}\). In Appendix A, we discuss the limitation of other works on CPE-MAB (Du et al., 2021; Chen et al., 2016, 2017), which are close to the above discussion. ### Confidence Bounds and the Arm Selection Strategy In Section 3.1, we have seen that uncertainty of arms will be amplified when we estimate gaps between actions. We discuss this quantitatively in this section. First, we bound the gap of two actions \(\mathbf{\pi}^{k}\) and \(\mathbf{\pi}^{l}\) with a concentration inequality. Define \(\beta_{t}(k,l)\) as \[\beta_{t}(k,l)=R\sqrt{\frac{1}{2}\sum_{s=1}^{d}\frac{(\pi_{s}^{k}-\pi_{s}^{l}) ^{2}}{T_{s}(t)}\log\frac{2K^{2}X_{t}t^{2}}{\delta}}, \tag{1}\] where \(X_{t}=\left(1+\frac{t-1}{d-1}\right)^{d-1}\left(1+\frac{d-1}{t-1}\right)^{t-1}\). Then, we have the following proposition. **Proposition 3.1**.: _Let \(T_{s}(t)\) be the number of times arm \(s\) has been pulled before round \(t\), and \(C_{t}\) be a specific assignment of pulling arms in round \(t\). Then, for any \(t\in\mathbb{N}\), \(C_{t}\), and \(\mathbf{\pi}^{k},\mathbf{\pi}^{l}\in\mathcal{A}\), with probability at least \(1-\delta\), we have_ \[\Pr\left(\left|\Delta(k,l)-\hat{\Delta}_{t}(k,l)\right|\leq\beta_{t}(k,l) \right)\geq 1-\delta. \tag{2}\] We show the proof in Appendix C. If we set \(\delta\) small, Lemma 3.1 shows that the estimated gap between two actions is close to the true gap with high probability. We regard \[\hat{\Delta}_{t}(k,l)+\beta_{t}(k,l) \tag{3}\] as an upper confidence bound of the estimated gap between actions \(\mathbf{\pi}^{k}\) and \(\mathbf{\pi}^{l}\). Assume we want to estimate the gap between two actions \(\mathbf{\pi}^{k}\) and \(\mathbf{\pi}^{l}\). Since we want to estimate it as fast as possible, we want to pull an arm that makes the confidence bound \(\beta_{t}(k,l)\) smallest. Formally, we pull arm \(p_{t}\) as follows: \[p_{t}=\operatorname*{arg\,min}_{u\in[d]}\sum_{s=1}^{d}\frac{(\pi_{s}^{k}-\pi_{ s}^{l})^{2}}{T_{k}(t)+\mathbf{1}[s=u]}, \tag{4}\] where \(\mathbf{1}[\cdot]\) denotes the indicator function. Then, the following proposition holds. **Proposition 3.2**.: \(p_{t}\) _in (4) can be written as follows:_ \[p_{t}=\operatorname*{arg\,max}_{s\in[d]}\frac{(\pi_{s}^{k}-\pi_{s}^{l})^{2}}{T_ {s}(t)(T_{s}(t)+1)}. \tag{5}\] We show the proof in Appendix D. Intuitively speaking, we can say computing (5) is considering the _importance_ of each arm to estimate the gap between \(\mathbf{\pi}^{k}\) and \(\mathbf{\pi}^{l}\). If \((\pi_{s}^{k}-\pi_{s}^{l})^{2}\) is large, that means the uncertainty of arm \(s\) can be amplified largely, and arm \(s\) needs to be pulled many times to reduce the gap between actions \(\mathbf{\pi}^{k}\) and \(\mathbf{\pi}^{l}\). Additionally, the arm selection strategy (5) is equivalent to pulling the arm with the least number of times pulled among \(\{i\in[d]\mid\mathbf{\hat{\pi}}_{i}(t)\neq\mathbf{\tilde{\pi}}_{i}(t)\}\) in CPE-MAB. Therefore, we can see the arm selection strategy (5) as a generalization of the arm selection strategies in Chen et al. (2014); Gabillon et al. (2016); and Wang and Zhu (2022). ``` 1:Input: accuracy \(\epsilon\), confidence level \(\delta\), and an action set \(\mathcal{A}\) 2:Output: action \(\boldsymbol{\pi}_{\mathrm{out}}\in\mathcal{A}\) 3:\(t\gets 1\) 4:for\(s=1\)to\(d\)do 5: Observe \(r_{s}\) 6:\(t\gets t+1\) 7:endfor 8:for\(t=d+1\),...do 9: // Select which gap to examine 10:\((i_{t},j_{t},B(t))\leftarrow\) Select-Ambiguous-Action(\(t\)) (see Algorithm 2) 11: // Check the stopping condition 12:if\(B(t)\leq\epsilon\)then 13:return\(\boldsymbol{\pi}^{i_{t}}\) as the best action \(\boldsymbol{\pi}_{\mathrm{out}}\) 14:endif 15: // Pull an arm based on the gap \(B(t)\) 16: Pull \(p_{t}=\underset{s\in[d]}{\arg\max}\frac{(\pi^{i_{t}}-\pi^{i_{t}})^{2}}{T_{s}(t) (T_{s}(t)+1)}\) 17: Observe \(r_{p_{t}}\), and update the number of pulls: \(T_{p_{t}}(t+1)\gets T_{p_{t}}+1\) and \(T_{e}(t+1)\gets T_{e}(t)\) for all \(e\neq p_{t}\) 18:endfor ``` **Algorithm 1** CombGapE Algorithm ``` 1:procedureSelect-Ambiguous-Action() 2: Compute \(\hat{\mu}_{s}(t)\) for every \(s\in[d]\) 3:\(i_{t}\leftarrow\underset{i\in[K]}{\arg\max}\boldsymbol{\pi}^{i\top}\hat{ \boldsymbol{\mu}}(t)\) 4:\(j_{t}\leftarrow\underset{j\in[K]}{\arg\max}\hat{\Delta}_{t}(j,i_{t})+\beta_{t}( j,i_{t})\) 5:\(B(t)\leftarrow\max_{j\in[K]}\hat{\Delta}_{t}(j,i_{t})+\beta_{t}(j,i_{t})\) 6:return\((i_{t},j_{t},B(t))\) 7:endprocedure ``` **Algorithm 2** Select-Ambiguous-Action(t) ## 4 CombGapE Algorithm and Theoretical Analysis In this section, we introduce our algorithm (Algorithm 1) inspired by Xu et al. (2018). We name it CombGapE (Combinatorial Gap-based Exploration algorithm). Then, we show that the CombGapE algorithm is an \(\epsilon\)-best action identification algorithm and establish the sample complexity of the CombGapE algorithm. Interestingly, there is a key quantity in the sample complexity, which can be seen as a generalization of _width_ in Chen et al. (2014). ### CombGapE Algorithm We show our algorithm CombGapE in Algorithm 1. In each round, it chooses two actions \(\boldsymbol{\pi}^{i_{t}}\) and \(\boldsymbol{\pi}^{j_{t}}\), the action with the largest estimated action and the action whose gap from \(\boldsymbol{\pi}^{i_{t}}\) is the largest. We can say \(\boldsymbol{\pi}^{i_{t}}\) is the action that is most likely to be the best, and \(\boldsymbol{\pi}^{j_{t}}\) is an action that is potentially the best. Then, as we discussed in Section 3, CombGapE pulls the arm \(p_{t}\) that most reduces the confidence bound \(\beta_{t}(i_{t},j_{t})\) (line 16). ### Accuracy and the Sample Complexity Here, we first show that our algorithm is an \(\epsilon\)-best action identification algorithm. Theorem 4.1 shows that given a confidence parameter \(\delta\), Algorithm 1 can identify an \(\epsilon\)-best action with more than probability \(\delta\). **Theorem 4.1**.: _The output of Algorithm 1\(\boldsymbol{\pi}^{a_{\mathrm{out}}}\) satisfies the following condition:_ \[\Pr(\Delta(a^{*},a_{\mathrm{out}})\leq\epsilon)\geq 1-\delta. \tag{6}\] We show the proof in Appendix E. Next, we show an upper bound of the sample complexity of the CombGapE algorithm. In our analysis, we use a key quantity named _Gwidth_ defined as follows: \[\mathrm{Gwidth}(\mathcal{A})=\max_{\mathbf{\pi},\mathbf{\pi}^{\prime}\in\mathcal{A}} \sum_{s=1}^{d}|\pi_{s}-\pi_{s}^{\prime}|. \tag{7}\] This quantity can be seen as a generalization of an upper bound of the quantity _width_ introduced in Chen et al. (2014) (see details in Appendix B). In Theorem 4.2, we show that _GWidth_ appears in the upper bound of the sample complexity, which means _GWidth_ characterizes the difficulty of the problem instance in R-CPE-MAB. **Theorem 4.2**.: _Assume that \(\delta\in(0,1)\), any action class \(\mathcal{A}\in\mathbb{R}^{d}\), and any expected rewards \(\mathbf{\mu}\in\mathbb{R}^{d}\) are given. The reward distribution \(\phi_{s}\) for each arm \(s\in[d]\) is an \(R\)-sub-Gaussian tail distribution. With probability at least \(1-\delta\), the sample complexity \(\tau\) of Algorithm 1 can be bounded as follows:_ \[\tau\leq\max\Biggl{\{}d+\frac{R^{2}d\cdot\mathrm{Gwidth}^{2}}{\max\left\{ \epsilon,\frac{\epsilon+\Delta_{\min}}{2}\right\}^{2}}\log\left(\frac{4 \epsilon K^{2}M^{4}}{\delta(d-1)}\right),d+\frac{R^{2}d\cdot\mathrm{Gwidth}} {\max\left\{\epsilon,\frac{\epsilon+\Delta_{\min}}{2}\right\}}\sqrt{\log \left(\frac{4\epsilon K^{2}d}{\delta}\right)}\Biggr{\}}, \tag{8}\] _where \(M=d+\frac{R\sqrt{d}}{\max\left\{\epsilon,\frac{\epsilon+\Delta_{\min}}{3} \right\}}\cdot\mathrm{Gwidth}\left(\frac{4\epsilon K^{2}}{\delta(d-1)}\right) ^{\frac{1}{4}}\) and \(\Delta_{\min}=\min_{i\in[K],i\neq a^{*}}\Delta(a^{*},i)\)._ We show the proof in Appendix F. For CPE-MAB, the tightest upper bound was introduced in Wang and Zhu (2022). Their upper bound of the sample complexity is \(\mathcal{O}\left(\mathrm{width}\sum_{s=1}^{d}\frac{1}{\Delta_{s}^{2}}\log \left(\frac{1}{\delta}\right)+\mathrm{width}\sum_{s=1}^{d}\frac{1}{\Delta_{s} ^{2}}\log^{2}\left(K\mathrm{width}\sum_{s=1}^{d}\frac{1}{\Delta_{s}^{2}} \right)\right)\). Here, \(\Delta_{s}=\sum_{j\in S^{*}}\mu_{j}-\max_{\mathbf{\pi}\in\left\{\mathbf{\pi}\in \mathcal{A}|\pi_{s}=0\right\}}\sum_{e\in\mathbf{\pi}^{*}}\mu_{e}\) for \(s\) such that \(\pi_{s}=1\), and \(\Delta_{s}=\sum_{j\in S^{*}}\mu_{j}-\max_{\mathbf{\pi}\in\left\{\mathbf{\pi}\in \mathcal{A}|\pi_{i}=1\right\}}\sum_{j}\mu_{j}\) for \(s\) such that \(\pi_{s}=0\), where \(S^{*}=\{e\in[d]\mid\pi_{e}^{*}=1\}\). Note that it depends on \(\mathrm{width}\), not \(\mathrm{width}^{2}\). Therefore, though our upper bound for R-CPE-MAB is \(\mathcal{O}\left(\frac{R^{2}d\mathrm{width}^{2}}{\Delta_{\min}^{2}}\log \left(\frac{1}{\delta}\right)\right)\), it implies that it can be reduced to \(\mathcal{O}\left(\frac{R^{2}d\mathrm{Gwidth}}{\Delta_{\min}^{2}}\log\left( \frac{1}{\delta}\right)\right)\) but this remains as a future work. ### Lower bound Here, we show the lower bound of the sample complexity of R-CPE-MAB. **Theorem 4.3**.: _Fix any action class \(\mathcal{A}\) and any vector \(\mathbf{\mu}\in\mathbb{R}^{d}\). Suppose, for each arm \(s\), the reward distribution \(\phi_{s}\) is given by \(\mathcal{N}(\mu_{s},1)\), where \(\mathcal{N}(\mu,\sigma^{2})\) denotes a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). Fix any \(\delta\in(0,\frac{\epsilon-16}{4})\) and any algorithm that outputs the best action with probability more than \(\delta\). Then, for any \(s\in[d]\), there exist actions \(\mathbf{\pi}\) and \(\mathbf{\pi}^{\prime}\), such that \(s\in\{e\in[d]\mid\pi_{e}\neq\pi_{e}^{\prime}\}\) and_ \[\mathbb{E}\left[\sum_{e\in\{u\in[d]\mid\pi_{u}^{\prime}\neq\pi_{e}^{\prime}\}} T_{e}\right]\geq\frac{\left(\sum_{e=1}^{d}|\pi_{e}-\pi_{e}^{\prime}| \right)^{2}}{32\Delta_{s}^{2}}\log\left(\frac{1}{4\delta}\right), \tag{9}\] _where \(T_{e}\) is the total number of samples of arm \(e\) used by the algorithm and \(\Delta_{s}=\mathbf{\mu}^{\top}\mathbf{\pi}^{*}-\max_{\mathbf{\pi}\in\left\{\mathbf{\pi}\in \mathcal{A}|\pi_{s}^{*}\neq\pi_{s}\right\}}\mathbf{\mu}^{\top}\mathbf{\pi}\)._ The proof is shown in Appendix G. Theorem 4.3 can be seen as a natural generalization of Theorem 6 in Chen et al. (2014), which is an action space-dependent lower bound of CPE-MAB. Theorem 4.3 implies that in the worst case, any algorithm needs \(\mathcal{O}\left(\frac{\mathrm{GWidth}^{2}}{\Delta_{\min}^{2}}\log(\frac{1}{ \delta})\right)\). Again, we can see that \(\mathrm{GWidth}\) is characterizing the difficulty of the problem instance in R-CPE-MAB. ## 5 Experiment In this section, we show two experiments for the CombGapE algorithm. First, we compare the CombGapE algorithm with an existing algorithm in water resource planning (Huang et al., 2018), where \(\mathcal{A}=\mathbb{R}^{d}\). We show the CombGapE algorithm terminates faster than Huang et al. (2018). Next, we show our algorithm can identify the best action in the knapsack problem, where actions have complex combinatorial structures. We compare our algorithm with a _naive_ algorithm that chooses the arm with the least number of pulls among the set \(\{s\in[d]|\pi_{s}^{i_{t}}\neq\pi_{s}^{j_{t}}\}\) as we discussed in Section 3. ### Best action identification for the water resource planning As we mentioned in the introduction, to the best of our knowledge, the only work which can be applied when \(\mathcal{A}=\mathbb{R}^{d}\) is Huang et al. (2018). Since they need an assumption named bi-monotonicity for the implementation of their algorithm, we decided to use the problem of water resource planning which was used in their paper (Huang et al., 2018). In water resource planning, we have \(d\) water resources, and one needs to determine the Biological Oxygen Demand (BOD, a measure of pollution) to be removed from each one of them. Formally, let \(\pi_{s}\) be the pounds of BOD to be removed at source \(s\in[d]\). Then, \[\text{maximize}_{\mathbf{\pi}\in\mathcal{A}} \sum_{s=1}^{d}\mu_{s}\pi_{s}-\sum_{s=1}^{d}f_{s}\pi_{s}\] subject to \[\sum_{i=s}^{d}\pi_{s}\geq b,0\leq\pi_{s}\leq c_{s},\;\forall s\in[ d],\] where \(\mu_{s}\) is the quality response caused by removing one pound of BOD at source \(s\), and \(f_{s}\pi_{s}\) is the cost of removing \(\pi_{s}\) pounds of BOD at source \(s\). Each \(\pi_{s}\) is constrained by \(c_{s}\), the maximum pounds of BOD that can be removed at source \(s\). Additionally, the total pounds of BOD to be removed are required to be larger than a certain threshold \(b\). In this model, the cost coefficient \(f_{s}\) is known, which depends on the cost of oxidation, labor, or facility. On the other hand, the quality response \(\mu_{s}\) is unknown beforehand and therefore needs to be learned from tests at source \(s\). In each time step, the player chooses a source \(s\) and gets an observation of \(\mu_{s}\), which can be regarded as a random variable from an unknown distribution with mean \(\mu_{s}\). The goal is to determine the best allocation \((\pi_{1}^{*},\ldots,\pi_{d}^{*})\) to remove BOD from each source. Here, we discretize \(\{\pi_{s}\}\)'s so that the action class \(\mathcal{A}\) is finite. We set \(d=7\), \(b=18\), \(c_{s}=3\) for all \(s\in[d]\), and let \(\mathbf{\pi}\) be an integer vector. Then, we have 120 actions in total. We assume \(\mu_{s}\in[0,1]\) and \(f_{s}\in[0,1]\) for every \(s\). Since, random variables supported on \([a,b]\) is a \(\frac{(b-a)}{2}\)-sub-Gaussian random variables, we set \(R=\frac{1}{2}\)(Ravikumar et al., 2019). We compared our method with the CPE-CS algorithm proposed by Huang et al. (2018). We show the results in Figure 3. For all \(\epsilon\in\{0.00,0.25,0.50\}\), the CombGapE algorithm successfully identified the strictly best action in all thirty experiments. We can see that the CombGapE identifies the best action faster than the CPE-CS algorithm. Moreover, when \(\epsilon=0.25,0.50\), the CombGapE outperforms significantly compared with the CPE-CS. We can say that this can become a clear advantage when we have some prior knowledge, and are able to set the hyperparameter \(\epsilon\) appropriately, since CPE-CS can only look for the best action. ### Best action identification for the knapsack problem Here, we show that our algorithm can identify the best action in the knapsack problem (Dantzig and Mazur, 2007), where we do not have any assumption on the action space. In the knapsack problem, we have \(d\) items. Each item \(s\in[d]\) has a weight \(w_{s}\) and value \(v_{s}\). Also, there is a knapsack whose capacity is \(W\) in which we put items. Our goal is to maximize the total value of the knapsack not letting the total weight of the items exceed the capacity of the knapsack. Formally, the optimization problem is given as follows: \[\text{maximize}_{\mathbf{\pi}\in\mathcal{A}} \sum_{s=1}^{d}v_{s}\pi_{s}\] subject to \[\sum_{s=1}^{d}\pi_{s}w_{s}\leq W,\] where \(\pi_{s}\) denotes the number of item \(s\) in the knapsack. Here, the weight of each item is known, but the value is unknown, and therefore has to be estimated. In each time step, the player chooses an item \(s\) and gets an observation of value \(v_{s}\), which can be regarded as a random variable from an unknown distribution with mean \(v_{s}\). For our experiment, we generated the weight of each item uniformly from \(\{5,6,\ldots,500\}\). For each item \(s\), we generated \(v_{s}\) as \(v_{s}=w_{s}+x\), where \(x\) is a uniform sample from \([-5,5]\). As the capacity of the knapsack, \(W=10000\). As a prior knowledge of the problem, we assumed that we know each \(v_{s}\) is in \([w_{s}-5,w_{s}+5]\), and used this prior knowledge to generate the action class \(\mathcal{A}\) in the following procedure. We first generated a vector \(\mathbf{v}^{\prime}\) whose \(s\)-th element \(v^{\prime}_{s}\) was uniformly sampled from \([w_{s}-5,w_{s}+5]\), and then solved the knapsack problem with \(v^{\prime}_{s}\) and added the obtained solution \(\mathbf{\pi}\) to \(\mathcal{A}\). We repeated this until \(|\mathcal{A}|=30\) was satisfied. Each time we chose an item \(s\), we observed a value \(v_{s}+x\) where \(x\) is a noise from \(\mathcal{N}(0,1)\). We set \(R=1\). We compared CombGapE with an algorithm whose arm selection strategy is to choose the arm \(s\) with the least number of pulls among the set \(\{e\in[d]|\pi_{e}^{i_{t}}\neq\pi_{e}^{j_{t}}\}\) at each time step \(t\). We call it the _naive_ algorithm. We show the result in Figure 4. Though the _naive_ algorithm does not have any theoretical guarantee on the identification accuracy or on whether it terminates or not, it successfully terminated and identified the best action in all the settings. The CombGapE algorithm also identified the best action in all the settings. However, the sample complexity was quite different between the two. Figure 4 shows that the CombGapE algorithm terminated two to three times faster than the _naive_ algorithm, and in some cases, it terminated six to seven times faster. This shows that the arm selection strategy shown in (5) successfully determines which arm to focus on pulling. ## 6 Conclusion In this paper, we studied the R-CPE-MAB model, which is a combinatorial pure exploration in the stochastic multi-armed bandit setting, where the action class is in a real vector (matrix). We showed that if we assume that the size of the action class is polynomial in \(d\), we can construct an algorithm that is efficient empirically compared to the existing methods. Moreover, we derived an upper bound of the sample complexity of our algorithm and an action class-dependent lower bound of R-CPE-MAB, and showed that there is a key quantity named \(Gwidth\), which characterizes the difficulty of the problem. This \(Gwidth\) is a generalization of the notion _width_ proposed in Chen et al. (2014).
2303.03697
Stylometric Detection of AI-Generated Text in Twitter Timelines
Recent advancements in pre-trained language models have enabled convenient methods for generating human-like text at a large scale. Though these generation capabilities hold great potential for breakthrough applications, it can also be a tool for an adversary to generate misinformation. In particular, social media platforms like Twitter are highly susceptible to AI-generated misinformation. A potential threat scenario is when an adversary hijacks a credible user account and incorporates a natural language generator to generate misinformation. Such threats necessitate automated detectors for AI-generated tweets in a given user's Twitter timeline. However, tweets are inherently short, thus making it difficult for current state-of-the-art pre-trained language model-based detectors to accurately detect at what point the AI starts to generate tweets in a given Twitter timeline. In this paper, we present a novel algorithm using stylometric signals to aid detecting AI-generated tweets. We propose models corresponding to quantifying stylistic changes in human and AI tweets in two related tasks: Task 1 - discriminate between human and AI-generated tweets, and Task 2 - detect if and when an AI starts to generate tweets in a given Twitter timeline. Our extensive experiments demonstrate that the stylometric features are effective in augmenting the state-of-the-art AI-generated text detectors.
Tharindu Kumarage, Joshua Garland, Amrita Bhattacharjee, Kirill Trapeznikov, Scott Ruston, Huan Liu
2023-03-07T07:26:09Z
http://arxiv.org/abs/2303.03697v1
# Stylometric Detection of AI-Generated Text in Twitter Timelines ###### Abstract Recent advancements in pre-trained language models have enabled convenient methods for generating human-like text at a large scale. Though these generation capabilities hold great potential for breakthrough applications, it can also be a tool for an adversary to generate misinformation. In particular, social media platforms like Twitter are highly susceptible to AI-generated misinformation. A potential threat scenario is when an adversary hijacks a credible user account and incorporates a natural language generator to generate misinformation. Such threats necessitate automated detectors for AI-generated tweets in a given user's Twitter timeline. However, tweets are inherently short, thus making it difficult for current state-of-the-art pre-trained language model-based detectors to accurately detect at what point the AI starts to generate tweets in a given Twitter timeline. In this paper, we present a novel algorithm using stylometric signals to aid detecting AI-generated tweets. We propose models corresponding to quantifying stylistic changes in human and AI tweets in two related tasks: Task 1 - discriminate between human and AI-generated tweets, and Task 2 - detect if and when an AI starts to generate tweets in a given Twitter timeline. Our extensive experiments demonstrate that the stylometric features are effective in augmenting the state-of-the-art AI-generated text detectors. Keywords:AI generated text Large language models Twitter Stylometry Misinformation ## 1 Introduction With the recent advances in transformer-based language models, we see tremendous improvements in natural language generation (NLG). Consequently, with the proliferation of pre-trained language models (PLM) such as Grover [29], GPT-2 [23] and GPT-3 [18] the generation of human-like texts by AIs, i.e., AI-generated text, has become easy and achievable at large-scale. A research question that emerges with the advancement of NLG is: can AI-generated text be automatically detected? This is primarily because NLG models can generate grammatically accurate large volumes of text backed by the pre-trained language models, thus making way for potential societal and ethical issues. An adversary could incorporate these models with malicious intent and produce text that could lead to harm and confusion. Some examples such as click-bait headlines [20], deep tweets [5], and AI-generated fake news [29] show the underlying potential threats. Social networks such as Twitter are an ideal playground for adversaries to incorporate such AI text generators to generate misinformation on a large scale. For example, an adversary could deploy bots on social media, equipped with text generators to disseminate AI-generated misinformation or even launch large-scale misinformation campaigns. In this paper, we consider a new threat scenario as shown in Fig. 1, depicting a Twitter timeline of a credible user account and how the author changes from a human (credible user) to an AI (NLG). Here an authentic Twitter account gets hacked by a malicious user who then incorporates an AI text generator to generate misinformation. The severity of these types of malicious human-to-AI author changes is twofold: 1) credible accounts have a vast number of followers, hence a high diffusion rate of misinformation, and 2) compelling human-like AI tweets are generated at an unprecedented pace. Therefore, to identify this threat, it is crucial to have an automatic mechanism for detecting AI-generated tweets on Twitter. Furthermore, detecting the point where the human-to-AI author change occurs would be vital for digital forensics and future threat mitigation. Many approaches exist for automatically detecting AI-generated text in the literature, the most successful of which use PLMs [5, 29], However, incorporating the state-of-the-art (SOTA) PLM-based classifiers for detecting human-to-AI author changes in Twitter timelines is particularly challenging for two reasons: 1) **Input text contains fewer semantic information**. Tweets are inherently short in length, and the classification accuracy of PLMs decreases when the input text length is small and the amount of semantic information is insufficient. 2) **Generating training data for supervised learning**. PLM-based classifiers require sufficient fine-tuning to adjust to the task at hand. The training data in this problem would consist of Twitter timelines which each contain a sequence of human and AI-generated tweets. It is a resource-consuming task to generate such AI-generated Twitter timelines. To address the challenges, we propose a simple yet effective architecture using stylometric features as an auxiliary signal to detect AI tweets in Twitter timelines. Accordingly, we analyze different categories of stylometric features and design a comprehensive set of experiments Figure 1: An hypothetical example where a credible news Twitter account gets hijacked and generates misinformation. to discuss how stylometry augments AI-generated text detection performance across various configurations, e.g., AI-generator size, tweet topic, and others. Furthermore, we propose a simple stylometric feature-based change-point detection architecture to detect if and when a human-to-AI author change occurs in a user's timeline. This methodology consists of few learnable parameters and works well even when only a few training timeline samples exist. To summarize, we study the following two research questions: **RQ1:** When detecting AI-generated tweets from a timeline, can stylometric features improve the performance of SOTA text detectors? **RQ2:** With limited training data, how well can stylometric features detect if and when a human-to-AI author change occurs in a user's Twitter timeline? We evaluate our stylometry architectures on two datasets1: an in-house dataset created to emulate the human-to-AI author change in a user's Twitter timeline and a publicly available dataset, TweepFake [5]. Our results on both datasets empirically show that 1) stylometric features improve existing PLM-based AI-generated text classifiers significantly when the length of the Twitter timeline is small, and 2) stylometric signals help detect when an author change occurs in a Twitter timeline, mainly when there is limited training data. Footnote 1: Our detection code is available at [https://github.com/TSKumarage/Stylo-Det-AI-Gen-Twitter-Timelines.git](https://github.com/TSKumarage/Stylo-Det-AI-Gen-Twitter-Timelines.git) ## 2 Related Work **Bot Detection on Twitter.** There exists a large body of work on bot detection methods on Twitter. Most bot detection methods use user account features (such as follower counts, likes, retweets, etc.) or temporal features such as activity [4, 6, 15, 22]. Unlike standard bot detection methods, our objective is to purely use the raw text of the Tweet sequence and identify whether a language model generates the text in the sequence. Hence, we do not compare our method with bot detection baselines and instead focus on AI-generated text detection work. **AI-Generated Text Detection.** Initial research on generated text detection incorporated techniques such as bag-of-word and tf-idf encoding followed by standard classifiers such as logistic regression, random forest, and SVC [10]. In recent years [29] showed the effect of exposure bias on detecting text generated by large language models. Consequently, the subsequent works used pre-trained language model architectures (BERT, RoBERTa, GPT-2, etc.) as the detector and showed state-of-the-art results in detecting AI-generated text in many domains [8, 19, 21, 24]. Similarly, a finetuned RoBERTa-based detector has also shown significant performance in detecting AI-generated tweets [5]. Few recent works in generated text detection further attempt to extend the PLM-based detectors with new learning paradigms such as Energy-Based Model(EBM) [1] and additional information such as the factual structure and topology [16, 30]. In one of the recent works, the authors incorporated text augmentation to improve the performance of detecting AI-generated text in Twitter [26]. **Stylometry for AI-Generated Text Detection.** It has been shown by Schuster et al. [19] that stylometry has limited usage when trying to discriminate between AI-generated real news and AI-generated fake news. In contrast, our goal in this paper is to use stylometry to discriminate between human-written text and AI-generated text. To our knowledge, this is the first work incorporating stylometry to discriminate AI-generated text from human-written text. However, stylometry is a well-established tool used in author attribution and verification in many domains, including Twitter [2]. For detecting style changes within a document, different stylistic cues are leveraged in order to identify a given text's authorship and find author changes in multi-authored documents [9, 28]. Our work differs from these in that, while they detect human author changes within multi-authored documents, we measure human-to-AI author changes within a given Twitter timeline. However, the underlying hypothesis is similar. PAN [28] is a series of scientific events and shared tasks on digital text forensics and stylometry. In past years, PAN has examined multiple applications of stylometry for the detection of style changes in multi-authored documents. A couple of notable examples are a BERT-based model [11], and an ensemble approach which incorporates stylometric features [25]. We use these two models as baselines in our study. ## 3 Preliminaries To address the research questions in our study, we formulate the following two tasks: 1) Human vs. AI Tweet Detection and 2) Human-to-AI Author Change Detection and Localization. We formally define these tasks as follows: **1) Human- vs. AI-Authored Tweet detection.** In this task, we want to detect whether a sequence of Tweets was generated by a language model or written by a human author. Formally, our input is a Tweet sequence \(\tau^{u}=\{t_{1}^{u},t_{2}^{u},...,t_{N}^{u}\}\), consisting of a chronologically ordered set of \(N\) tweets from a specific user \(u\)'s timeline. Given this input we want to learn a detector function \(f_{\theta}\) such that, \(f_{\theta}(\tau^{u})\rightarrow\{1,0\}\); where 1 indicates that each tweet in \(\tau^{u}\) is AI-generated and 0 means that each tweet is human written. Note that for \(N=1\), this task is simply Tweet classification. **2) Human to AI Author Change Detection and Localization.** In this task, given that the input is a _mixed_ Tweet sequence, i.e., some Tweets are AI-generated while some are human-written, and assuming that there is only one point in the sequence where such an author change occurs from a human to an AI (i.e., a PLM-based text generator), we want to localize the position/tweet where this change occurs. Formally, similar to the previous task, our input is a chronologically ordered set of \(N\) Tweets from a user \(u\)'s timeline: \(\tau^{u}=\{t_{1}^{u},t_{2}^{u},...,t_{N}^{u}\}\). Given this timeline as input, we want to learn a function \(g_{\theta}\) such that \(g_{\theta}(\tau^{u})\to j\); where \(j\in[1,N]\) is the index of the Tweet in the ordered set \(\tau^{u}\) where the author change took place. ## 4 Methodology ### Stylometric Features The stylometric features aim to indicate different stylistic signals from a given piece of text. We follow a previous work on stylometry for detecting writing style changes in literary texts [9]. In this analysis we use three categories of features: 1) **Phraseology** - features which quantify how the author organizes words and phrases when creating a piece of text (e.g., avg. word, sent. count, etc.), 2) **Punctuation** - features to quantify how the author utilizes different punctuation (e.g., avg. unique punctuation count) and 3) **Linguistic Diversity** - features to quantify how the author uses different words in the writing (e.g., richness and readability scores). Table 1 summarizes the complete set of features we used under each of the three stylometric feature categories. The phraseology and punctuation feature sets are intuitive in their calculation. However, we also incorporated two complex features for linguistic diversity: _lexical richness_ and _readability_. We measure the richness of a given piece of text by calculating the moving average type-token ratio (MTTR) metric [3]. MTTR incorporates the average frequency of the unique words in a fixed-size moving window (sequence of words) to measure the lexical diversity. For readability, we use the well-established Flesch Reading Ease metric [14], that assigns a score in-between 0-100 for the readability of a given text input. ### Employing Stylometry in Human- vs. AI-Authored Tweet detection We follow a similar setting as in current literature where AI-generated text detection is usually handled as a binary classification task where the labels are 0 and 1 ('human written' and 'AI generated') [12]. In our approach, we incorporate stylometric features as an auxiliary signal to augment the current SOTA detectors. As shown in Fig. 1(a), our proposed fusion network joins the semantic embedding power of PLM with stylometric features. For each set of input Tweets, we calculate normalized stylometric features and extract the PLM-based text embedding. Let's denote stylometric features as \(s_{K}\) where \(K\) is the number of features we have. From the last hidden layer \begin{table} \begin{tabular}{l|l} \hline Stylometry Analysis & Features \\ \hline Phraseology & word count, sentence count, \\ & paragraph count, mean and stdev of word count per sentence, mean and stdev of word count per paragraph, mean and stdev of sentence count per paragraph \\ \hline Punctuation & total punctuation count, mean count of special punctuation (!, ’,,,, ;,?, ”, -, \& @, \#) \\ \hline Linguistic Diversity & lexical richness, readability \\ \hline \end{tabular} \end{table} Table 1: Different stylometric feature categories and corresponding feature sets of the LM, denoted by \(h\), we extract the vector corresponding to the CLS token (\(h_{e}^{CLS}\)) as the text representation (\(e\) is the embedding size). After concatenating the two vectors \(s_{K}\), \(h_{e}^{CLS}\), we pass them through the reduce network. Reduce network consists of \(i\) fully-connected layers4 that learn the function \(r_{\theta}\), reducing the combination of \(s_{K}\) and \(h_{e}^{CLS}\) to \(r\), where \(r\) is the reduced vector. Finally, this reduced representation vector \(r\) is passed to the classification network (combination of \(j\) fully-connected layers followed by a softmax layer) to produce the final classification probability, \(p_{\theta}(r)\to p_{L}\). Here \(L\) is the label of the classification task. The complete fusion network is trained via cross-entropy loss. Footnote 4: Here \(i\) (and \(j\)) are tunable hyper-parameters, we found that \(i=2\) and \(j=2\) provided the best results. ### Employing Stylometry in Human-to-AI Author Change Detection and Localization For the task of human to AI author change detection and localization, we hypothesize that when the author changes from human to AI in a Twitter timeline, there will most likely be a significant change in the style of the text, which should leave a segmentation in the stylometry signal. Therefore, we propose to incorporate change point detection on stylometric features to detect if there is an author change. First, we briefly describe the task of change-point detection. **Change Point Detection.** In time series analysis, a change point is defined as the point in time at which an abrupt change or statistical variation occurs within a time series. Such an abrupt change may indicate a state transition in the system. A variety of summary statistics are used to locate the existence of potential change points in a time series, e.g., changes in the mean, standard deviation, or local trend slopes [27]. In this work, we used the Pruned Exact Linear Time (PELT) algorithm [13] for detecting change points in our data. Out Figure 2: Proposed stylometry-based architectures of the many change point detection algorithms, we chose PELT as it performs well and has excellent computational efficiency. **Change Point in Stylometry Signal.** As shown in Fig. 2b, we first extract a Twitter timeline's stylometry matrix. Here, a timeline consists of \(N\) tweets, and the number of stylometric features we use is \(K\). This computation results in \(K\) stylometric time series of length \(N\). For each of these \(K\) time series, we run a PELT analysis to determine if there exists any change points. Finally, if \(\gamma\) percent of the \(K\) stylometric features agree that there is a change point, then we say that there exists an author change within the given timeline. The percentage value \(\gamma\) is a hyper-parameter and we call it the _change point agreement threshold_. We define the localization point as the most agreed upon change point index among the above mentioned \(\gamma\) percent of the stylometric features. If there is no agreement between features, then a localization point is chosen at random among those identified by the stylometric features. For simplicity, we will call this overall methodology "_stylometric change point agreement (StyloCPA)_" in the coming sections. ## 5 Experiments We conducted comprehensive experiments under both tasks to explore the effectiveness of the proposed models. ### Datasets **In-House Dataset**: The design of our research study required us to construct sequences or "timelines" of human- and AI-authored tweets. For this, we needed a collection of human-authored tweets and a collection of AI-generated tweets. For a collection of human-authored tweets, we used two publicly available datasets on specific topics (anti-vaccine5 and climate change6). We also collected tweets on Covid-19 using the Twitter API. Note that, while of course this collection process may _potentially_ introduce a few AI-generated tweets into our human-tweet collection, we do not expect these few tweets to be significant enough to skew the results of such a large dataset. For a collection of AI-authored tweets, we generated tweets with the huggingface7 implementation of gpt2 [18], gpt2-medium, gpt2-large and EleutherAI-gpt-neo-1.3B [7]. We fine-tuned these PLMs using the human-authored tweets we collected so that the generated tweets matched the style and topic of tweets in the human-authored sample. During the finetuning, we kept a separate validation set to measure perplexity and perform early stopping to ensure the model's generation quality. As a secondary measure, we also conducted spot-checking on generated tweets to observe whether the generated tweets were of good quality. With these tweet collections, we were able to construct synthetic timelines for our analysis. To build a human-authored timeline, we queried our collection of human tweets for \(N\) tweets authored by a single user8. This set of \(N\) tweets was then defined as a human-authored timeline. Similarly, to build a AI-generated timeline, we sampled \(N\) of a given NLG's tweets. For our analysis, we varied timeline lengths (\(N\in\{1,5,10,20\}\)) to understand its affect on performance. Using the process outlined above, for each \(N\), we constructed \(M\) timelines, where \(M=5000/N\). While the number of timelines vary for each \(N\), the volume of semantic information is held constant across various \(N\). For the change point detection and localization analysis, we needed "mixed" timelines, i.e., a sequence of tweets where we see a human-to-AI author change. To construct each mixed timeline \(N-\ell\) human-authored tweets (on the same topic) were sampled as above and \(\ell\) AI-generated tweets (fine-tuned on the same topic) were concatenated. For the localization results reported here we fixed \(N=25\) and varied \(\ell\in[1,N-1]\). We repeated this process to obtain 250 mixed timelines. We will assist in reproducing our dataset as follows: 1) release all the tweet-ids (or the source of the tweet-ids) used to extract the tweet content, and 2) outline the steps of how we generate the AI-generated tweets 9. Footnote 8: If \(N\) tweets were not available from a single user we instead collected tweets from multiple users. Note, that a single user’s authorship is not a requirement for our analysis but helps with consistent style when possible. Footnote 9: The data generation code is available at [https://github.com/stresearch/machine-gen-twitter.git](https://github.com/stresearch/machine-gen-twitter.git) **TweepFake**: As a point of comparison we also applied our approach to the public TweepFake dataset [5] which was designed for Human- v. AI-authored tweet detection. For more information about this dataset and its construction see [5]. Note that analysis of this dataset is comparable to the in-house dataset with \(N=1\). ### Experimental Settings Since fine-tuned RoBERTa models are known to perform well for generated text detection [5, 24], we chose to use RoBERTa as the language model for our stylometry fusion architecture in the task of _Human- vs. AI-authored tweet detection_. RoBERTa was fine-tuned on the training dataset before extracting the embeddings for the proposed fusion model. We decided the number of training epochs based on an early stopping criterion where the validation loss was calculated on a 5% holdout set. During inference, for TweepFake and the in-house dataset (with \(N=1\)), the input to the model is an individual tweet. However, for the cases where timeline length \(N>1\), we append all the tweets in a timeline with newline separators to create a single input text. For the task of _human to AI author change detection and localization_, we used the StyloCPA model described in the Methodology section. In order to select the agreement threshold \(\gamma\), we performed grid search over a range of \(\gamma\) values and found that \(\gamma=0.15\) resulted in the best overall localization accuracy. ### Baselines for Comparison For the task of _human- vs. AI-authored tweet detection_, we use the following two categories of baselines. **Naive Feature-based**: For a naive baseline, we combine a feature extraction method with a classifier, without performing any finetuning. For the feature extraction we used bag-of-words (BOW), word2vec (w2v) [17], BERT and RoBERTa. We then used a classifier on top of the extracted features. While we experimented with xgboost, logistic regression, and random forest classifiers, for brevity, we only report the results associated with xgboost in Table 2 as this was the top performer. **Fine-tuned LM based**: Here, we follow previous works [5, 24] in AI generated text detection and use LM-based classifiers, viz., BERT and RoBERTa (fine-tuned on the training data) as the baselines. Similarly for the _human to AI author change detection and localization_ task, we use the following two categories of baselines: **Fine-tuned LM based**: For this task, for a given timeline, we calculated the classification probability for each tweet i.e, the probability that a tweet is generated by a human or AI using the top performing LM-based classifiers from the previous task. We then used the change point detection algorithm discussed in the Methodology section to detect if there is a change in the detection probability time series. **Style change classifiers**: We used the top two models from the PAN "style change detection" task [28]; 1) PAN_BERT [11]: BERT model on stylometry signals, 2) PAN_Stack_En [25]: stacked ensemble model based on BERT embedding. These PAN style change detection models identify author changes within a multi-author document. They assume that an author change only occurs at the paragraph level. In the analysis reported here, when working with the PAN models we first converted each tweet timeline into a document where each paragraph is a single tweet from the timeline. ### Experimental Results Table 2 summarizes the results of the task _Human- vs. AI-authored tweet detection_. It is evident that the stylometric fusion seems to augment the performance of detecting AI-generated tweets by a significant margin. By looking at the naive classifiers, standalone stylometric features are a good signal in discriminating AI-generated tweets. In particular, stylometric features outperform BOW and w2v embeddings. However, stylometric features are not as powerful as the pre-trained LM embeddings for this task, which is intuitive given the high volume of semantic information retained by these PLMs at the pre-training stage. The results on the TweepFake dataset further confirm the claims mentioned above. As seen in the rightmost column of Table 2, our stylometry LM fusion model outperforms the current SOTA RoBERTa baseline on TweepFake. Overall, the models tend to perform better on TweepFake compared to our in-house dataset. One possible explanation for this difference would be that our dataset was generated purely with SOTA NLG models, in contrast, TweepFake used a mix of primitive and SOTA generators. This may result in our AI-generated tweets being more realistic and thus harder to detect. In Table 2 we also present how the models perform on different timeline lengths (\(N\)). When the number of tweets in the timeline decreases, the semantic information that resides in a given timeline also decreases, therefore, making it difficult for the classifiers to discriminate AI tweets from human tweets. However, when the timeline is small, we see an accuracy gain from stylometric fusion. This may suggest that stylometric signals help compensate for performance loss due to low semantic information. Fig. 3 shows our results on the human to AI author change detection task as a function of different training set sizes. We see that the proposed StyloCPA model performs well compared to most baselines. In fact, when the number of training samples is small (50), it has the best performance across all the models. This is rather impressive because unlike fine-tuned PLM-based detectors, StyloCPA has few learnable parameters and performs well with limited training samples. Table 3 shows the localization (i.e., detecting the time of an author change) results for different window sizes (i.e, true positive occurs when the predicted change point is within a window of \(\pm W\) points from the actual change point). Our StyloCPA has the best performance compared to PLM-based detectors. In \begin{table} \begin{tabular}{l|l|l|l|l|l} \hline Dataset \(\rightarrow\) & \multicolumn{4}{|c|}{In-House} & \multicolumn{1}{c}{TweepFake} \\ \hline Model \(\downarrow\) & \(N=1\) & \(N=5\) & \(N=10\) & \(N=20\) & \\ \hline XGB\_BOW & 0.718 & 0.819 & 0.879 & 0.951 & 0.792 \\ \hline XGB\_W2V & 0.732 & 0.873 & 0.911 & 0.963 & 0.845 \\ \hline XGB\_Stylo (ours) & 0.771 & 0.891 & 0.909 & 0.958 & 0.847 \\ \hline XGB\_BERT\_EMB & 0.796 & 0.902 & 0.911 & 0.972 & 0.853 \\ \hline XGB\_RoBERT\_EMB & 0.798 & 0.910 & 0.913 & 0.974 & 0.857 \\ \hline BERT\_FT & 0.802 & 0.913 & 0.919 & 0.979 & 0.891 \\ \hline RoBERTa\_FT & 0.807 & 0.919 & 0.927 & 0.981 & 0.896 \\ \hline RoBERTa\_FT\_Stylo (ours) & **0.875** & **0.942** & **0.961** & **0.992** & **0.911** \\ \hline \end{tabular} \end{table} Table 2: Proposed stylometry fusion model performance (accuracy) on Human- vs. AI-Authored Tweet detection. Figure 3: Accuracy in detecting mixed timelines as a function of training set size. the PLM-based detectors, the error in detecting AI vs. human tweets is propagated into the localization task. Therefore, it cannot precisely pinpoint the change; yet it would be in a close region. We see this by observing the increase in accuracies when the window size \(W\) increases. However, in StyloCPA, pinpointing the author change is more feasible and accurate, given that it detects an author change based on an agreement between all the stylometric features. ### Further Analysis Here we further study how different variations in data, generators, and stylometry features affect the proposed models. Please note that we use the label T1 for human- vs. AI-authored tweet detection and T2 for human-to-AI author change detection and localization in the below section. **Does Topic Change Affect Performance?** As seen in Figure 3(a), we do not see a significant change in results when the topic changes. All the stylometry features incorporated in our method are intended to identify style changes in tweets. Though the topic of a tweet changes, the writing style attributes of a given author is relatively invariant. Consequently, we would not see a significant difference in the performance. **Are Bigger Generators Harder to Detect?** Figure 3(b) shows the performance of the stylometry-LM fused detector across multiple generators. As expected, we see a slight decrease in performance when the size of the generator increases (large generators capable of mimicking human writing style well). **Which Stylometry Features Were The Most Important?** Figure 3(c) shows each stylometry category's aggregated average importance score for T1 and T2 across all the timeline sizes. As we see, punctuation and phraseology features are the most important in contrast to the linguistic features. This maybe be because the linguistic features require long text sequences to present a more accurate score. Therefore, the readability and diversity scores we extract might not be a good signal for detecting AI-generated tweets. This remark is further evident by the increased lexical feature importance from T1 to T2. T2 has larger timelines (\(N=25\)) compared to the T1 timelines (\(N\in\{1,10,20\}\)). \begin{table} \begin{tabular}{l|l|l|l} \hline Model & \(W=0\) & \(W=1\) & \(W=2\) \\ \hline StyloCPA & **0.822** & **0.871** & **0.892** \\ \hline RoBERTa & 0.745 & 0.824 & 0.853 \\ \hline RoBERTa\_Stylo & 0.795 & 0.865 & 0.889 \\ \hline PAN\_Stack\_En & 0.672 & 0.752 & 0.794 \\ \hline PAN\_BERT & 0.761 & 0.843 & 0.862 \\ \hline \end{tabular} \end{table} Table 3: Performance on detecting the change-point. ## 6 Conclusion In this paper, we studied the novel application of incorporating stylometry to quantify stylistic changes in AI-generated text in Twitter timelines. We proposed two simple architectures to utilize three categories of stylometric features towards 1) discriminating between human-written and AI-generated tweets and 2) detecting if and when an AI starts to generate tweets in a given Twitter timeline. We created an in-house dataset to emulate these two tasks. A comprehensive set of experiments on the in-house data and an existing benchmark dataset shows that the proposed architectures perform well in augmenting the current PLM-based AI-generated text detector capabilities. For future work, it would be interesting to see how capable stylistic signals are towards attributing a given AI tweet to a corresponding generator. ## 7 Acknowledgement This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2306.06131
Pattern Synthesizing of the Concentric Ring Arrays Using Recursive Least Square Method
This letter presents a recursive technique to synthesize the array factor (AF) of a concentric ring array. In this method, first, the problem is modeled using the traditional least square method (LSM). In the second step, a recursive technique is applied to the defined problem. It is shown that for many practical ring arrays, only a few iteration number is sufficient to obtain a solution with high accuracy. To evaluate the performance and benefits of the proposed method, various concentric ring arrays with different array factor, including equi-ripple pattern, pattern with deep nulls, flat-top array factor, are examined. It is shown that the introduced procedure is a very noble candidate for this purpose.
Atefe Akbari-Bardaskan
2023-06-08T14:42:41Z
http://arxiv.org/abs/2306.06131v1
# Pattern Synthesizing of the Concentric Ring Arrays Using Recursive Least Square Method ###### Abstract This letter presents a recursive technique to synthesize the array factor (AF) of a concentric ring array. In this method, first, the problem is modeled using the traditional least square method (LSM). In the second step, a recursive technique is applied to the defined problem. It is shown that for many practical ring arrays, only a few iteration number is sufficient to obtain a solution with high accuracy. To evaluate the performance and benefits of the proposed method, various concentric ring arrays with different array factor, including equi-ripple pattern, pattern with deep nulls, flat-top array factor, are examined. It is shown that the introduced procedure is a very noble candidate for this purpose. Array Factor, Concentric Ring Array, Recursive Least Square Method. ## I Introduction Electrical engineering, especially in the communication field, single element antenna is widely utilized for numerous applications [1-6]. A single element antenna is simple, cheap, and low profile. Although a single element antenna it is a very good choice, especially for the industrial projects, but its gain is typically low. To increase the gain, several single element antennas should be arranged in a linear, planar, or ring geometry. This arrangement is called the antenna array. An array of single element antennas are extensively used in the various applications such as 5G/6G networks, seekers, and electromagnetic compatibility instruments [7-10]. All of the antenna arrays have high gain. Also, the radiation pattern of them is easily controlled by changing the magnitude and phase of excitation coefficients of each elements. To control the radiation pattern of an array, a beam-forming network (BFN) is necessary. Additionally, the beam-forming network can control the side lobe level (SLL) of the radiation pattern, increase or decrease the half-power beam-width (HPBW), directivity, etc. However, it is not easy, and there are several challenge for this purpose [11-14]. So far, several types of beam-forming networks, including BFN based on the substrate integrated waveguide (SIW), microstrip BFN, BFN with non-uniform profile, BFN on a perforated substrate, gap-waveguide BFN, and coplanar BFN [15-22]. The total radiation pattern of a linear, planar and ring arrays is shaped by picking the appropriate values of the magnetic and phase of array weights. Hereafter, synthesizing the array factor is a chief contest in electrical engineering, especially in antenna engineering. Nowadays, analytical, iterative, and algorithm-based tactics are announced for this problem, which some of them can moderate the number of array components [23-28]. Similar to the linear and planar arrays, concentric ring arrays are employed in several applications such as radars and satellites. Concentric ring arrays have a low volume occupation. Also, the beam-forming networks of them is simpler than the planar arrays. So far, a smaller number of techniques are presented for synthesizing the radiation pattern of a concentric ring array. In [29] and [30], the algorithm-based approaches are hired to find the excitation coefficients of the array components of a non-uniform circular arrays. In these works, it is tried to suppress the maximum side lobe levels of the array factor. In [31], a new method is proposed for this type of array, which reduce the number of array rings by the simple try-error technique. In this work, conventional least square method, Richardson technique, and QR factorization are simultaneously used. The synthesis of the array factor of a concentric ring array is a complicated problem. To this end, evolutionary techniques such as Genetic Algorithm (GA), Particle Swarm Optimization (PSO) can also be employed. These algorithms offer a solution with high accuracy. However, the computational cost of these algorithms is very high. This can be a serious problem, especially for the large concentric arrays. In this letter, and in the first step, the array factor of a concentric ring array is defined. In the second step, the traditional least square method is used to establish a system of linear equations. It is shown that using the LSM method in the traditional formation leads to a solution with low accuracy. So, a recursive technique have to be employed. For practical arrays, only a few iteration is sufficient to reach an acceptable accuracy. To assess the advantages of the proposed recursive method, several concentric arrays are considered. Numerical results approve that the proposed recursive techniques is a promising method. In the next section, theory of the method will be shown. In the following, numerical results will report. The last section includes some conclusions. ## II Mathematical Theory Figure (1) shows the geometry of a concentric ring array. In a concentric ring array, the radius of all rings are not small. So, the array factor of it is independent of the azimuth angle in the spherical coordinate. If \(r_{n}\), \(N_{r}\), \(I_{n}\) and \(N_{n}\) represent the radius of ring \(n\), the number of rings, elements weights for ring \(n\) and the number of elements in ring \(n\), respectively, the array factor is defined as follows [31]. \[F=I_{{}_{0}}+\sum_{n=1}^{N_{r}}I_{{}_{N}}J_{{}_{0}}\left(kr_{{}_{2}}u\right) \tag{1}\] \[u=\sin\theta \tag{2}\] \[N_{{}_{*}}=\text{round}\Bigg{(}\frac{2\pi r_{{}_{*}}}{d_{{}_{*}}}\Bigg{)} \tag{3}\] \[d_{{}_{*}}=2r_{{}_{*}}\sin\left(\pi/N_{{}_{*}}\right) \tag{4}\] where \(\theta\), \(I_{0}\) and \(d_{{}_{*}}\) are the elevation angle in spherical coordinate, the weight of center element, and the arc distance between adjacent elements, respectively. It should be noted that equation (1) is used to calculate the array factor of a concentric ring array with a very good approximation. The array factor can be rewritten from its samples [32]. According to the Nyquist theorem, only \(M\) samples is sufficient. So, we have. \[F_{{}_{r}}=\sum_{{}_{m=1}}^{{}_{M}}F\left(u_{{}_{m}}\right)S\left(u-u_{{}_{m}}\right) \tag{5}\] In above equation, \(F_{{}_{r}}\) and \(S(.)\) are the rewritten array factor and the sampling function as follows. \[S\left(u\right)=\frac{\sin\left(Mu/2\right)}{M\sin\left(u/2\right)} \tag{6}\] The smallest value of \(M\) is determined as. \[M\geq\frac{4\left(N_{{}_{r}}-1\right)\left|r_{{}_{m=1}}-r_{{}_{*}}\right|}{\lambda} \tag{7}\] where \(\lambda\) is the free space wave-number. Using the sampling function, a system of linear equation is established as follows. \[\mathbf{A}_{{}_{M}\left({}_{M}\left({}_{M}\right)\right)\downarrow}= \mathbf{B}_{{}_{M}\downarrow} \tag{8}\] \[\mathbf{A}=\left[A_{{}_{m}}\right]\to A_{{}_{m}}=\begin{cases}N_{{}_{r}}J_{{}_ {0}}\left(kr_{{}_{*}}u_{{}_{m}}\right)&n=1,...,N_{{}_{r}}\\ 1&n=N_{{}_{r}}+1\end{cases} \tag{9}\] \[\mathbf{B}=\left[F_{{}_{r}}\left(u_{{}_{m}}\right)\right];\ \ \ m=1,\ 2,\...,\ M \tag{10}\] \[\mathbf{I}=\left[I_{{}_{0}}\quad I_{{}_{1}}\quad\cdots\quad I_{{}_{N}}\right] ^{T} \tag{11}\] In above, \(\mathbf{A}\) is the coefficient matrix of the system of equation, \(\mathbf{B}\) is the vector including the array factor samples, and \(\mathbf{I}\) is a column vector includes the complex excitation coefficients of array. According to the traditional least square method, the solution is found as follows. \[\mathbf{I}=\left(\mathbf{A}^{T}\mathbf{A}\right)^{{}^{-1}}\mathbf{A}^{{}^{ \prime}}\mathbf{B} \tag{12}\] In the traditional LSM, cost of all equations is equal. So, this issue leads to the self-biasing of the problem. As a results, the accuracy of the solution will decrease. To overcome that, recursive technique is proposed. It is assumed that in the first stage, only \(M\) samples are employed to initial guess of the solution. However, in the recursive least square method, \(M_{0}\) samples should be regarded. Our studies show that the smallest value of \(M_{0}\) is determined as. \[M_{{}_{0}}=2M\geq\frac{8\left(N_{{}_{r}}-1\right)\left|r_{{}_{m=1}}-r_{{}_{*} }\right|}{\lambda} \tag{13}\] The previous equations can be rewritten as follows. \[\mathbf{A}_{{}_{m=1}}=\begin{bmatrix}\mathbf{A}_{{}_{m}}\\ \mathbf{a}_{{}_{m=1}}^{{}_{T}}\end{bmatrix},\ m=1,\ 2,\...,\ \left(M_{{}_{0}}-M\right) \tag{14}\] \[\mathbf{a}_{{}_{m=1}}=\begin{bmatrix}a_{{}_{m=1}}\end{bmatrix}\rightarrow \tag{15}\] \[a_{{}_{m=1}}=\begin{bmatrix}N_{{}_{r}}J_{{}_{0}}\left(kr_{{}_{*}}u_{{}_{m=1}} \right)&n=1,...,N_{{}_{r}}\\ 1&n=N_{{}_{r}}+1\end{bmatrix} \tag{16}\] \[\mathbf{B}_{{}_{m=1}}=\begin{bmatrix}\mathbf{B}&\mathbf{B}_{{}_{m}}\end{bmatrix} ^{T} \tag{17}\] \[\mathbf{B}_{{}_{m}}=F_{{}_{r}}\left(u_{{}_{m=1}}\right) \tag{18}\] Finally, the excitation coefficients are calculated in a recursive manner as the following equations. \[\mathbf{I}_{{}_{m=1}}=\mathbf{I}_{{}_{m}}+\mathbf{P}_{{}_{m=1}}\mathbf{a}_{{}_ {m=1}}\left\{\mathbf{B}_{{}_{m=1}}-\mathbf{a}_{{}_{m=1}}^{{}_{T}}\mathbf{I}_{{}_ {m}}\right\} \tag{19}\] \[\mathbf{P}_{{}_{m=1}}=\begin{pmatrix}\mathbf{A}_{{}_{m}}^{T}\mathbf{A}_{{}_{m =1}}^{{}_{1}}\end{pmatrix}^{T} \tag{20}\] \[\mathbf{K}_{{}_{m=1}}=\mathbf{P}_{{}_{m}}\mathbf{a}_{{}_{m=1}}\left(\mathbf{a}_ {{}_{m=1}}^{{}_{T}}\mathbf{P}_{{}_{m}}\mathbf{a}_{{}_{m=1}}+\mathbf{1}\right)^{ {}^{-1}} \tag{21}\] The studied shown that only three iterations is sufficient. To reduce the number of rings, the similar procedure shown in [31] can be considered. ## III Numerical Examples and Discussion In the following, various concentric ring arrays with different array factor are examined. To this end, MATLAB-based program is written. ### _Flat-Top Array Factor_ A flat-top array factor is considered as the first example. This pattern has non-zero values at -0.4\(\pm\)\(\pm\)\(\pm\)\(0.4\). During the synthesis process, it is assumed \(N_{{}_{r}}\)=9, \(r_{{}_{r}}\)=\(\pm\)\(\lambda/2\) and \(N_{{}_{i}}\)=round(\(2\pi n\)). Obtained two-dimensional array factor and the target array factor are plotted in Figure (2). It is seen that the accuracy is acceptable. The obtained ring weights are displayed in Figure (3). Also, Figure (4) shows the three-dimensional array factor. The array factor is symmetric. Fig. 1: The geometry of a concentric ring array. ### _Difference Array Factor_ In the second example, a difference array factor with maximum side lobe level about -25 dB and \(N_{r}\)=11 is regarded. After applying the synthesis process, the obtained two-dimensional array factor and the target array factor are depicted in Figure (5). It is seen that the accuracy is reasonable. The obtained ring weights are displayed in Figure (6). Also, Figure (7) shows the three-dimensional array factor. Similar to previous example, the array factor is symmetric. ### _Array Factor with Deep Nulls_ An equi-ripple array factor with SLL=-16 dB, two -40 dB deep nulls, and \(N_{r}\)=14 is considered as the fourth example. The obtained two-dimensional array factor and the target array factor are depicted in Figure (11). It is seen that the accuracy is good. The obtained ring weights are displayed in Figure (12). Also, Figure (13) shows the three-dimensional array factor.
2305.03519
Leveraging BERT Language Model for Arabic Long Document Classification
Given the number of Arabic speakers worldwide and the notably large amount of content in the web today in some fields such as law, medicine, or even news, documents of considerable length are produced regularly. Classifying those documents using traditional learning models is often impractical since extended length of the documents increases computational requirements to an unsustainable level. Thus, it is necessary to customize these models specifically for long textual documents. In this paper we propose two simple but effective models to classify long length Arabic documents. We also fine-tune two different models-namely, Longformer and RoBERT, for the same task and compare their results to our models. Both of our models outperform the Longformer and RoBERT in this task over two different datasets.
Muhammad AL-Qurishi
2023-05-04T13:56:32Z
http://arxiv.org/abs/2305.03519v1
# Leveraging BERT Language Model for Arabic Long Document Classification ###### Abstract Given the number of Arabic speakers worldwide and the notably large amount of content in the web today in some fields such as law, medicine, or even news, documents of considerable length are produced regularly. Classifying those documents using traditional learning models is often impractical since extended length of the documents increases computational requirements to an unsustainable level. Thus, it is necessary to customize these models specifically for long textual documents. In this paper we propose two simple but effective models to classify long length Arabic documents. We also fine-tune two different models-namely, Longformer and RoBERT, for the same task and compare their results to our models. Both of our models outperform the Longformer and RoBERT in this task over two different datasets. ## Introduction A large portion of textual content that requires automated processing is in the form of long documents. In some domains such as legal or medical, long documents are the standard. This severely restricts the possibilities for practical use of the most advanced Transformer models for text classification and other linguistic tasks [13]. For example, models such as BERT [1] have significantly improved the accuracy of automated NLP tasks, but their usefulness is limited to relatively short text sequences [10] due to the fact that their complexity increases geometrically. Modifying BERT in such a way to disassociate sequence length from computing complexity would remove this obstacle and bring immediate benefits to numerous fields such as education, science, and business [14]. Innovative approaches that leverage the greatest advantages of Transformers while offsetting their major shortcomings are needed at this stage of development, as they could lead to full maturation of a concept that has been demonstrated to be impressively successful with semantic tasks. There have been numerous attempts to improve the performance and efficiency of BERT with long documents, using a wide variety of approaches. Some of the proposed solutions are based on the sliding window paradigm [15, 16]. The downside of this class of solutions is their inability to track long-range dependencies in the text which weakens their analytic insights. Another group of works aim to simplify the architecture of Transformers and decrease complexity as result [23, 13, 14]. So far, none of these attempts could match the same level of performance that BERT achieves with short text. Reusing previously completed steps is another strategy for adapting Transformers for longer text [1] as a prominent example. Longformer model proposed by [1] may be the most promising solution for the problem of using Transformers with long text, and it combines local and global attention to improve efficiency. The issue remains open, and new suggestions for the best method of long document processing are still being made on a regular basis. In this paper we present two BERT-based language models and fine-tune two others for Arabic long document classification. The first language model consists of four main layers: sentence segmentation layer, BERT layer, a linear classification layer, then the sentence grouping layer with respect to each document, and finally the softmax layer. In this model, we segmented the document into meaningful sentences and then fed these sentences into BERT model along with their document ID. The second model has the same idea of dividing the document into sentences, but instead we hypothesize that a majority of semantically important information is concentrated within specific sentences inside of a longer text, making it unnecessary to check for connections between all words in a document. Instead, we used BERT-based similarity match algorithm that can recognize high-relevance sentences and pass them as input to the BERT-base model that can complete the desired classification task. Both of those models are based on BERT architecture, and require supervised training for best performance. Input text is divided into sentences that don't texceed the maximum length that BERT can accurately process (512 tokens). In addition, we have fine-tuned two well-known language models for long documents classification task, which are the Longformer [1] and Recurrent over BERT [16]1. Before the fin-tuning process, these two models have been modified to be suitable for the Arabic language. We compared the pro posed models against the Longfroemer and RoBERT using two different Arabic datasets. The proposed language models were evaluated against two models, Longframer and RoBERT, using two datasets. The first dataset was collected from the Mawdoo3 website2 and the second dataset was from previous related work [1]. The results showed that the first language model based on sentences aggregating after classifying them is the best among all models on news data with a macro F1-score equal to 98%. While this model achieved a comparative result with the Longformer in the second Mawdoo3 dataset that contains 22 classes. Footnote 2: www.mawdoo3.com ## Related Works Most of the recent works addressing the problem of long document classification start from similar principles common to all deep learning methods. They also diverge in many aspects, as the authors explore different avenues for leveraging the power of the learning algorithms and overcoming the most significant obstacles [1]. Since the authors are essentially attempting to solve the same problem, namely how to maintain high accuracy of semantic predictions while keeping the computing demands reasonable, it would be fair to describe the papers as belonging to the same family despite the considerable differences in approach. In terms of methodological choices, practically all works from this group acknowledge the unmatched power of the attention mechanism for analyzing semantic relationships, and incorporate it in some way into the proposed architecture. There is a division between works that mostly (or completely) embrace an existing architecture and perform only minor operations such as fine-tuning or knowledge transfer in order to reduce the computational demands [1, 2]. On a different end of the spectrum, there are works that propose innovative hybrid solutions in which the attention mechanism and/or Transformer architecture are combined with elements of different deep learning paradigms, such as RNNs and CNNs. In particular, a common strategy is to adopt a hierarchical structure for the overall solution and use the attention mechanism only in a limited role, thus avoiding the exponential growth of complexity [13, 14]. The aforementioned methodological differences stem largely from the expectations for each paper, which range from proving a theoretical point to attempting to develop specialized model for long document classification. Works with a narrower scope tend to stay closer to the original BERT model design [1], while more ambitious efforts that aim to create new tools are more inclined to experiment with previously untested combinations of elements. In some papers, the scope of intended applications is limited to long documents from a certain domain (i.e. medical) [15], while others are approaching the problem in more general terms. Finally, there is an important distinction between works that aim for greater accuracy, and those that primarily attempt to improve computational efficiency and shorten the inference time [17]. It's a fair assessment that practically all works from this group are grappling with the same problem - the tendency of attention-based models to become prohibitively complex as the length of the analyzed text is increased. In response, the authors tried a variety of ideas that rely on vastly different mechanisms to decrease complexity. From fine-tuning and knowledge distillation to introduction of hierarchical architectures and restrictive elements such as fixed-length sliding window [1, 13], the proposed techniques are quite innovative and typically leverage some known properties of deep learning models to affect how the attention mechanism performs in a particular deployment. The diversity of ideas found in those papers illustrates that researchers are currently casting a wide net and searching for unconventional answers to a difficult problem, without a single dominant strategy. On the other hand, hybrid approaches hold a lot of promise and they combine some proven elements from different methodologies into new, potentially more optimal configurations [13, 12]. Evaluation of the proposed changes to established algorithms is crucially important, and all of the reviewed works include some form of empirical confirmation of their premises. While the numbers seemingly validate that the proposed solutions achieve state-of-the-art results under the best possible conditions, those findings are self-reported and may often be too optimistic. All of the papers are interested in document classification tasks and use it to evaluate their solutions, but datasets used for testing may not be the same in terms of size, diversity, and content. When directly comparing different solutions, it's extremely important to keep in mind the particulars of the evaluation protocols. Studies aiming to provide evaluations with independently administered comparative testing of several different BERT-like algorithms for document classification are slowly emerging and reporting some interesting findings that often diverge from self-assessed results [2, 14, 15]. Still, there are no widely accepted evaluation standards and every comparison suffers from 'apples-to-oranges' problem up to an extent. When it comes to practical use of the proposed solutions, there is a general lack of field data and even discussions of use cases are rare. This is understandable considering the main focus is on discovering more efficient methods, but without real world testing it's difficult to predict whether any of the solutions can deliver similar results to their reported findings. Some works may be directed as specific niches such as legal or medical, but even in this case little attention is paid to practicalities associated with real world application. This weakness may reflect the current state of the field, which is highly experimental and mostly built on data collected in a controlled environment. ## Data Experimental parts of our study are conducted using two different datasets, with the choice of the datasets based on the domain of research which is long length Arabic documents. The datasets are vastly different in terms of size and diversity of classes. ### Mawdoo3 Dataset The first dataset was scraped from Mawdoo3 which is the largest Arabic content website3. The number of classes from mawdoo3 is 22 class and each category contains between 700 to 12K articles. We have selected almost one thousand of long articles from each category as presented in Figure 2. Footnote 3: [https://mawdoo3.com/](https://mawdoo3.com/) ### Arabic News Dataset The second dataset was about news articles and we downloaded them from different sources [11, 12, 13]. These data have almost the same 8 categories so we merged them together and the resulted dataset is described in Figure 3. We have selected almost four thousands of long articles from each class. ## Models In this section we introduce two BERT-based language models. Both of those models are based on BERT architecture, and require supervised training for best performance. Input text is divided into sentences that don't exceed the maximum length that BERT can accurately process (512 tokens). We also fine-tune two others for Arabic long document classification. the following sections explain that in details. ### BERT-based Sentence Aggregation We propose a simple but effective model to do a long document classification task. Our proposed model consist of mul Figure 1: Proposed Model Architecture for Long Document Classification tiple layers as shown in figure 1; namely, sentence segmentation layer, BERT layer, a linear classification layer, then the sentence grouping layer with respect to each document, and finally the softmax layer. The first layer is to make a segmentation of sentences from the long text, taking into account the structure of the sentence in Arabic language. So that the sentence does not lose its meaning or break. The second layer is the BERT tokenizer followed by the embedding representation layer. Since we are using BERT base model named Arabert-V2 [12], this layer consists of 12-layer stacked encoders that receive the embedding inputs and process it and send to the an MLP layer. We train the model on all the sentences and each sentence is considered as document. The training outputs are the classification probability for each class as well as the sentence ID and orginal document ID. We make a grouping of text sentences with the probabilities of each category in each sentence, and in the end we aggregate all sentence in the category with the highest probability with respect to the document ID. ### BERT-based Key Sentences Model This model has the same idea of dividing the document into sentences, but instead we hypothesize that a majority of semantically important information is concentrated within specific sentences inside of a longer text, making it Figure 3: Arabic News Dataset we choose almost 4000 articles from each category. Figure 2: Mawdo3 Dataset that contains 22 class.We selected almost 1000 article under each class. unnecessary to check for connections between all words in a document. Instead, we used BERT-based similarity match algorithm that can recognize high-relevance sentences and pass them as input to the BERT model that can complete the desired classification task. The high-relevance sentences were selected by applying a maximal marginal relevance (MMR) Carbonell and Goldstein (1998) similarity algorithm as shown in equation 1. The length of the sentences is between 30 to 150 tokens. Footnote 3: [https://github.com/google-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-language-transform-transform-language-transform-language-](https://github.com/google-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-transform-language-transform-language-language-transform-language-transform-language-language-transform-transform-language-transform-language-) Longformer as well as with the RoBERT on Mawdoo3 dataset. The results were very close between the two proposed solutions and the Longformer, with a very slight superiority to the language model based on extracting key sentences using MMR method with macro F1 score equal to 83%. While Robert performed very poorly on Mawdoo3 dataset with macro F1 score of 21%. The overall results of all models in the long document classification task are explained in Table 2. We can say that this results support our hypothesis of identify the most relevant parts of the text. The resulting solution retains the ability to capture relationships between distant tokens, but doesn't have to actively backpropagate all of them and instead focuses only on key sentences. Because of this, the model avoids geometric progression of complexity and continues to be efficient with much longer texts than the original BERT is able to. It is worth noting that we have pre-processed and removed the information at the beginning of each article in this dataset because that the parts of the document containing easily identifiable indicators of the class. ### Arabic News Dataset The results of the experiment were completely different with the Arabic news dataset. All models performed very well, and in this experiment, the first model outperformed the rest with macro F1 score of 98.4% which revealed that additional modification can have a positive impact on model performance, but it's important which dataset is used. It was discovered that classifying each sentence is better than classifying the whole sequence, which could even increase performance when working with short sentences. However, both Longformer and our second model with MMR are still performing very well with macro F1 score of 96% and 96.2%, respectfully. Whereas RoBERT model has macro F1 score of 74.4%. The overall results of all models in the long document classification task are described in Table 3. ## Conclusion Unmatched flexibility of BERT is one of the main reasons for its rapid acceptance as state-of-the-art language model. With additional algorithm and some modifications and fine-tuning, the model can be adjusted for certain topics or tasks and its accuracy pushed to even higher level. This work explores this possibility in detail, taking long text classification as the target task and searching for the best parameters for this type of usage. In particular, different possibilities for supervised pre-training and fine-tuning were examined on two different datasets. Through detailed experimentation, we were able to identify the most optimal procedures that enable BERT to be more accurate with our particular downstream task. While the value of the proposed training and tuning actions was confirmed only for text classification, it stands to reason that analogous procedures could prove to be useful for other linguistic tasks as well. Finally, we want to denote that we did not explore all hyperparameters which can be a future work to have along with trying another language models such as Roberta and Electra.
2308.05194
Evaluating Pedestrian Trajectory Prediction Methods with Respect to Autonomous Driving
In this paper, we assess the state of the art in pedestrian trajectory prediction within the context of generating single trajectories, a critical aspect aligning with the requirements in autonomous systems. The evaluation is conducted on the widely-used ETH/UCY dataset where the Average Displacement Error (ADE) and the Final Displacement Error (FDE) are reported. Alongside this, we perform an ablation study to investigate the impact of the observed motion history on prediction performance. To evaluate the scalability of each approach when confronted with varying amounts of agents, the inference time of each model is measured. Following a quantitative analysis, the resulting predictions are compared in a qualitative manner, giving insight into the strengths and weaknesses of current approaches. The results demonstrate that although a constant velocity model (CVM) provides a good approximation of the overall dynamics in the majority of cases, additional features need to be incorporated to reflect common pedestrian behavior observed. Therefore, this study presents a data-driven analysis with the intent to guide the future development of pedestrian trajectory prediction algorithms.
Nico Uhlemann, Felix Fent, Markus Lienkamp
2023-08-09T19:21:50Z
http://arxiv.org/abs/2308.05194v3
# Evaluating Pedestrian Trajectory Prediction Methods for the Application in Autonomous Driving ###### Abstract In this paper, the state of the art in the field of pedestrian trajectory prediction is evaluated alongside the constant velocity model (CVM) with respect to its applicability in autonomous vehicles. The evaluation is conducted on the widely-used ETH/UCY dataset where the Average Displacement Error (ADE) and the Final Displacement Error (FDE) are reported. To align with requirements in real-world applications, modifications are made to the input features of the initially proposed models. An ablation study is conducted to examine the influence of the observed motion history on the prediction performance, thereby establishing a better understanding of its impact. Additionally, the inference time of each model is measured to evaluate the scalability of each model when confronted with varying amounts of agents. The results demonstrate that simple models remain competitive when generating single trajectories, and certain features commonly thought of as useful have little impact on the overall performance across different architectures. Based on these findings, recommendations are proposed to guide the future development of trajectory prediction algorithms. Autonomous vehicles, pedestrian trajectory prediction, features, accuracy, runtime ## I Introduction Avoiding accidents involving vulnerable road users (VRU), such as pedestrians, is one of the paramount objectives and ongoing challenges for autonomous vehicles. Consequently, the prediction of pedestrian behavior remains an active area of research, as evidenced by recently published methods [1, 2, 3] and review papers [4, 5]. One of the many reasons for the ongoing research effort are conditions encountered in real-world scenarios, which are characterized by imperfect observations and the stochastic nature of human behavior [6, 5]. Additionally, improving our understanding of pedestrian intentions is crucial to explore the fundamental ways pedestrians interact with their environment. This field of research, known as pedestrian trajectory prediction, focuses on modeling future trajectories and enables autonomous systems to consider their behavior [7]. The trajectories are estimated with a bird's-eye-view of the scene and the motion history of each pedestrian as visualized in Fig. 1. In addition, interactions can be explicitly considered. Most commonly, only the last eight timesteps are provided (equivalent to \(3.2\,\mathrm{s}\)) and based on that, \(12\) timesteps are predicted [8, 9, 10, 11, 10]. Initially modeled through simple rule-based methods [12], various architectures based on neural networks as well as hybrid approaches [2, 3] have emerged in recent years, trying to improve the overall accuracy of the predicted paths on widely-used benchmarks like ETH/UCY [13, 14] and SSD [15]. While this focus has undoubtedly resulted in significant advancements in the precision of the overall predictions, there remain unexplored aspects that are crucial for assessing the applicability of these methods in autonomous systems. In this paper, we want to build upon the work outlined in previous publications [16, 1, 6], evaluating the state of the art to derive the reliability as well as the applicability of these models when confronted with situations encountered on the road. Determining the accuracy, feature requirements, and computational efficiency, the contributions of this work are therefore threefold: * **First**, we evaluate the overall performance of the individual contributions. To replicate the reported results, we employ a Best-of-N approach. Additionally, we report the ADE and FDE metrics when sampling a single trajectory, which is a critical measure for practical applications where usually only one prediction per agent is utilized to plan safe and collision-free paths. * **Second**, we examine the sensitivity of each model to input features by limiting the observed motion history to a maximum of two timesteps. This analysis allows us to gain insights into the broader applicability of each method. * **Third**, we benchmark all contributions on a GPU while measuring runtimes to gauge how well each approach scales with an increasing number of agents. The paper is structured as follows: In chapter II we refer to related work already conducted in this field and evaluation methods employed. Following that, our overall evaluation procedure is introduced in chapter III. Based on the findings outlined in chapter IV, we discuss the results obtained in chapter V and derive potential steps to guide future research and development in chapter VI. ## II Related Work **Pedestrian trajectory prediction** can be categorized into two different approaches. The first category encompasses rule-based methods, with the simplest model being the constant velocity model (CVM) [16]. This model predicts linear trajectories based on an agent's most recent observation and is used as a baseline for trajectory prediction. A more sophisticated approach is the social force model [12], which models human motion by considering attractive and repulsive forces that characterize the agent's interaction with the environment. It has found extensive usage in simulation environments for controlling interactive agents. Expanding on this model, advanced algorithms such as _ORCA_[17] and _BRVO_[18] have been developed and adapted for predicting human motion. These models consider the velocities of surrounding obstacles to estimate collision-free paths. The introduction of deep learning frameworks has led to a shift in focus, and the research community has employed neural networks to better approximate pedestrian trajectories. This second group of approaches is known as knowledge-based methods. In 2016, Social-LSTM [19] was published and has since remained one of the most influential approaches in the field. It utilizes a Recurrent Neural Network (RNN) to consider the past trajectory of observed agents and incorporates them into a pooling module to model interactions. Building upon the same architecture, Social GAN (SGAN) [10] was introduced in 2018, replacing the RNN architecture with a Generative Adversarial Network (GAN) and improving model accuracy by sampling reasonable and diverse trajectories. In 2020, Trajectron++ [9], a Graph Neural Network (GNN), pushed the boundaries further. It leverages graph representations to account for cross-class interactions, such as between a car and a pedestrian. Currently, Y-Net [8] holds the second position in the common ETH/UCY benchmark [19] and, to the best of our knowledge, is the best-performing open-source contribution in this field. Y-Net utilizes an Encoder-Decoder setup, incorporating past trajectories and semantic maps to generate multimodal trajectories. AgentFormer [11] was published shortly afterward, using a Transformer architecture with a specifically designed attention mechanism to model interactions. Since then, limited progress in enhancing the prediction accuracy has prompted researchers to explore other crucial factors of trajectory prediction methods. As a result, Social-Implicit [1] was introduced in 2022, leveraging feed-forward neural networks in place of recurrent ones. This shift yielded enhancements in computational efficiency without compromising on the model's accuracy, thereby contributing to the progression in this field. **The selection of models** for this evaluation was based on their strong performance, either recently or historically, establishing them as state of the art in various aspects. Each model represents a distinct neural network architectural type addressing the same problem, enabling a more insightful comparison. The chosen models include SGAN [10], Trajectron++ [9], AgentFormer [11], Y-Net [8], and Social-Implicit [1]. To establish a deterministic baseline, these models were benchmarked against a CVM. With the exception of the CVM, each architecture incorporates the motion history of multiple agents. SGAN and Trajectron++ sequentially process individual timesteps through their underlying RNN-based architecture, while the remaining ones process them in parallel. All models utilize positional data from the dataset with some also incorporating velocity (AgentFormer, Trajectron++, Social-Implicit) and, in the case of Trajectron++, even acceleration information. Additionally, explicit modeling of interactions with other agents in the scene is common among the investigated approaches, except for CVM and Y-Net. The CVM disregards interactions entirely, while Y-Net indirectly considers interactions by encoding the observed motion history in a heatmap. The mechanisms employed for modeling interactions directly are social pooling (SGAN), including spatial information within a spatial-temporal graph (Trajectron++, Social-Implicit), or the consideration in a specially designed attention layer (AgentFormer). The last feature used for the prediction is spatial information about the environment which is represented through simple obstacle maps or semantic maps containing roads, walkways, and non-walkable areas. Among these models, only Y-Net actively employs an obstacle map with two distinct classes. Trajectron++ and AgentFormer have the capability to include this type of information but have not utilized it in the given benchmark. Nevertheless, this marks them as the only two models to explicitly attempt the simultaneous integration of temporal, semantic, and interaction features. Regarding the output representation of the models, no clear trend is observed. While CVM, SGAN, and AgentFormer output timesteps sequentially, the remaining architectures generate trajectories in one pass over Fig. 1: The three features commonly employed for pedestrian trajectory prediction: the motion history of each agent, spatial information, and interactions. the entire prediction horizon (parallel). The results of this evaluation are summarized in TABLE I wherein the term 'capable' indicates the model's ability to incorporate the stated information. However, such information remained unused in the ETH/UCY dataset based on the original implementation. **To assess the effectiveness** of a given method, various metrics are commonly employed, with the Average Displacement Error and Final Displacement Error being the most prevalent ones [7, 20]. ADE measures the average Euclidean distance between predicted trajectories and observed ground truth trajectories, whereas FDE focuses solely on the last position, representing a measure of the error accumulation over the entire prediction horizon. In this evaluation, the models predict \(12\) future timesteps based on the motion history derived from the last eight positional observations. To account for the multimodal behavior of humans, a Best-of-N (BoN) approach is typically used, sampling \(20\) potential trajectories [10, 9, 8, 11]. The trajectory with the smallest ADE is then selected to determine the overall performance. While this evaluation procedure has yielded significant progress in recent years, it has also faced criticism. Firstly, Scholler et al. [16] addressed the overall complexity of neural networks by demonstrating that a CVM can yield comparable results when the same BoN evaluation is applied to it. Secondly, concerns have been raised regarding the metric itself, as small error reductions during the evaluation on the ETH/UCY benchmark might also be caused by the inherent non-deterministic nature of Neural Networks [1]. To address this, the authors propose a confidence-based metric for evaluating trajectory distributions. Lastly, the applicability of prediction algorithms in real-world scenarios, partially addressed in [16, 6], remains a pressing concern. Consequently, the primary focus of this work is the evaluation of open-source state-of-the-art prediction models while generating a single trajectory for each pedestrian in the given scenario. Conclusions regarding the suitability of the methods for a practical application can be drawn by assessing the accuracy and runtime for different input configurations. ## III Methodology In terms of practical implementation for autonomous driving, the most significant factors of a prediction algorithm encompass the precision of the most likely trajectory, the prerequisite for an available trajectory from an initial observation, and the runtime required for generating said trajectories for the whole scenario. Consequently, the evaluation is performed across the following three criteria: Accuracy (BoN, most likely), feature requirements (limiting the number of temporal observations), and the inference time of each model. Whenever possible, the reported results were cross-checked against the values presented in the original publications or contributions found within the issues of the respective repositories. With the exception of Y-Net, similar or improved results were achieved across the evaluated algorithms. The following three sections outline our evaluation procedure, describe the data utilized for testing and training, and provide details about the explicit implementation of said experiments. ### _Evaluation procedure_ To provide a measure for an overall comparison, the ADE and FDE were determined with the evaluation scripts provided by each individual contribution. Since the majority of the models are non-deterministic and, in some cases, trajectories are randomly sampled rather than selecting the most likely one, each of the five scenes in the ETH/UCY dataset was evaluated five times. Afterwards, the average for each scene was calculated before combining all values to the overall results presented in figures 2 and 3. To sample a varying number of trajectories, the respective variables in each contribution were adjusted to either one or \(20\). Additional details are provided in section III-C. As previously stated, almost all input features employed are present in real-world scenarios. Among the investigated approaches, Y-Net is the only model utilizing a semantic map on the given dataset and heavily relies on it due to its encoding mechanism. For this reason, an evaluation of the spatial information's influence on the model's performance was not conducted. Instead, the focus was shifted to the temporal information since all algorithms incorporate a motion history of eight timesteps for each agent within a scene. To investigate the impact of this feature, the number of available observations was limited and the corresponding metrics were computed. Specifically, the input tensor of each model was modified to contain either only the last timestep or the last two. Here, a timestep refers to an observation at any given time, encompassing both positional information and velocity data. While additional experiments with different numbers of timesteps were conducted, these variations did not yield significant improvements and are therefore not reported in this study. For the most part, the missing entries were represented by a zero value within the respective tensor which applies to the positional data as well as the relative displacements and velocities. All other adjustments differing from this approach are listed in the implementation details below. In order to evaluate the scalability of each approach as the number of observed agents increases, we measured the execution time of each model using the built-in _CUDA_-API for PyTorch [21]. This API offers precise time measurements through an accurate event recording. Specifically, we measured the execution time of a model starting from the point of providing individual scenario inputs until receiving predictions for all agents within the scene. The data preprocessing time was excluded from these measurements. To allow for a fair comparison, all experiments were performed on a _Nvidia Tesla V100_ device with _CUDA_ version 10.1 and Python 3.8.10. To ensure reliable estimates of the average scene prediction time, a batch size of one was chosen, and each model was evaluated five times. The median execution time and the interquartile range (IQR) were calculated across all scenes, providing insight into the range of runtimes for different numbers of agents. To visualize these results, Appendix A contains a detailed overview of the two models that exhibited the largest variations across all employed scene sizes. ### _Dataset and Training_ This study focuses on the assessment of short-term pedestrian trajectory prediction algorithms for time horizons up to \(4.8\,\mathrm{s}\). To facilitate an effective comparison and encompass scenarios of diverse complexity, the widely used ETH/UCY dataset [13, 14] was selected. This dataset includes situations with over \(50\) agents, enabling an extensive evaluation of the chosen models. The data splits employed are the ones initially introduced by [10]. In the conducted Leave-One-Out Cross-Validation, four out of five scenes (Eth, Hotel, Univ, Zara1, and Zara2) were used for training and the remaining one for testing. In cases where the network weights were not provided alongside the code for the selected method, the models were trained according to the procedure outlined in each publication. An exception was made for Y-Net, as specific parameters were not given, and the provided code did not yield the expected results. Thus, slight modifications to the code and a hyperparameter tuning were necessary to achieve optimal performance. The parameters employed for the final evaluation, yielding the best results obtained in this study, can be found in Appendix B. Besides this, no retraining was done between the individual evaluation steps, adjusting only the necessary variables in the code to vary the number of generated trajectories. ### _Implementation details_ In addition to the steps described in the overall evaluation procedure, some more modifications had to be conducted for certain models. These adjustments, alongside further implementation details, are provided in this section. **SGAN**. Two execution modes are provided here, one with the proposed pooling mechanism to consider interactions and one without. During our evaluation as well as indicated in the paper itself [10], the model without the pooling module delivered slightly better results, offering the same performance with less complexity. Since this led to a reduction in runtime, results are reported for this configuration. **Trajectron++**. This model is meant for a variety of applications, providing different evaluation modes to assess the quality of the generated predictions. For this study, the Z-Mode was used over the'most likely'-mode employed in the paper since it delivered almost identical results while decreasing the runtime drastically. **Y-Net**. Within the code, the default sampling of \(20\) potential trajectories employs two modes, _TTST_ and _CWS_. However, since clustering cannot be applied to a single trajectory, the _TTST_ mode was disabled, resulting in a reduced inference time. To assess the impact of the motion history, limiting the number of observations by setting them to zero wasn't applicable because Y-Net uses the positional data to encode the trajectory into the provided map. To address this, the available timestesteps were limited by overwriting missing entries with the earliest observation. For the runtime evaluation, due to mismatching frame numbers in the provided files, the testing data from the original SGAN contribution was utilized for an accurate, scenario-wise analysis. **AgentFormer**. For the sampling of \(20\) trajectories, the model was used as described in the publication, employing the diverse sampling technique _DLow_. Since the same approach results in issues during the evaluation of just one trajectory, the provided pre-trained models with the suffix _pre_ were used instead. For the feature evaluation, the convention from the Kitti dataset [22] was adapted where entries are set to \(-1\) when non-existent. Within the attention matrix, the earliest observation was repeated for the remaining entries, delivering the best results among all approaches tested. **CVM**. The constant velocity model utilizes the same pre-processed trajectories as the other approaches but only uses the final timestep as input. To receive the predictions, a sequential generation mechanism was employed, producing one trajectory at a time. ## IV Results In this section, the results of the conducted evaluation are reported. In the first part, the general performance of each model is compared between the BoN approach and the most likely trajectory sampling. Subsequently, the second and third paragraphs focus on the temporal feature requirements and the execution time for predicting an entire scene. Within this section, the variable \(K\) refers to the number of trajectories sampled whereas \(N\) stands for the number of timesteps provided to the network. ### _Accuracy_ While some methods benchmark the performance against a linear regression model as a baseline [10, 9], fitting a straight line to the observed motion history, a CVM has not been utilized as a reference in the investigated contributions before. However, due to its proven performance in previous studies [16], the CVM was selected as a reference for this evaluation. The average values of the ADE and FDE for one and \(20\) generated future trajectories are presented in Fig. 2, with the exact values for each scene being contained in Appendix B. The input configuration for each model is described in TABLE I, where the motion history consists of eight and the prediction horizon of \(12\) timesteps. In Fig. 2, blue shades represent the ADE metric and orange ones highlight the FDE scores. Lighter colors were chosen for the evaluation of a single trajectory whereas darker ones visualize the results obtained for the BoN approach, evaluating the best of \(K=20\) sampled trajectories against the ground truth. The horizontal, dashed lines indicate the results of the CVM, following the same color scheme. When sampling \(20\) distinct trajectories, the investigated models demonstrate the anticipated strong performance on the ETH/UCY dataset, outperforming the simple CVM baseline with an ADE of \(0.52\,\mathrm{m}\) and an FDE of \(1.141\,\mathrm{m}\). However, we were unable to replicate the reported results of Y-Net despite extensive parameter tuning, and the values for Trajectron++ are slightly worse due to previously reported issues with the velocity and acceleration derivatives. Taking these factors into account, AgentFormer achieves the best performance in our BoN evaluation with \(0.233\,\mathrm{m}\) and \(0.393\,\mathrm{m}\), respectively. With an increase in the average displacement of \(3.8\,\mathrm{cm}\) and \(10.1\,\mathrm{cm}\) for the FDE, Trajectron++ scores second. This changes when sampling only one trajectory and selecting the most probable one whenever possible. In this configuration, while all Final Displacement Errors now exceed one meter, the SGAN architecture is among the least affected and delivers competitive results. Conversely, AgentFormer exhibits a significant decline in performance, ranking second last in the provided overview. Notably, the top two models in terms of accuracy are approaches employing graph neural networks. Trajectron++ yields the most precise predictions with \(0.555\,\mathrm{m}\) in average and \(1.162\,\mathrm{m}\) in final displacement. Interestingly, none of the featured models surpass the performance of the CVM which only considers information from the last timestep. As a result, the influence of individual timesteps within the motion history on prediction accuracy remains uncertain. Therefore, alongside the examination of model applicability, the second part of this evaluation will specifically investigate the feature requirements of each model. ### _Feature requirements_ Previous studies suggest that the motion history of an agent may not be as crucial as commonly assumed [16]. Therefore, in order to gain insights into the considerations of neural networks, we examined the influence of one, two, and eight observed timesteps provided to each investigated model. Details on the necessary modifications for this analysis can be found in section III, and the summarized results are presented in Fig. 3. Similar to the previous evaluation, the figure highlights ADE values in blue whereas FDE values are represented with shades of orange. The brighter the tone, the fewer observations were provided to the network. Again, the respective metrics corresponding to the performance of the CVM are indicated through dashed, horizontal lines. When focusing on the overall trend between the metrics, it can be seen that first, both metrics express a similar behavior across all investigated methods, and second, the lowest errors are achieved when all observations are considered. Reducing the amount of information available to one timestep, not all models become worse to the same degree. Whereas AgentFormer and Y-Net exhibit the worst performance in our evaluation with Average Displacement Errors up to \(1.941\,\mathrm{m}\) and \(3.757\,\mathrm{m}\) respectively, the remaining three models show almost no difference when provided with only one or two timesteps. While the ADE for Social-Implicit improves from \(1.11\,\mathrm{m}\) to \(0.745\,\mathrm{m}\) when adding an additional observation, the change in accuracy for Trajectron++ and SGAN is less than \(1\,\mathrm{cm}\). These displacements are almost identical to the ones obtained when considering the whole motion history with gains being as little as \(0.4\,\mathrm{cm}\) in the case of Trajectron++. However, when an additional timestep is provided to both Y-Net and AgentFormer, a substantial decrease in both Average and Final Displacement Errors can be noted, with differences as high as \(2.261\,\mathrm{m}\) and \(3.635\,\mathrm{m}\), respectively. Further improvements in accuracy can be observed when considering the full motion history, although the increase is significantly smaller in comparison. ### _Runtime_ The runtime analysis shows a significant difference in the inference time of the compared models as visualized in Fig. 4. According to this, the CVM has the lowest runtime with a median inference time of \(0.15\,\mathrm{ms}\), while the highest median inference time was measured for Trajectron++ with \(131.34\,\mathrm{ms}\). However, not just the central tendency but also the runtime variability is an important measure for real-time applications. As a result, the CVM has the smallest interquartile range with a deviation of \(0.01\,\mathrm{ms}\) between the 90th and 10th percentiles. In contrast, Trajectron++ has the highest variability with an IQR of \(197.54\,\mathrm{ms}\). For the remaining models, the median runtime ranges between \(1.69\,\mathrm{ms}\), \(4.02\,\mathrm{ms}\) and \(49.01\,\mathrm{ms}\) for the Social-Implicit, SGAN and AgentFormer models, respectively. Furthermore, it can be observed that both Social-Implicit and SGAN have a low variability with an IQR of \(0.11\,\mathrm{ms}\) and \(0.22\,\mathrm{ms}\), and therefore indicating a weak runtime dependency on the number of agents in the scene. In contrast, Y-Net has a median runtime of \(82.39\,\mathrm{ms}\) and shows a large variability with an IQR of \(96.87\,\mathrm{ms}\), similar to Trajectron++. In comparison, Fig. 3: ADE (blue) and FDE (orange) across various models for \(N=1,2,8\) observations Fig. 2: ADE (blue) and FDE (orange) across various models using one and \(20\) sampled trajectories AgentFormer has an IQR of \(4.92\,\mathrm{ms}\). An evaluation of the relationship between the runtime and the number of agents in a scene is provided in Appendix A. The results also indicate no direct correlation between the model runtime and the achieved ADE. Even if the CVM has both the lowest runtime and the smallest ADE, the second most accurate network (Trajectron++) has the highest median runtime overall. Moreover, both Social-Implicit as well as SGAN achieve comparable accuracies, but their inference time deviates by more than an order of magnitude from the runtime of the CVM. While this observation holds true for AgentFormer, the Y-Net model has the highest ADE, but only the third-highest median runtime. ## V Discussion ### _Main findings_ The results of this study have revealed the similarities and differences of the investigated architectures when analyzed for practical applicability. In terms of accuracy, AgentFormer and Trajectron++ perform best in the Best-of-N evaluation, indicating their capability to generate plausible and multi-modal trajectories across diverse scenarios. However, when sampling only the most likely trajectory, Trajectron++ and Social Implicit demonstrate the smallest errors. Notably, both of these methods employ graph-based networks, highlighting the potential advantages of utilizing this architecture for modeling the task at hand. This could be attributed to the consideration of interactions through a graph representation, as the social pooling in SGAN and the attention mechanism in AgentFormer had minimal to no impact on overall accuracy. Additionally, the analysis of feature requirements reveals that the most relevant information for these two models appears to be present within the last one to two timesteps. Therefore, it seems to be disproportionate to incorporate the entire motion history in the design of future models. This finding aligns with the results obtained from the CVM, which also only requires one timestep for prediction, neglecting spatial features and interactions. Since Y-Net and AgentFormer are the only models not explicitly considering relative displacements or velocities, their differing behavior can be attributed to the missing orientational information when provided with a single timestep. Hence, to make reasonable predictions, at least two timesteps should be provided, resulting in performance more similar to SGAN, Social Implicit, and Trajectron++ in the single-timestep case. For this reason, architectures that incorporate object velocities alongside positional information can be seen as beneficial for practical applications, offering better accuracy right from the initial detection. Furthermore, while the sequential generation of output trajectories provides flexibility when adapting to changes in the prediction horizon, a parallel representation might be more suitable as it requires only a single inference pass. In terms of execution time, we have analyzed the average inference of the models and evaluated their scalability with the number of pedestrians observed in a scenario. While Trajectron++ was among the best-performing models in regards to accuracy in general, when considering the trade-off between accuracy and runtime visualized in Fig. 4, the CVM emerges as the most suitable method based on the dataset used. It is followed by Social-Implicit on the second rank, which exhibits slightly lower accuracy and inference times. This is in contrast to the development outlined in chapter II where Y-Net, Trajectron++, and AgentFormer were considered state-of-the-art models based on the Best-of-N evaluation approach. Nevertheless, these results suggest that simplistic models, such as the CVM, continue to remain competitive compared to more sophisticated, learning-based approaches and indicate the need for further research to effectively model the predominantly linear movement patterns exhibited by pedestrians. ### _Limitations and criticism_ While the CVM demonstrates superior performance in predicting linear trajectories compared to the investigated methods, neural networks are more capable when generating nonlinear ones [1]. Related to that, the question can be raised whether pedestrians predominantly move linearly or if the dataset used primarily consists of linear trajectories, thus favoring an evaluation with the CVM. Therefore, it remains uncertain whether the findings presented in this study can be generalized to other datasets such as SDD [23] or nuScenes [24]. As the ETH/UCY dataset exclusively comprises pedestrian data without any inclusion of cars, this study presents a general evaluation of the ability of certain methods to model pedestrian trajectories. Nevertheless, to apply trajectory prediction algorithms to real-world applications, the consideration of other road users as well as the road infrastructure is vital. In addition to that, it needs to be highlighted that with the employed Leave-One-Out Cross-Validation, five models were individually trained and tested, whereas in practical applications for vehicles, only one model can be utilized. Furthermore, the transferability of prediction methods across different scenes and datasets remains a challenge and an active area of research [25]. Fig. 4: Runtime of the pedestrian trajectory prediction models represented by their median inference time over all scenarios and their variability as interquartile range (IQR) between the 90th and 10th percentiles over the Average Displacement Error for \(K=1\) and \(N=8\). Referring to the models themselves, it is worth mentioning that Trajectron++ also supports an online prediction mode, which as of right now is only available for nuScenes and therefore was not utilized in this study. Besides that, the performance of AgentFormer decreased substantially when evaluated on a single trajectory which can be attributed to its operation principle. Since the purpose of this architecture is to generate diverse solutions, a mode to receive the most likely solution is missing. For this case, retraining would be necessary as has been pointed out by the authors in the corresponding repository [11]. Lastly, it should be noted that the current implementation of the CVM predicts each trajectory individually and sequentially using a for-loop. Further runtime improvements can therefore be expected through parallelization. ### _Future research_ The question that remains is how accurate a prediction needs to be in order to be applicable in autonomous vehicles. We propose that, based on the results obtained from the CVM, the focus should be shifted towards developing accurate models that are computationally efficient and require minimal input. For future investigations, when the complete motion history is considered, incorporating dropout within the input data during training could enhance the robustness and improve the overall results. Conversely, models can be specifically designed and trained to utilize a maximum of two timesteps, adapting the loss function to enhance the precision of the most likely trajectory. Considering the success of transformer architectures in natural language processing, it is still worth exploring their application to trajectory prediction with appropriate modifications to the input features and training techniques. Lastly, although expanding the evaluation to include additional datasets would be valuable, the challenge of transferring trained models across different scenarios remains a crucial research topic for practical applications. ## VI Conclusion In this study, we have conducted a comprehensive analysis of state-of-the-art pedestrian trajectory prediction methods to assess their usability in real-world applications. Our evaluation focused on measuring the ADE and FDE metrics based on single trajectories, investigating the impact of individual timesteps within the motion history, and measuring the overall runtime for different scenario sizes. Our findings demonstrate that Trajectron++ and Social-Implicit, which leverage graph-based interaction modeling, yield the most accurate results among the investigated architectures. Furthermore, we discovered that many models underutilize the available motion history, with the first two timesteps containing the most relevant information. When considering the trade-off between accuracy and runtime, Social-Implicit demonstrates the best performance, ranking second only to the CVM. In all evaluated aspects, none of the models manage to surpass the CVM on the ETH/UCY dataset, highlighting that simple prediction methods remain competitive compared to learning-based approaches. Consequently, improvements can still be achieved on the given benchmark, with future research directions being the inclusion of spatial information, modifications in the training process or the utilization of additional datasets. ## Acknowledgment As the first author, Nico Uhlemann initiated the idea of this paper and contributed essentially to its conception, implementation, and content. Felix Fent contributed to the conception of this research, the analysis of the generated data, and the revision of the research article. Markus Lienkamp made an essential contribution to the conception of the research project and revised the paper critically for important intellectual content. He gave final approval of the version to be published and agreed with all aspects of the work. As a guarantor, he accepts responsibility for the overall integrity of the paper. The authors would like to thank their project partner Enway GmbH, as well as the Munich Institute of Robotics and Machine Intelligence (MIRMI) for their support. The research was funded by the Central Innovation Program (ZIM) under grant No. KK5213703GR1. ## Appendix A Fig. 5: Runtime over the number of agents in a scene for the a) Y-Net and b) Trajectron++ model.
2306.02357
Deeply Virtual Compton Scattering at Future Electron-Ion Colliders
The study of hadronic structure has been carried out for many years. Generalized parton distribution functions (GPDs) give broad information on the internal structure of hadrons. Combining GPDs and high-energy scattering experiments, we expect yielding three-dimensional physical quantities from experiments. Deeply Virtual Compton Scattering (DVCS) process is a powerful tool to study GPDs. It is one of the important experiments of Electron Ion Collider (EIC) and Electron ion collider at China (EicC) in the future. In the initial stage, the proposed EicC will have $3 \sim 5$ GeV polarized electrons on $12 \sim 25$ GeV polarized protons, with luminosity up to $1 \sim 2 \times 10^{33}$cm$^{-2}$s$^{-1}$. EIC will be constructed in coming years, which will cover the variable c.m. energies from 30 to 50 GeV, with the luminosity about $10^{33} \sim 10^{34}$cm$^{-2}$s$^{-1}$. In this work we present a detailed simulation of DVCS to study the feasibility of experiments at EicC and EIC. Referring the method used by HERMES Collaboration, and comparing the model calculations with pseudo data of asymmetries attributed to the DVCS, we obtained a model-dependent constraint on the total angular momentum of up and down quarks in the proton.
Gang Xie, Wei Kou, Qiang Fu, Zhenyu Ye, Xurong Chen
2023-06-04T13:27:49Z
http://arxiv.org/abs/2306.02357v1
# Deeply Virtual Compton Scattering at Future Electron-Ion Colliders ###### Abstract The study of hadronic structure has been carried out for many years. Generalized parton distribution functions (GPDs) give broad information on the internal structure of hadrons. Combining GPDs and high-energy scattering experiments, we expect yielding three-dimensional physical quantities from experiments. Deeply Virtual Compton Scattering (DVCS) process is a powerful tool to study GPDs. It is one of the important experiments of Electron Ion Collider (EIC) and Electron ion collider at China (EicC) in the future. In the initial stage, the proposed EicC will have \(3\sim 5\) GeV polarized electrons on \(12\sim 25\) GeV polarized protons, with luminosity up to \(1\sim 2\times 10^{33}\)cm\({}^{-2}\)s\({}^{-1}\). EIC will be constructed in coming years, which will cover the variable c.m. energies from 30 to 50 GeV, with the luminosity about \(10^{33}\sim 10^{34}\)cm\({}^{-2}\)s\({}^{-1}\). In this work we present a detailed simulation of DVCS to study the feasibility of experiments at EicC and EIC. Referring the method used by HERMES Collaboration, and comparing the model calculations with pseudo data of asymmetries attributed to the DVCS, we obtained a model-dependent constraint on the total angular momentum of up and down quarks in the proton. ## I Introduction In high energy nuclear physics, the internal structure and dynamics of the proton is still not fully understood. Although decades have passed since the discovery that the proton internal structure consisted of quarks [1; 2; 3; 4] and gluons (partons) [5; 6; 7; 8], we still know a little about how the partons contribute to the global properties of the proton such as its mass and spin. The measurement of the fraction of the proton spin carried by quarks by the European Muon Collaboration (EMC) in 1987 indicated that only small percentages of the proton's spin comes from quarks [9]. The data of nucleon's polarized structure function \(g_{1}\left(x_{B}\right)\) in EMC has deviated significantly from the Ellis-Jaffe sum rule [10]. These results created the so-called "spin crisis", or more appropriately, the "spin puzzle". The discrepancy has since inspired many intensive experimental and theoretical studies of spin dependent nucleon structure [11; 12; 13; 14; 15; 16; 17]. It was proposed that the missing fraction of the proton spin comes from the polarized gluon contribution. Recent measurements of the polarized gluon density showed that gluons indeed contribute, but could not fill the gap in the spin puzzle [16]. The orbital angular momenta of the quarks and gluons play an important role in the proton spin. According to the generator of Lorentz transformation we can define the angular momentum operator in QCD [18], \[J^{i}=\frac{1}{2}\epsilon^{ijk}\int d^{3}xM^{0jk}, \tag{1}\] where \(M^{0jk}\) is the angular momentum density, which can be expressed by the energy-momentum tensor \(T^{\mu\nu}\) through \[M^{\alpha\mu\nu}=T^{\alpha\nu}x^{\mu}-T^{\alpha\mu}x^{\nu}. \tag{2}\] \(T^{\mu\nu}\) has the Belinfante-Improved form and is symmetric, gauge-invariant, and conserved. It can be divided into gauge-invariant quark and gluon contributions, \[T^{\mu\nu}=T^{\mu\nu}_{q}+T^{\mu\nu}_{g}, \tag{3}\] and \(\vec{J}\) has a gauge-invariant form, \(\vec{J}_{\rm QCD}=\vec{J}_{q}+\vec{J}_{g}\), where \[J^{i}_{q,g}=\frac{1}{2}\epsilon^{ijk}\int d^{3}x\left(T^{0k}_{q,g}x^{j}-T^{0j} _{q,g}x^{k}\right). \tag{4}\] In pure gauge theory, \(\vec{J}_{g}\) is a conserved angular momentum charge by itself, generating spin quantum numbers for glueballs. We can see that \(\vec{J}_{q}\) and \(\vec{J}_{g}\) are interaction-dependent. To study the orbital angular momentum of the partons, one needs to study beyond one-dimentional parton distributions. One-dimensional parton distribution functions (PDFs) provide significant informations about the structure of the proton. Although the PDFs have provided us with much knowledge on the proton, one-dimensional distributions can not give us a complete picture. Therefore, theorists developed a new density function about 30 years ago, which called GPDs. GPDs provide information including both transverse spacial and longitudinal momentum distributions. Besides the momentum fraction, GPDs depend on another independent variable, the negative value of momentum transfer square \(t=-\left(p-p^{\prime}\right)^{2}\) between the initial and final states of a proton. Thus, the GPDs give extensive informations about three-dimensional dynamics of nucleon, which includes the composition of spin and pressure distribution [19; 20; 21; 22; 23; 24]. Similar to the one dimensional PDFs, GPDs include non-polarized and polarized functions. GPDs, also named as the off-forward PDFs, have attracted a lot of attention since spin decomposition rule was first proposed [18]. It was proposed to factorize the hard exclusive processes. The corresponding factorization structure functions including the structure of nucleon are the GPDs \(H^{q}\left(x_{B},\xi,t\right)\), \(E^{q}\left(x_{B},\xi,t\right)\), \(\widetilde{H}^{q}\left(x_{B},\xi,t\right)\) and \(\widetilde{E}^{q}\left(x_{B},\xi,t\right)\). These functions correspond to the Fourier transform of the non-diagonal operators [18; 20; 22; 25]: \[\begin{array}{l}\frac{P^{+}}{2\pi}\int dy^{-}e^{jx_{B}P^{+}y^{-}}\left<p^{ \prime}\left|\bar{\Psi}_{q}(0)\gamma^{+}\Psi_{q}(y)\right|p\right>_{y^{+}= \widetilde{y}_{\perp}=0}\\ =H^{q}\left(x_{B},\xi,t\right)\bar{N}\left(p^{\prime}\right)\gamma^{+}N(p)\\ +E^{q}\left(x_{B},\xi,t\right)\bar{N}\left(p^{\prime}\right)i\sigma^{+}\gamma _{\frac{A}{2M_{N}}}N(p),\\ \frac{P^{+}}{2\pi}\int dy^{-}e^{x_{B}P^{+}y^{-}}\left<p^{\prime}\left|\bar{ \Psi}_{q}(0)\gamma^{+}\gamma^{5}\Psi_{q}(y)\right|p\right>_{y^{+}=\widetilde {y}_{\perp}=0}\\ =\widetilde{H}^{q}\left(x_{B},\xi,t\right)\bar{N}\left(p^{\prime}\right)\gamma ^{+}\gamma_{5}N(p)\\ +\widetilde{E}^{q}\left(x_{B},\xi,t\right)\bar{N}\left(p^{\prime}\right)\gamma _{5}\frac{\Delta^{+}}{2M_{N}}N(p),\end{array} \tag{5}\] where \(y\) is the coordinate of the two correlated quarks, the \(P\) is the average nucleon four-momentum in light-front frame: \(P=\left(p+p^{\prime}\right)/2\) and \(\Delta=p^{\prime}-p\). The "\(+\)" superscript means the plus component of four-momentum in light-front frame. Each GPD function defined above is for a specified flavor of quark: \(H^{q},E^{q},\widetilde{H}^{q},\widetilde{E}^{q}(q=u,d,s,\ldots)\). \(H^{q}\) and \(\widetilde{H}^{q}\) are spin non-flipped GPD functions and \(E^{q}\) and \(\widetilde{E}^{q}\) are spin flipped ones. The ordinary parton distributions and nucleon form factors are both included in the off-forward parton distributions. In \(t\to 0\) and \(\xi\to 0\) limit, we get \[\begin{array}{l}H(x_{B},0,0)=f_{1}(x_{B}),\\ \widetilde{H}\left(x_{B},0,0\right)=g_{1}(x_{B}),\end{array} \tag{6}\] where \(f_{1}(x_{B})\) is quark distribution and \(g_{1}(x_{B})\) is quark helicity distribution. According to Dirac and Pauli form factors \(F_{1}\), \(F_{2}\) and axial-vector and pseudo-scalar form factor \(G_{A}\), \(G_{P}\), the sum rules are obtained, \[\begin{array}{l}\int dx_{B}H\left(x_{B},\xi,t\right)=F_{1}\left(t\right),\\ \int dx_{B}E\left(x_{B},\xi,t\right)=F_{2}\left(t\right),\\ \int dx_{B}\widetilde{H}\left(x_{B},\xi,t\right)=G_{A}\left(t\right),\\ \int dx_{B}\widetilde{E}\left(x_{B},\xi,t\right)=G_{P}\left(t\right).\end{array} \tag{7}\] The most interesting Ji's sum rules related to the nucleon spins are described through GPDs [22], \[\int_{-1}^{1}dx_{B}x_{B}\left[H\left(x_{B},\xi,t\right)+E\left(x_{B},\xi,t \right)\right]=A(t)+B(t). \tag{8}\] Then the total spin of the proton can be expressed as: \[\begin{array}{l}J_{q,g}=\frac{1}{2}\left[A_{\frac{1}{2},g}(0)+B_{q,g}(0) \right],\\ J_{q}+J_{g}=\frac{1}{2},\end{array} \tag{9}\] where \(A_{q,g}(0)\) gives the momentum fractions carried by quarks and gluons in the nucleon (\(A_{q}(0)+A_{g}(0)=1\)), and B-form factor is analogous to the Pauli form factor for the vector current. By extrapolating the sum rule to \(t=0\), one gets \(J_{q,g}\). The GPDs can be measured in deep-exclusive processes such as DVCS and deeply virtual meson production (DVMP) [18; 22; 26; 27; 28; 29; 30]. Both of these processes are exclusive hard scattering processes in lepton-nucleon collisions. Theoretical research on these topics has been conducted for many years, and many theoretical models and predictions were created by researchers [18; 21; 22; 31; 32; 33; 34; 35; 36; 37; 38; 39]. During the past 20 years, the collaborations at HERA and Jefferson Lab (JLab) have spent a lot effort to get information of GPDs from electro-production of a real photon (DVCS processes) [31; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55], such as DESY with H1 [40; 43], ZEUS [41] and HERMES [45; 46], JLab Halls A [50; 52; 53; 54; 55] and Halls B [56; 57; 58; 51; 56; 59], and COMPASS [60; 61]. These experiments have important contributions to our exploration of the internal structure of the proton. Although there are many data from above experiments, the data don't have high precision and wide range of kinematic region. Accurate measurement of the DVCS process is a huge challenge, which requires high luminosity to compensate for very small cross section and good detector design to ensure the exclusive measurement of the final states. Both EicC and EIC are important experiments in the future that will have very high luminosity and excellent detectors for particle detection. In this work, we discuss the relation of GPDs and DVCS observables [22], and carry out a Monte-Carlo simulation of DVCS + Bethe-Heithler (BH) events and do a projection to get the statistical errors of asymmetry observables of DVCS experiments for the future EicC and EIC. Since the contribution of GPDs to amplitude is not independent, the acquisition of GPDs from the exclusive reactions is indirect. We need to use the appropriate GPDs model. After years of development, there are many theoretical models of GPD, and two of those are based on double distributions (DDs) [20; 62; 63], one has been given by Vanderhaeghen, Guichon and Guidal, which called VGG model [26; 27; 64; 65], another was presented by Goloskokov and Kroll called GK model [28; 66; 67]. By accessing the available experimental data, the researchers examined different GPD models, and show that the data from different experiments can match well with the VGG model calculation [25; 48; 54; 57; 58]. Based on these results, we perform theoretical calculations with VGG model. In VGG model, the observable \(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}\) is more sensitive to the quark total angular momentum in the nucleon than other parameters [31; 68; 69]. Thus we make a constraint on \(J_{u}\) and \(J_{d}\) by the pseudo data of Transverse Target-Spin Asymmetry (TTSA) \(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}\). The organization of the paper is as follows. The relationship between GPDs and DVCS is illustrated in Sec. II. The phenomenological parametrization of GPDs is described in Sec. III. The invariant kinematic and final state kinematic distributions of the simulation are shown in Sec. IV. The projections of DVCS experiment are shown in Sec. V. Finally, some discussions and a concise summary is given in Sec. VI. Generalized partons distribution and deeply virtual Compton scattering Deeply virtual Compton scattering on a necleon shown in Fig. 1 left panel is the simplest process to access GPDs, it's an important role in exploring the internal structure of necleon. In addition to the DVCS, there also exists another process shares the same final state with DVCS process, see Fig. 1 middle and right panels, called the BH process. The five-fold differential cross section for electroproduction of real photon \(ep\to e^{\prime}p^{\prime}\gamma\) is defined as [32]: \[\frac{d\sigma}{dx_{\mathrm{B}}dyd\,|\Delta^{2}|\,d\phi d\varphi}=\frac{\alpha ^{3}x_{\mathrm{B}}y}{16\pi^{2}\mathcal{Q}^{2}\sqrt{1+\epsilon^{2}}}\left| \frac{\mathcal{T}}{e^{3}}\right|^{2}. \tag{10}\] This cross section depends on the common Bjorken scaling variable \(x_{B}\), the squared momentum transfer \(\Delta=\left(P_{2}-P_{1}\right)^{2}\), the lepton energy fraction \(y=P_{1}\cdot q_{1}/P_{1}\cdot k\), with \(q_{1}=k-k^{\prime}\). The azimuthal angle between the lepton plane and the recoiled proton momentum is \(\phi\). There, \(\varphi\) is the angle between the polarization vector and the scattered hadron shown in Fig. 2, and \(\epsilon=2x_{\mathrm{B}}M/\mathcal{Q}\) that incorporates nonvanishing target mass effects [32; 70]. The reaction amplitude \(\mathcal{T}\) is the linear superposition sum of the BH and DVCS amplitudes, \[\mathcal{T}^{2}=|\mathcal{T}_{BH}|^{2}+|\mathcal{T}_{DVCS}|^{2}+\mathcal{T}_{I}, \tag{11}\] where \(\mathcal{T}_{I}=\mathcal{T}_{DVCS}T_{BH}^{*}+\mathcal{T}_{DVCS}^{*}\mathcal{T }_{BH}\). The squared BH term \(|\mathcal{T}_{BH}|^{2}\), squared DVCS amplitude \(\left|\mathcal{T}_{DVCS}\right|^{2}\), and interference term \(\mathcal{T}_{I}\) are given by: \[\begin{split}|\mathcal{T}_{\mathrm{BH}}|^{2}=&\frac{ e^{6}}{x_{\mathrm{B}}^{2}y^{2}\left(1+\epsilon^{2}\right)^{2}\Delta^{2} \mathcal{P}_{1}(\phi)\mathcal{P}_{2}(\phi)}\\ &\left\{c_{0}^{\mathrm{BH}}+\sum_{n=1}^{2}c_{n}^{\mathrm{BH}} \cos(n\phi)+s_{1}^{\mathrm{BH}}\sin(\phi)\right\},\end{split} \tag{12}\] \[\begin{split}|\mathcal{T}_{\mathrm{DVCS}}|^{2}=\frac{e^{6}}{y^{2 }\mathcal{Q}^{2}}\\ \left\{c_{0}^{\mathrm{DVCS}}+\sum_{n=1}^{2}\left[c_{n}^{\mathrm{ DVCS}}\cos(n\phi)+s_{n}^{\mathrm{DVCS}}\sin(n\phi)\right]\right\},\end{split} \tag{13}\] \[\begin{split}\mathcal{T}_{I}=&\frac{\pm e^{6}}{x_{ \mathrm{B}}y^{3}\Delta^{2}\mathcal{P}_{1}(\phi)\mathcal{P}_{2}(\phi)}\\ &\left\{c_{0}^{\mathcal{I}}+\sum_{n=1}^{3}\left[c_{n}^{\mathcal{I }}\cos(n\phi)+s_{n}^{\mathcal{I}}\sin(n\phi)\right]\right\}.\end{split} \tag{14}\] The results for the Fourier coefficients can be found in [32; 70]. The variables \(\xi\) and \(t\) (or \(\Delta^{2}\)) can be computed from the kinematic variables. Since we cannot directly obtain \(x_{B}\) from experiment, the Compton form factors (CFF) are obtained by integrating the GPDs, \[\begin{split}&\int_{-1}^{1}\frac{F_{q}(x_{B},\xi,t)}{x_{B}-\xi+i \epsilon}dx_{B}\\ &=\mathcal{P}\int_{-1}^{1}\frac{F_{q}(x_{B},\xi,t)}{x_{B}-\xi}dx _{B}-i\pi F_{q}(\xi,\xi,t),\end{split} \tag{15}\] where \(F_{q}\) are \(H^{q}\), \(\widetilde{H}^{q}\), \(E^{q}\), or \(\widetilde{E}^{q}\). These real and imaginary part of Eq. 15, which can be expressed in eight GPD-related quantities that can be extracted from Figure 1: The Feynman diagram of DVCS (left) and BH (right) processes. e, e\({}^{\prime}\) and p, p\({}^{\prime}\) are the initial and final states electron and proton respectively. And \(t\) is the four-momentum square transition between the initial and final state proton. Figure 2: The reference frame of scattering plane and kinematic variables of \(ep\to e^{\prime}p^{\prime}\gamma\) reaction in the laboratory [25]. DVCS observables [25]: \[H_{Re}(\xi,t) \equiv\mathcal{P}\int_{0}^{1}dx_{B}\left[H\left(x_{B},\xi,t\right)-H \left(-x_{B},\xi,t\right)\right]C^{+}, \tag{16}\] \[H_{Im}(\xi,t) \equiv H(\xi,\xi,t)-H(-\xi,\xi,t),\] \[E_{Re}(\xi,t) \equiv\mathcal{P}\int_{0}^{1}dx_{B}\left[E\left(x_{B},\xi,t\right)- E\left(-x_{B},\xi,t\right)\right]C^{+},\] \[E_{Im}(\xi,t) \equiv E(\xi,\xi,t)-E(-\xi,\xi,t),\] \[\widetilde{H}_{Re}(\xi,t) \equiv\mathcal{P}\int_{0}^{1}dx_{B}\left[\widetilde{H}\left(x_{B},\xi,t\right)-\widetilde{H}\left(-x_{B},\xi,t\right)\right]C^{-},\] \[\widetilde{H}_{Im}(\xi,t) \equiv\widetilde{H}(\xi,\xi,t)-\widetilde{H}(-\xi,\xi,t),\] \[\widetilde{E}_{Re}(\xi,t) \equiv\mathcal{P}\int_{0}^{1}dx_{B}\left[\widetilde{E}\left(x_{B},\xi,t\right)-\widetilde{E}\left(-x_{B},\xi,t\right)\right]C^{-},\] \[\widetilde{E}_{Im}(\xi,t) \equiv\widetilde{E}(\xi,\xi,t)-\widetilde{E}(-\xi,\xi,t).\] The case with subscript "\(Re\)" is accessed by observables sensitive to the real part of the DVCS amplitude, while the case with subscript "\(Im\)" is accessed by observables sensitive to its imaginary part, where the coefficient \(C^{\pm}\) defined as: \[C^{\pm}=\frac{1}{x_{B}-\xi}\pm\frac{1}{x_{B}+\xi}. \tag{17}\] As a result, the Compton form factors with four complex functions are written as: \[\mathcal{H}(\xi,t) \equiv H_{Re}(\xi,t)-i\pi H_{Im}(\xi,t), \tag{18}\] \[\tilde{\mathcal{H}}(\xi,t) \equiv\widetilde{H}_{Re}(\xi,t)-i\pi\widetilde{H}_{Im}(\xi,t),\] \[\mathcal{E}(\xi,t) \equiv E_{Re}(\xi,t)-i\pi E_{Im}(\xi,t),\] \[\tilde{\mathcal{E}}(\xi,t) \equiv\widetilde{E}_{Re}(\xi,t)-i\pi\widetilde{E}_{Im}(\xi,t).\] For the measurement of CFFs, it is mandatory to consider the interference term from BH events. The production of BH events is a pure QED process, which can be measued precisely from the form factor \(F_{1}\) and \(F_{2}\). In addition to the absolute cross section, another way to obtain the CFF is by measuring the asymmetries. The beam charge asymmetries are defined as: \[A_{C}=\frac{\sigma^{+}(\phi)-\sigma^{-}(\phi)}{\sigma^{+}(\phi)+\sigma^{-}( \phi)}, \tag{19}\] where \(\sigma^{+}\) and \(\sigma^{-}\) refer to cross sections with lepton beams of opposite charge. We can see that the asymmetries only depends on \(\phi\). The observables of interest in this paper are the correlated charge and transversely polarized target-spin asymmetries, defined as: \[A_{UT,DVCS}=\frac{\left(\sigma_{+}^{+}(\phi)-\sigma_{-}^{+}(\phi )\right)+\left(\sigma_{-}^{-}(\phi)-\sigma_{-}^{-}(\phi)\right)}{\sigma_{+}^{ +}(\phi)+\sigma_{-}^{+}(\phi)+\sigma_{-}^{-}(\phi)}, \tag{20}\] \[A_{UT,I}=\frac{\left(\sigma_{+}^{+}(\phi)-\sigma_{+}^{+}(\phi) \right)-\left(\sigma_{-}^{-}(\phi)-\sigma_{-}^{-}(\phi)\right)}{\sigma_{+}^{+ }(\phi)+\sigma_{-}^{+}(\phi)+\sigma_{-}^{-}(\phi)},\] where \(A\) with subscripts denote the cross section asymmetries of \(ep\to e^{\prime}p^{\prime}\gamma\) at certain beam (first subscript) and target (second subscript) polarization sign ("U" stands for unpolarized and "T" for transverse polarized). Note that there are two independent transverse polarization direction of proton: \(UT_{x}\) is in the hadronic plane and \(UT_{y}\) is perpendicular to it. There, the superscript and subscript of \(\sigma\) refers to the charge of the lepton beam and beam (or target) spin projection. One can measure exclusive \(ep\to e^{\prime}p^{\prime}\gamma\) cross section with different beam and target polarization since the spin asymmetries give the access to different CFFs through the interference term \(\mathcal{I}\), the BH and DVCS process. At leading-order and leading-twist, the relation linking observables and CFFs for \(ep\to e^{\prime}p^{\prime}\gamma\) process have been derived as [71; 72; 32]: \[A_{\text{UT, DVCS}}^{\sin(\phi-\phi_{*})} \propto\left[\text{Im}\left(\mathcal{H}\mathcal{E}^{*}\right)-\xi \,\text{Im}\left(\widetilde{\mathcal{H}}\widetilde{\mathcal{E}}^{*}\right) \right], \tag{21}\] \[A_{\text{UT,I}}^{\sin(\phi-\phi_{*})\cos\phi} \propto\text{Im}\left[-\frac{t}{4M^{2}}\left(F_{2}\mathcal{H}-F_{1} \mathcal{E}\right)\right.\] (22) \[\left.+\,\xi^{2}\left(F_{1}+\frac{t}{4M^{2}}F_{2}\right)\left( \mathcal{H}+\mathcal{E}\right)\right.\] \[\left.-\xi^{2}\left(F_{1}+F_{2}\right)\left(\widetilde{\mathcal{H }}+\frac{t}{4M^{2}}\widetilde{\mathcal{E}}\right)\right].\] These approximations illustrate that different experimental observables are sensitive to different CFFs. We can see that the above asymmetries have dependence on CFF \(\mathcal{E}\), which is important implication for our following study of the total angular momentum of different quarks within the proton. ## III Phenomenological parametrization of GPDs Assuming a factorized t-dependence, the quark GPD \(H^{q}\) is given by [26]: \[H^{q}(x,\xi,t)=H^{q}(x,\xi)\cdot F_{1}^{q}(t). \tag{23}\] The nucleon form factors in dipole form is given by: \[F_{1}^{\text{dipole}}\left(t\right)=\frac{1-\left(1+\kappa^{P}\right)t/4m_{N} ^{2}}{1-t/4m_{N}^{2}}\frac{1}{(1-t/0.71)^{2}}. \tag{24}\] For the function \(H^{q}\) (for each flavor \(q\)), the t-independent part \(H^{q}(x,\xi)\equiv H^{q}(x,\xi,t=0)\) is parametrized by a two-component form, \[H^{q}(x,\xi)\equiv H_{DD}^{q}(x,\xi,t=0)+\theta(\xi-|x|)D^{q}\left(\frac{x}{\xi }\right), \tag{25}\] where \(D^{q}\left(\frac{x}{\xi}\right)\) is the D-term, set to \(0\) in our following calculation. And \(H_{DD}^{q}\) is the part of the GPD which is obtained as a one-dimensional section of a two-variable double distribution (DD) \(F^{q}\), imposing a particular dependence on the skewedness \(\xi\), \[H_{DD}^{q}(x,\xi)=\int_{-1}^{1}d\beta\int_{-1+|\beta|}^{1-|\beta|}d\alpha\delta( x-\beta-\alpha\xi)F^{q}(\beta,\alpha). \tag{26}\] For the double distributions, entering Eq. 26, we use the following model, \[F^{q}(\beta,\alpha)=h(\beta,\alpha)q(\beta), \tag{27}\] where \(q(\beta)\) is the forward quark distribution (for the flavor \(q\)) and where \(h(\beta,\alpha)\) denotes a profile function. In the following estimates, we parametrize the profile function through a one-parameter ansatz, following [62; 63; 26]: \[h(\beta,\alpha)=\frac{\Gamma(2b+2)}{2^{2b+1}\Gamma^{2}(b+1)}\frac{\left[(1-| \beta|)^{2}-\alpha^{2}\right]^{b}}{(1-|\beta|)^{2b+1}}. \tag{28}\] For \(\beta>0\), \(q(\beta)=q_{\rm val}(\beta)+\bar{q}(\beta)\) is the ordinary PDF for the quark flavor \(q\). In this work, we use IMParton as input [73]. The negative \(\beta\) range corresponds to the antiquark density: \(q(-\beta)=-\bar{q}(\beta)\). The parameter \(b\) characterizes to what extent the GPD depends on the skewness \(\xi\), and fixed to 1 in this work. The spin-flip quark GPDs \(E_{q}\) in the factorized ansatz are given by: \[E_{q}(x,\xi,t)=E_{q}(x,\xi)\cdot F_{2}^{q}(t)/\kappa^{q}. \tag{29}\] Here \(F_{2}^{q}(t)\) denotes the Pauli FF for quark flavor \(q\), and is parameterized by: \[F_{2}^{q}=\frac{\kappa^{q}}{\left(1-t/4m_{p}^{2}\right)\cdot\left(1-t/m_{D}^{2 }\right)^{2}}, \tag{30}\] where \(\kappa_{q}\) is the anomalous magnetic moment of quarks of flavor \(q\), \(\kappa^{u}=2\kappa^{p}+\kappa^{n}=1.67\), \(\kappa^{d}=\kappa^{p}+2\kappa^{n}=-2.03\). Same as Eq. 25, the t-independent part of the quark GPDs, \(E_{q}(x,\xi)\) is defined as: \[E_{q}(x,\xi)=E_{q}^{DD}(x,\xi)-\theta(\xi-|x|)D_{q}\left(\frac{x}{\xi}\right). \tag{31}\] The part of the GPD \(E\) that can be obtained from the double distribution has a form analogous to the spin-nonflip case: \[E_{q}^{DD}(x,\xi)=\int_{-1}^{1}d\beta\int_{-1+|\beta|}^{1-|\beta|}d\alpha\delta (x-\beta-\alpha\xi)K_{q}(\beta,\alpha), \tag{32}\] there, \(K_{q}(\beta,\alpha)\) is given by: \[K_{q}(\beta,\alpha)=h(\beta,\alpha)e_{q}(\beta), \tag{33}\] and \(e_{q}(\beta)\) denotes the spin-flip can be written as: \[e_{q}(x)=A_{q}\cdot q_{\rm val}(x)+B_{q}\cdot\delta(x), \tag{34}\] with: \[A_{q} =\frac{2J_{q}-M_{q}^{(2)}}{M_{q_{\rm val}}^{(2)}},\] \[B_{u} =2\left[\frac{1}{2}\kappa_{u}-\frac{2J_{u}-M_{u}^{(2)}}{M_{u_{\rm val }}^{(2)}}\right], \tag{35}\] \[B_{d} =\kappa_{d}-\frac{2J_{d}-M_{d}^{(2)}}{M_{d_{\rm val}}^{(2)}}.\] By defining the total fraction of the proton momentum carried by the quarks and antiquarks of flavor \(q\) as: \[M_{2}^{q}=\int_{0}^{1}dxx[q(x)+\bar{q}(x)]=\int_{0}^{1}dxx\left[q_{\rm val} \ (x)+2\bar{q}(x)\right], \tag{36}\] and the momentum fraction carried by the valence quarks as: \[M_{2}^{q_{\rm val}}=\int_{0}^{1}dxxq_{\rm val}\ (x). \tag{37}\] The parameterizations of \(\widetilde{H}\) and \(\widetilde{E}\) are introduced in [62; 27; 64; 65]. While parameterization of \(\widetilde{H}\), we use polIMParton as input [74]. In this model, the total angular momentum carried by u-quarks and d-quarks, \(J_{u}\) and \(J_{d}\), are free parameters in the parameterization of the spin-flip GPD \(E_{q}(x,\xi,t)\). Therefore, this parameterization can be used to study the sensitivity of hard electroproduction observables to variations in \(J_{u}\) and \(J_{d}\). ## IV Distributions of invariant and final-state kinematics There is a package of Monte-Carlo (MC) simulations of DVCS and BH processes called MILOU [75]. We use this software to generate 5 million events of EicC and EIC. We use the PARTONS (PARtonic Tomography Of Nucleon Software) package as the observables input [76]. Thus, we can make some pseudo data for subsequent theoretical calculations. We focus on two future experiments (EIC and EicC), and assume the beam energy of incoming electron and incoming proton with \(E_{e}=3.5\) GeV, \(E_{p}=20\) GeV at EicC [77], \(E_{e}=5\) GeV, \(E_{p}=100\) GeV at EIC [78]. We propose to do the measurement of spin azimuthal asymmetries in deeply virtual Compton scattering on transverse polarized proton. Besides the scattered electron, real photon and the scattered proton will be measured after the incoming unpolarized electron. Transverse Target-Spin Asymmetry (\(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}\)) will be extracted from the data. The EicC facility can offer the beam integrated luminosity up to 50 fb\({}^{-1}\), which corresponds to the effective running time within one year [77]. EicC also has a large kinematic acceptance capacity, which can complement the current vacant data. Compared to EicC, EIC offer the beam integrated luminosity up to 60 fb\({}^{-1}\) in less running time [78; 79]. Combining with the EIC and EicC experiments, high precision data of most kinematic regions will be available. In order to efficiently generate the events in the kinematic region of interests, we apply the following kinematical ranges for the Monte-Carlo sampling: \(10^{-4}<x_{B}<1\), 1 GeV\({}^{2}\)\(<Q^{2}<100\) GeV\({}^{2}\), and \(10^{-3}\) GeV\({}^{2}\)\(<-t<3\) GeV\({}^{2}\). Fig. 3 and Fig 4 show the coverage of the momentum vs polar angles for final state electrons, real photons and scattered protons coming from DVCS and BH process at EicC and EIC. We see that the final proton having a large fraction of the momentum of the incoming proton and a small scattering angle. Especially, most protons locate at very small polar angles, and the momentum difference with beam is so small that we need very good momentum resolution for the forward detector. The final electron having a larger scattering angle than the final proton. According to the distribution of the final state particles, we can place the detectors appropriately to collect more valid examples. Fig. 5 and Fig. 6 show the cross-section weighted invariant kinematics distributions of \(ep\to e^{\prime}p^{\prime}\gamma\) reaction at EicC and EIC. These color z-axis distribution were weighted by the cross section computed in VGG model built in the MILOU software and shown in Log z scale. We can see that, the range of \(Q^{2}\) covers from 1.0 GeV\({}^{2}\) to 10.0 GeV\({}^{2}\), \(x_{B}\) lies between 0.003 and 0.05, and \(t\) goes from 0 down to -0.2 GeV\({}^{2}\), most of the events are in this area. Comparing the results of EicC and EIC, we can see that EIC has more data in the smaller \(x_{B}\) and smaller \(-t\) region than EicC. small \(t\). In order to make sure that the recoiled proton can be detected by forward detector, we assumed some constraints on the detection of final state protons. This low-t acceptance eliminates many forward events, taking EicC as an example (Fig. 8). Based on the event selection criteria discussed above, the number of events in each bin is calculated with the following formula, \[N=\sigma^{avg}\,\cdot\,\,\text{Lumi}\,\cdot\,\,\text{Time}\,\cdot\epsilon_{ \text{eff}}\cdot\Delta x_{B}\cdot\Delta t\cdot\Delta Q^{2}, \tag{38}\] where \(N\) is the total events in each kinematical bins, \(\sigma^{avg}\) is the average of the four cross section with different electron and proton beam polarization directions, "Lumi" is the beam luminosity, "Time" is the beam duration, and \(\epsilon_{\text{eff}}\) is the overall efficiency of detector, and the rest denotes the sizes of the kinematical bins. In this work, we conservatively assumed an acceptance of final state particles, which is 25 % at EIC and 20 % at EicC [77; 78]. The counts of events in each bin is denoted as \(N^{++}\), \(N^{+-}\), \(N^{-+}\), and \(N^{--}\), corresponding to different electron and nucleon polarization directions. One can obtain the asymmetries quantities of the target spin asymmetry (\(A_{TS}\)): \[A_{TS}=\frac{N^{++}+N^{-+}-N^{+-}-N^{--}}{N^{++}+N^{+-}+N^{-+}+N^{--}}\frac{1}{ P_{T}}, \tag{39}\] where \(P_{T}\) stands for the polarization degree of nucleon (assumed as 70 %) [77; 78]. Considering that the asymmetries quantities are in several percent level, we use the unpolarized events by MILOU to do the projection, and the total event number of all polarization conditions is denoted as \(N\). Thus the absolute statistical uncertainty of the asymmetries quantities can be expressed approximately as: \[\delta A_{TS}\approx\frac{1}{P_{T}}\frac{1}{\sqrt{N}}. \tag{40}\] Fig. 9 and Fig. 10 show the statistical errors projection in a low \(Q^{2}\) bin between 1 and 3 GeV\({}^{2}\) for EicC and EIC experiments. We focus on small \(x_{B}\) and \(-t\) region, and divide the \(x_{B}\) vs. \(-t\) plane into very small bins. We see in these plots that the statistical uncertainty goes up with \(x_{B}\) increasing. For most of the data at EicC and EIC, the projected statistical uncertainty is smaller than 3 %. When \(x_{B}\) increasing to around 0.12, the statistical uncertainty is around 5 %. These precise data will be of great help to theoretical research in the future. Now we can give the pseudo-data of the asymmetry of the cross-section in the area of interest at EicC and EIC. We divide \(x_{B}\), \(t\), and \(Q^{2}\) in different bins, show in Tab. 1. This table corresponds to the Fig. 11 and Fig. 12. For the case where only \(x_{B}\), \(t\) or \(Q^{2}\) changes, we applied a similar division approach. Here \(x_{B}\) ranges from 0.01 to 0.17 in steps of 0.02 (\(t:-0.11\sim-0.09\) GeV\({}^{2}\), \(Q^{2}:1.13\sim 1.38\) GeV\({}^{2}\)), \(t\) ranges from -0.19 GeV\({}^{2}\) to -0.03 GeV\({}^{2}\) in steps of 0.02 (\(x_{B}:0.01\sim 0.03\), \(Q^{2}:1.13\sim 1.38\) GeV\({}^{2}\)) and \(Q^{2}\) ranges from 1.13 GeV\({}^{2}\) to 3.13 GeV\({}^{2}\) in steps of 0.25 (\(x_{B}:0.01\sim 0.03\), \(t:-0.11\sim-0.09\) GeV\({}^{2}\)). As shown in Fig. 11, EicC provides large phase space coverage and good statistics, especially for small \(x_{B}\),\(-t\) and \(Q^{2}\) regions. The similar results at EIC [83] are shown in Fig. 12. Since we also divide the \(Q^{2}\) into small bins, the statistical errors of pseudo-data in Fig. 11 and Fig. 12 are much larger than those shown in Fig. 9 and Fig. 10. We develop a code to calculate observables in the exclusive reaction \(ep\to e^{\prime}p^{\prime}\gamma\) to LO precision in perturbative theory. This calculation follows the VGG model described in Sec. III. In order to compare the results from theoretical calculations with the TTSA amplitudes pseudo data in Fig. 13, the \(\chi^{2}_{\text{exp}}\) is defined as: \[\begin{array}{l}\chi^{2}_{\text{exp}}\left(J_{u},J_{d}\right)=\\ \left[\frac{A^{\text{sin}(\phi-\phi_{s})\cos\phi}}{\left(\frac{1}{\text{ Pseudo data}}\right)^{-}A^{\text{sin}(\phi-\phi_{s})\cos\phi}_{\text{theory}}\right]^{2}}{ \delta A^{2}_{\text{ratio}}+\delta A^{2}_{\text{syst}}}.\end{array} \tag{41}\] Figure 8: The cross-section weighted momentum and polar angles distributions of the scattered protons with the geometric cut. The square breach at the right side shows the eliminated data with proton momentum larger than 99 % of beam momentum and scattering angle smaller than 2 mrad. Figure 7: Kinematic range in the \(x\), \(Q^{2}\) plane at EicC (\(\sqrt{s}=16.7\) GeV) and EIC (\(\sqrt{s}=45\) GeV) [80; 81; 82]. The hatched areas indicate the areas simulated in this work, which correspond to \(0.01\leq y\leq 0.85\). The red dashed line and green dashed line indicate \(y=0.6\). \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \(x_{B}\) & \(t\) (GeV\({}^{2}\)) & \(Q^{2}\) (GeV\({}^{2}\)) & \(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}\pm stat\) \\ \hline \(0.006\) & 0.10 & 1.25 & -0.089\(\pm\)0.007 \\ \hline \(0.01\) & 0.10 & 1.25 & -0.168\(\pm\)0.016 \\ \hline \(0.1\) & 0.12 & 2.50 & -0.142\(\pm\)0.020 \\ \hline \end{tabular} \end{table} Table 2: Asymmetries with polarized electron beam and proton beam at EicC. Figure 11: Asymmetries with polarized electron beam and proton beam in some typical bins at EicC. \begin{table} \begin{tabular}{|l|l|l|l|} \hline \(x_{B}\) & \(t\) (GeV\({}^{2}\)) & \(Q^{2}\) (GeV\({}^{2}\)) & \(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}\pm stat\) \\ \hline \(0.002\) & 0.10 & 1.25 & -0.225\(\pm\)0.005 \\ \hline \(0.006\) & 0.10 & 1.25 & -0.172\(\pm\)0.008 \\ \hline \(0.01\) & 0.10 & 1.25 & -0.121\(\pm\)0.007 \\ \hline \(0.1\) & 0.12 & 2.50 & -0.020\(\pm\)0.002 \\ \hline \end{tabular} \end{table} Table 3: Asymmetries with polarized electron beam and proton beam at EicC. Figure 12: Asymmetries with polarized electron beam and proton beam in some typical bins at EIC. Figure 10: The statistical errors projection of the Transverse Target-Spin Asymmetry at low \(Q^{2}\) at EIC. We calculate the statistical errors at each bin center. The right axis shows how large the statistical errors are. Figure 13: Asymmetries with polarized electron beam and proton beam in small \(x\) region at EicC (Tab. II) and EIC (Tab. III). Figure 9: The statistical errors projection of the Transverse Target-Spin Asymmetry at low \(Q^{2}\) at EicC. We calculate the statistical errors at each bin center. The right axis shows how large the statistical errors are. There we need to consider the systematic errors. Based on the previous experiments [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51], we make a conservative estimate for EicC and EIC. Thus, for EicC and EIC, we assume experimental systematic errors are 10 %. The constraints on \(J_{u}\) and \(J_{d}\) obtained for the extracted TTSA amplitudes from the pseudo data are shown in Fig. 13. We calculate the TTSA amplitudes for \(J_{u}\) (\(J_{d}\)) ranging from 0 to 1 (-1 to 1) in steps of 0.2, and set the D-term = 0 (\(D^{q}\left(\frac{x}{\xi}\right)\) in Eq. 25). Fig. 14 shows the model-dependent constraint on u-quark total angular momentum \(J_{u}\) vs d-quark total angular momentum \(J_{d}\) in the same kinematic region as HERMES [68; 69]. Here we only consider the influences from statistical errors. The result of EicC, which is shown in Fig. 14, can be expressed as \[J_{u}+J_{d}/2.9=0.41\pm 0.06, \tag{42}\] and the result of EIC is \[J_{u}+J_{d}/3.0=0.39\pm 0.04. \tag{43}\] If we consider both statistical and systematic errors (\(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}=-0.142\pm 0.020\pm 0.014\) at EicC, \(A_{UT}^{\sin(\phi-\phi_{s})\cos\phi}=-0.020\pm 0.002\pm 0.002\) at EIC), the result (shown in Fig. 15) is \[J_{u}+J_{d}/2.9=0.41\pm 0.08, \tag{44}\] for EicC, and \[J_{u}+J_{d}/3.0=0.39\pm 0.06. \tag{45}\] for EIC. The uncertainty is propagated from the TTSA amplitudes uncertainty of the pseudo data, and experimental systematic errors dominate. According to the results of HERMES [68; 69; 84], \[J_{u}+J_{d}/2.9=0.42\pm 0.21, \tag{46}\] we ignore the effects of parameter b and D-term. As the Fig. 15 shows, EicC and EIC have higher accuracy to obtain smaller uncertainty for constraint on u-quark and d-quark total angular momentum. Since EIC and EicC can provide a large amount of accurate data in the small \(x\) region, we performed some calculations in this region. Both statistical and systematic errors are considered in these results. At \(x=0.01\), the results of EicC and EIC are shown in Fig. 16, where EicC is \[J_{u}+J_{d}/2.6=0.39\pm 0.05, \tag{47}\] and EIC is \[J_{u}+J_{d}/2.7=0.38\pm 0.05. \tag{48}\] In the smallest \(x\) area that EicC can provide, we obtained the flowing results, where \[J_{u}+J_{d}/2.5=0.38\pm 0.05, \tag{49}\] is the result of EicC shown in Fig. 17. The result of EIC in this kinematic region is \[J_{u}+J_{d}/2.5=0.39\pm 0.05. \tag{50}\] As Fig. 7 shows, EIC also provides accurate data in the area of \(x\sim 0.002\). In this very small \(x\) region, we present the result of EIC, \[J_{u}+J_{d}/2.4=0.35\pm 0.04, \tag{51}\] which is shown in Fig. 18. The results of EicC and EIC are both within the error range of HERMES and both have small errors. Without Figure 14: The result of model-dependent constraint on u-quark total angular momentum \(J_{u}\) vs d-quark total angular momentum \(J_{d}\) at EIC and EicC compared with HERMES [68; 69]. Only statistical errors are considered. Figure 15: The result of model-dependent constraint on u-quark total angular momentum \(J_{u}\) vs d-quark total angular momentum \(J_{d}\) at EIC and EicC compared with HERMES [68; 69]. Both statistical and systematic errors are considered. precise experiments, it is difficult for theoretical work to move forward. These precise experimental data will help us gain a deeper understanding of the nucleon structure in the future. ## VI Discussions and Summary The internal structure of the nucleon is mysterious, and we explore it by various methods. After the EMC experiment, the researchers conducted many detailed studies of nucleon spins. The proposed GPDs theory opens new paths for the study of the three-dimensional structure and spin of nucleon. By Ji's sum rule, we find that GPDs are directly related to the total angular momentum carried by the partons. DVCS experiments are a good choice to obtain GPDs, although not quite directly extracted. In contrast to the great progress in studying GPDs on the theoretical side, relatively little progress has been made on the experimental side. Because the experiment requires high statistical accuracy, this means that extremely good detectors and very high luminosity are required. In this work, we simulated the DVCS process at EicC and EIC to study the internal structure of proton. The statistical errors of these two future experiments are predicted. According to the very small statistical errors, we find that the measurement accuracy of future DVCS experiment will be limited mostly by systematical errors. It seems that the accuracy of the EIC and EicC data will be greatly improved in the future when compared with the existing real data from different experiment groups. Advanced experiment equipment to reduce systematic errors and better detection of final state particles to reduce statistical errors. We believe that future EicC and EIC experiments will yield more accurate data than those predicted in this work. This has significant implications for future experimental studies of the internal structure of nucleon. With the excellent detectors and high accelerator luminosity, DVCS experiments at EicC and EIC will have a bright prospect. Based on the EIC and EicC measurements of TTSA high-precision pseudo-data, we can have a good study of the nucleon helicity-flip GPD \(E\). Through the VGG model, the GPD E is parameterized by the total angular momentum of the up and down quarks in the nucleon. With this model we combine DVCS experiments with nu Figure 16: The result of model-dependent constraint on u-quark total angular momentum \(J_{u}\) vs d-quark total angular momentum \(J_{d}\) in the region of \(x\sim 0.01\) at EIC and EicC. Both statistical and systematic errors are considered. Figure 17: The result of model-dependent constraint on u-quark total angular momentum \(J_{u}\) vs d-quark total angular momentum \(J_{d}\) in the region of \(x\sim 0.006\) at EIC and EicC. Both statistical and systematic errors are considered. Figure 18: The result of model-dependent constraint on u-quark total angular momentum \(J_{u}\) vs d-quark total angular momentum \(J_{d}\) in the region of \(x\sim 0.002\) at EIC. Both statistical and systematic errors are considered. -clean spin studies. According to the HERMES and JLab experiments constraint on the total angular momentum of quarks in the proton and neutron, we constraint on the total angular momentum carried by up quarks and down quarks inside the proton in future EIC and EicC experiments. There are different GPD models based on experimental and theoretical research to study the mysterious nucleon structure. Current research relies on models too heavily, we look forward to more precise experimental data to verify these theoretical research in the future. ###### Acknowledgements. We thank Prof. J. P. Chen and Dr. Korotkov for suggestions and discussions. This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences under the Grant NO. XDB34030301.
2301.07288
Momentum-space signatures of the Anderson transition in a symplectic, two-dimensional, disordered ultracold gas
We study Anderson Localization in two dimensional (2D) disordered spin-orbit systems described by the Gaussian symplectic ensemble using momentum-space signatures such as the coherent backscattering (CBS) anti-peak, and the coherent forward scattering (CFS) peak. Significantly, these momentum-space features are readily accessible in ultracold atom experiments through absorption imaging after time-of-flight expansion. The critical exponent and mobility edge of the metal-insulator transition are successfully obtained in this model through a finite-time analysis of the CBS width. An anomalous residual diffusion, unique to 2D, is identified at the transition point where the system changes from a metal to an insulator. A spin localization phenomenon is also observed in the deep localized regime.
Ehsan Arabahmadi, Daniel Schumayer, Benoit Gremaud, Christian Miniatura, David A. W. Hutchinson
2023-01-18T03:26:07Z
http://arxiv.org/abs/2301.07288v1
# Momentum-space signatures of the Anderson transition ###### Abstract We study Anderson Localization in two dimensional (2D) disordered spin-orbit systems described by the Gaussian symplectic ensemble using momentum-space signatures such as the coherent backscattering (CBS) anti-peak, and the coherent forward scattering (CFS) peak. Significantly, these momentum-space features are readily accessible in ultracold atom experiments through absorption imaging after time-of-flight expansion. The critical exponent and mobility edge of the metal-insulator transition are successfully obtained in this model through a finite-time analysis of the CBS width. An anomalous residual diffusion, unique to 2D, is identified at the transition point where the system changes from a metal to an insulator. A spin localization phenomenon is also observed in the deep localized regime. _Introduction--_ Anderson localization (AL), the disorder-induced suppression of wave transport by destructive interference, was first introduced [1] to explain the anomalous suppression of conductance in mesoscopic electron systems. It is, in fact, a general phenomenon, and an ubiquitous feature of any linear waves propagating in bulk random media. Since its conceptual inception, it has been observed (if indirectly) in a variety of very different systems [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Notably, over the past decade, ultracold atomic gases have provided a uniquely controllable experimental platform in which to directly observe and study AL in quantum systems [22; 23; 24; 25; 26; 27; 28]. In particular, the momentum distribution of the single-particle wavefunction has provided a directly observable signature of both weak localisation, and strong localisation through the coherent back-scattering (CBS) and coherent forward-scattering (CFS) peaks [29; 30; 31]. Their dynamic observation can be used to quantitatively characterise the three dimensional (3D) Anderson transition delineating an extended metallic regime from an insulating one [30; 31]. Historically, the first powerful phenomenological description of AL was the one-parameter scaling theory [32; 33]. It relies on the hypothesis that all transport properties of a disordered system depend only on the dimensionless conductance \(g\). The scaling behavior of \(g\) with the system size \(L\) is encapsulated in the function \(\beta(g)=\frac{d\ln g}{d\ln L}\), and obtained from a smooth interpolation between the limiting metallic and insulating expected asymptotics. This theory predicts the existence of a metal-insulator transition (MIT) in dimension 3 [6; 34]. It was also conjectured that there are distinct universality classes based on the symmetries of the Hamiltonian: orthogonal, unitary, and symplectic. For example, ultracold atoms spreading in an optical speckle potential, where both time and spatial rotational symmetries are present, are well described by the Gaussian Orthogonal Ensemble (GOE) of random matrix theory [35]. It is also well-known that disordered systems within this symmetry class are always localised for any disorder strength in dimension two or less, whereas they exhibit a metal-insulator transition in dimension three. In particular, both the mobility edge and critical exponent of this Anderson transition have been determined through the scaling behaviors of the CBS width and CFS contrast [36; 37]. On the other hand, AL within the Gaussian Unitary (GUE) and Symplectic (GSE) Ensembles has received less experimental attention in the ultracold atom community [38; 3; 2; 4]. In this Letter, we address the GSE case by considering spin-\(\frac{1}{2}\) particles in a two dimensional (2D) square lattice with onsite disorder and random spin rotation during hopping. As is well known, spin-orbit (SO) coupling induces a MIT in two dimensions at low enough disorder [39; 4]. We use the scaling properties of the CBS (anti)-peak present in the momentum distribution of the particles to extract the mobility edge and critical exponent of this transition. The scaling behaviour of the CFS peak contrast will be addressed in future work. _Theoretical model--_ Our tight-binding Hamiltonian for noninteracting spin-\(\frac{1}{2}\) particles reads: \[H=J\sum_{\langle i,j\rangle}\psi_{i}^{\dagger}U_{ij}\psi_{j}+\sum_{i}w_{i}\psi_{i} ^{\dagger}\psi_{i}\,, \tag{1}\] where the sums run over all nearest-neighbor lattice site pairs \(\langle i,j\rangle\) and lattice sites \(i\), respectively. The field operator \(\psi_{i}^{\dagger}=(\psi_{i\uparrow}^{\dagger},\psi_{i\downarrow}^{\dagger})\) is the 2-component row-spinor built from the creation operators \(\psi_{i\sigma}^{\dagger}\) at site \(i\) and spin components \(\sigma=\uparrow\), and \(\downarrow\). The onsite disorder potentials \(w_{i}\) are independent random variables uniformly distributed over \([-W/2,W/2]\), where \(W\) is the disorder strength. Hereafter, we set the hopping amplitude to \(J=1\), the lattice spacing \(a=1\) and \(\hbar=1\). Following [38], the random spin rotation during hopping is described by the \(SU(2)\) matrix \[U_{ij}=\begin{bmatrix}e^{\mathrm{i}\alpha_{ij}}\cos(\beta_{ij})&e^{\mathrm{i} \gamma_{ij}}\sin(\beta_{ij})\\ -e^{-\mathrm{i}\gamma_{ij}}\sin(\beta_{ij})&e^{-\mathrm{i}\alpha_{ij}}\cos( \beta_{ij})\end{bmatrix} \tag{2}\] where the angles \(\alpha_{ij}\) and \(\gamma_{ij}\) are independent random variables uniformly distributed over \([0,2\pi)\) while the angles \(\beta_{ij}\) are independent random variables distributed over \([0,\pi/2]\) with probability density function \(g(\beta)=\sin(2\beta)\). Since \(H\) is Hermitian, \(U_{ij}=U_{ji}^{\dagger}\) implying \(\alpha_{ij}=-\alpha_{ji}\), and similarly for \(\gamma_{ij}\) and \(\beta_{ij}\). One recovers the GOE case for \(\beta_{ij}=0\) and constant uniform angles \(\alpha_{ij}\) and \(\gamma_{ij}\). Noticeably, \(H\) is invariant under time reversal, \(THT^{-1}=H\), where \(T\) is the time reversal operator for spin-\(\frac{1}{2}\) systems and satisfying \(T^{2}=-1\)[40]. As a consequence, each eigenvalue \(\varepsilon_{n}\) of \(H\) is doubly degenerate (Kramers' degeneracy) with orthonormal eigenstates of the form \(\ket{\varphi_{n}}\) and \(\ket{T\varphi_{n}}\). Importantly, the Hamiltonian dynamics cannot couple time-reversed states, irrespectively of the disorder configuration. Indeed \[\bra{T\psi}e^{-\mathrm{i}Ht}\ket{\psi}=\\ \sum_{n}e^{-\mathrm{i}\varepsilon_{n}t}\Bigl{[}\bra{T\psi}\ket{ \varphi_{n}}\bra{\varphi_{n}}\ket{\psi}+\bra{T\psi}T\varphi_{n}\rangle\bra{T \varphi_{n}}\ket{\psi}\Bigr{]}.\] Using the \(\bra{T\psi}T\varphi_{n}\rangle=\bra{\varphi_{n}}\ket{\psi}\) relationship together with \(\bra{T\varphi_{n}}\ket{\psi}=-\bra{T\varphi_{n}}T^{2}\psi\rangle=-\bra{T\psi }\ket{\varphi_{n}}\), we see that the bracketed term in the sum above vanishes. As will be seen later, this very fact explains why a CBS dip, rather than a CBS peak, is observed in the momentum distribution for GSE systems. _Momentum distributions_-- To study the momentum-space signatures of AL, we consider the initial plane wave state \(\ket{\mathbf{k}_{0},\uparrow}\) at wave vector \(\mathbf{k}_{0}=(0,\pi/2)\) that we shape into a wave packet \(\ket{\psi_{0}}=\mathcal{F}(E,\delta E)\ket{\mathbf{k}_{0},\uparrow}\) at energy \(E\) by the filter operator \(\mathcal{F}(E,\delta E)\propto\exp\left[-(H-E)^{2}/(2\delta E^{2})\right]\). The parameter \(\delta E\) that controls the selected energy window around \(E\) should be as narrow as possible and simultaneously wide enough to keep a sufficient number of eigenstates [41]. We then compute the disorder-averaged momentum distributions \(n_{\sigma}(\mathbf{k},t)=\overline{|\langle\mathbf{k},\sigma|\exp(-\mathrm{i}Ht)|\psi_ {0}\rangle|^{2}}\) at energy \(E\) (\(\sigma=\uparrow,\downarrow\)). In the rest of the paper, we have chosen \(E=1\) and \(\delta E=0.035\) (in units of \(J\)). Fig. 1 shows the momentum distributions obtained at time \(t=100\) (in units of \(\hbar/J\)) for onsite disorder strength \(W=6.8J\) (localized phase as will be seen later). In the spin-preserving channel, we observe a CFS peak centered at \(\mathbf{k}_{0}\) on top of a flat diffusive background. In the spin-flipping channel, we observe a CBS anti-peak centered at \(-\mathbf{k}_{0}\) and dug into a flat background. Since \(\ket{-\mathbf{k}_{0},\downarrow}=T\ket{\mathbf{k}_{0},\uparrow}\), the dynamics cannot connect these two states and \(n_{\downarrow}(-\mathbf{k}_{0},t)=0\) at any time, irrespective of the disorder configuration averaging. The CBS dip is thus a genuine characteristics of GSE systems. In addition, we note that both backgrounds in each spin channels are flat. This can be traced back to the fact that the disorder-a Figure 1: Momentum distributions \(n_{\uparrow}(\mathbf{k},t)\) and \(n_{\downarrow}(\mathbf{k},t)\) obtained at time \(t=100\)\(\hbar/J\) for an initial state \(\ket{\mathbf{k}_{0},\uparrow}\) with \(\mathbf{k}_{0}=(0,\pi/2)\) filtered at energy \(E=1\) (in units of \(J\)). The linear size of the lattice is \(L=513\) (in units of \(a\)) and the onsite disorder strength is \(W=6.8\) (in units of \(J\)). The CFS peak and the CBS dip are clearly seen in their respective spin channels. At \(t=100\hbar/J\), the backgrounds in each spin channel have already reached their stationary and equal values (set to \(1/2\) by total probability conservation). However, we note that the CFS contrast has not yet reached the stationary value \(C_{F}^{\infty}=2\) expected for GSE systems. \begin{table} \begin{tabular}{l l l l l l} symmetry & \(d=1\) & \(d=2\) & \(d=3\) & \(d>3\) & system \\ \hline orthogonal & L & L & MIT & MIT & no spin-orbit coupling, no magnetic field \\ symplectic & L & MIT & MIT & MIT & spin orbit coupling \\ unitary & L & L & MIT & MIT & magnetic field \\ \end{tabular} \end{table} Table 1: Phases in symmetry classes and dimensions. Abbreviations: metal-insulator transition (MIT), only localized states (L). Corresponding review articles are Refs. [2; 3; 4] \(\overline{G(E)}=\overline{(E-H)^{-1}}\), which is a diagonal operator in momentum and spin spaces as disorder average restores translation and rotation invariances, has diagonal elements that do not depend on \(\mathbf{k}\) and \(\sigma\) but only on \(E\), i.e., \(\langle\mathbf{k}\sigma|\overline{G(E)}|\mathbf{k}\sigma\rangle=\bar{g}(E)\). This unusual property, that we have checked numerically, can be explained by the fact that the disorder-averaged Hamiltonian vanishes (\(\overline{H}=0\)), amounting to having a trivial \(\mathbf{k}\)-independent diagonal disorder-free Green's function \(\langle\mathbf{k}^{\prime}\sigma^{\prime}|G_{0}(E)|\mathbf{k}\sigma\rangle=\delta_{ \mathbf{k}\mathbf{k}^{\prime}}\delta_{\sigma\sigma^{\prime}}/(E+i0^{+})\), and by the fact that the various correlators \(\langle\mathbf{k}\sigma|\overline{H^{m}}|\mathbf{k}\sigma\rangle\), appearing in the Dyson series, are independent of \(\mathbf{k}\) for the uncorrelated hopping and on-site independent disorders that we consider here. A proof, for Gaussian disorder, can be found in the Appendix of [42]. This has to be contrasted with the standard situation of onsite disorder only where the disorder-averaged Hamiltonian exhibits a well-defined band structure \(\epsilon_{\mathbf{k}}\) in momentum space. This entails \(\mathbf{k}\)-dependent diagonal elements of the free Green's function \(\langle\mathbf{k}\sigma|G_{0}(E)|\mathbf{k}\sigma\rangle=(E-\epsilon_{\mathbf{k}}+i0^ {+})^{-1}\). Finally, from both diagrammatic approach and numerical computations, one can show that \(\bar{g}(E)=(E-\Sigma(E))^{-1}\), where the complex-valued scalar \(\Sigma(E)\) is the self-energy. Therefore, one expects not only the backgrounds in each spin channel to be flat in the Brillouin zone, but also to grow with the same scattering time scale \(\tau_{s}(E)=\hbar/(2|\mathrm{Im}(\Sigma)|)\), before reaching the same stationary values. Since \(n_{\downarrow}(-\mathbf{k}_{0},t)=0\) at any time, the flat diffusive background in the \(\downarrow\)-channel grows "around" the CBS dip. As time further increases, the CBS width shrinks and its temporal behaviour depends on whether the system is diffusive, localised, or critical. The CFS peaks develops and grows in the \(\uparrow\)-channel on a time-scale given by the localization time \(\tau_{\mathrm{loc}}\). It reaches a stationary peak-to-background relative contrast \(C_{F}^{\infty}\) at "infinite" times, \(t\gg\tau_{\mathrm{loc}}\). Based on the statistical properties of the eigenfunctions in the GSE ensemble, we expect \(C_{F}^{\infty}=2\) instead of the \(C_{F}^{\infty}=1\) for GOE systems. Note that, in Fig. 1, the momentum distributions are plotted at a time where the CFS peak has not yet reached its stationary value. Note also that deviations from the GSE value are expected when the localization length becomes too small, and comparable to the lattice constant, at large \(W\) values [43; 44]. On the other hand, by definition, the stationary CBS dip-to-background relative contrast is always \(C_{B}^{\infty}=1\), like in the GOE case. In the remainder of this Letter, we will focus on the CBS dynamics and leave the discussion of the CFS dynamics to a forthcoming paper. _CBS width dynamics--_ We define the CBS width \(\Delta k\) as the momentum size of the dip at half-maximum of the diffusive background in the spin-flipping channel. In the metallic regime, the CBS anti-peak continues to shrink in time and asymptotically tends to zero. At large enough times, its width is given by [41] \[\Delta k(t)=\sqrt{\frac{\ln 2}{D(E,W)\,t}}\quad\text{(metallic phase)}, \tag{3}\] where \(D(E,W)\) is the diffusion constant at energy \(E\) and disorder strength \(W\). In the insulating regime, the CBS width decreases until it asymptotically approaches a constant value which defines the localisation length at energy \(E\) and disorder strength \(W\) \[\Delta k(t\to+\infty)=\frac{1}{\xi_{\mathrm{loc}}(E,W)}\quad\text{( insulating phase)}. \tag{4}\] At fixed energy, \(\xi_{\mathrm{loc}}\sim|W-W_{c}|^{-\nu}\) diverges algebraically with a critical exponent \(\nu\) when approaching the critical point \(W_{c}\). Thus, \(\xi_{\mathrm{loc}}\) quickly exceeds the maximum linear size \(L\) of the lattice that is computationally manageable and the system appears diffusive (in other words, \(\Delta k\) sticks to the mesh size \(2\pi/L\) in momentum space). This is the reason why we resorts to finite-time scaling methods [45; 46; 30; 43; 31; 43] of \(\Delta k\), and introduce the length scale \(L_{t}\) through \(t=2\pi\rho(E)L_{t}^{2}\), where \(\rho(E,W)=(1/L)^{2}\sum_{n}\delta(E-\varepsilon_{n})\) is the disorder-averaged density of states (DoS) per unit surface of the system at energy \(E\) and disorder strength \(W\). _Finite-time scaling--_ Following the single-parameter scaling rationale [33], we assume that there exists a single correlation length \(\xi\) subsuming all the microscopic details of the system. This correlation length identifies with the localization length in the insulating regime. As a consequence, the inverse of the rescaled CBS width is a continuous and smooth function of the single variable \(L_{t}/\xi\) that we recast under the form: \[\Lambda\equiv[\Delta k\,L_{t}]^{-1}=F(z), \tag{5}\] where \(z=\eta(E,W)L_{t}^{1/\nu}\), \(\eta(E,W)=\xi^{-1/\nu}\) and \(F(z)\) is a function characteristic of the transition. Working at fixed energy, we now Taylor expand \(F(z)\) and \(\eta(E,W)\) up to some expansion orders, \(F(z)=\sum_{n=0}^{N}F_{n}z^{n}\) and \(\eta(E,W)=\sum_{m=1}^{M}b_{m}(W-W_{c})^{m}\)[48; 31] where we have set \(M=2\), and \(N=2\). For \(W<W_{c}\) we are in the diffusive side and for \(W>W_{c}\) we have localization. Within this approach, \(F_{n}\), \(b_{m}\), \(\nu\) and \(W_{c}\) are free parameters that we determine using a least-square fit of the gathered data for \(\Lambda\) at sufficiently long times. We plot in Fig. 2, the numerical points (dots) and the fitted curves (coloured lines) from which we obtain the estimates \(W_{c}=5.92\) and \(\nu=2.74\). In Fig. 3, we plot \(\Lambda(t)\) against \(t\) for different disorder strengths \(W\). As one can see, for \(W\) close to \(W_{c}\), \(\Lambda(t)\) is essentially constant at large enough times, showing that the CBS width \(\Delta k(t)\) at the transition has the same time dependence as in Eq.(3). Note that we have actually computed the evolution at longer times, where the plateaus are much better marked. This is an interesting result because it shows that the system still exhibits a residual diffusive motion at the critical point. This observation is consistent with Wegner's law, \(s=(d-2)\nu\)[49] which implies a vanishing critical exponent \(s=0\) for \(D\sim(W_{c}-W)^{s}\) in two dimensions and thus a constant diffusion coefficient. This behaviour has also been observed in [50]. To verify the validity of the one-parameter scaling hypothesis in this system, we have numerically extracted \(\xi(E,W)=|\eta(E,W)|^{-\nu}\) to collapse all data for \(\Lambda(t)\), obtained at different \(W\) and times, in Figs. 2 and 3, on a single scaling curve [38]. To construct the scaling function, in Fig. 4\(\ln\Lambda(t)\) is plotted as a function of \(\ln(1/L_{t})\) for different disorder strength \(W\) and then shifted horizontally by some quantity \(\ln\xi(E,W)\) to construct a smooth continuous curves when \(\ln\Lambda(t)\) is plotted as a function of \(\ln(\xi/L_{t})\). The correlation length \(\xi\), central to the one-parameter scaling hypothesis, identifies with the localization length \(\xi_{\rm loc}\) in the insulating phase. _Spin localization_-- Finally (not shown here), we have observed a spin localization phenomenon in the deep localized regime that we will address in a future work. In this regime, the CBS and CFS peaks become very wide. By broadening, the tails of the CBS dip decrease the background in the spin-flipping channel while the tails of the CFS peak do the opposite in the spin-preserving channel leading to an imbalanced spin population in the spin channels. Thus the system tends to retain its initial spin state in the deeply localised regime. _Conclusion_.-- We have analysed Anderson localization in an archetypical symplectic system, which is realized in a physical system if spin-orbit coupling is relevant. We have extracted the critical exponent and the critical disorder strength using a finite-time scaling analysis of Figure 4: Scaling function \(\ln\Lambda\) as a function of \(\ln(\xi/L_{t})\) at energy \(E=1\) (in units of \(J\)). The different colored pieces on the scaling curve correspond to the data obtained at different \(W\) (in units of \(J\)). The dashed lines are the fitted curves based on the one-parameter scaling hypothesis, see text. The horizontal gray dash-dotted line marks the separation between the extended and localized branches of the scaling function. The inset shows the correlation length \(\xi\) calculated from the CBS width \(\Delta k\) using the fitted parameters \(W_{c}=5.9155\) and \(\nu=2.7363\), see text. Figure 3: Solid lines: Inverse rescaled CBS width \(\Lambda(t)\) as a function of \(t\) for different values of the disorder strength \(W\) at fixed energy \(E=1\) (in units of \(J\)). Dashed lines: fits obtained using the Taylor expansion of the one-parameter scaling function \(F(z)\), plotted as a function of \(t\), see text. The thick solid black line corresponds to the critical point \(W=W_{c}=5.9155\). The critical exponent is \(\nu=2.7363\). At long times (not shown in the figure), for \(W<W_{c}\), \(\Lambda(t)\) displays plateaus, whereas for \(W>W_{c}\), \(\Lambda(t)\) behaves like \(1/\sqrt{t}\). Figure 2: Inverse scaled CBS width \(\Lambda(t)\) for times ranging from \(t=10^{2}\) to \(t=10^{3}\) (in units of \(\hbar/J\)), as functions of the disorder strength \(W\) (in units of \(J\)). The longer times correspond to darker curves. The energy is fixed at \(E=1\) (in units of \(J\)). All curves cross at the mobility edge \(W_{c}=5.92\) and the critical exponent is \(\nu=2.74\), values that have been extracted from fitting the Taylor expansion of \(F(z)\) to the numerical data (see text). The inset shows the smooth behavior of the disorder-average DoS per unit surface \(\rho(E,W)\) across the transition at energy \(E=1\). the coherent back-scattering anti-peak. The choice of a Gaussian symplectic ensemble confirms the universality of the critical exponent in the symplectic symmetry class. Such an analysis of this momentum-space signature of the phase transition is also accessible in experiment through time-of-flight expansion and absorption imaging. Furthermore, we have demonstrated that, because the CBS width scales as \(t^{-1/2}\) at the transition, there exists a residual diffusion in the transition region in contrast to three-dimensional systems with a metal-insulator transition. This residual diffusion is a characteristics of any two-dimensional system in which a metal-insulator transition is observed. Future work will study the Anderson transition by monitoring the CFS contrast. However, early computations have shown an additional difficulty in the localized phase: both CBS and CFS peaks exhibits slowly decaying tails in momentum space, leading to an imbalance between the two backgrounds and making an accurate measurement of the CFS contrast troublesome. We are investigating whether this imbalance is solely due to these tails or if it could be a signature of Anderson localization in the spin degrees of freedom. The Authors acknowledge discussions with Drs John Helm and Jean Decamp useful to the establishment of this project.
2307.03360
Evaluating Biased Attitude Associations of Language Models in an Intersectional Context
Language models are trained on large-scale corpora that embed implicit biases documented in psychology. Valence associations (pleasantness/unpleasantness) of social groups determine the biased attitudes towards groups and concepts in social cognition. Building on this established literature, we quantify how social groups are valenced in English language models using a sentence template that provides an intersectional context. We study biases related to age, education, gender, height, intelligence, literacy, race, religion, sex, sexual orientation, social class, and weight. We present a concept projection approach to capture the valence subspace through contextualized word embeddings of language models. Adapting the projection-based approach to embedding association tests that quantify bias, we find that language models exhibit the most biased attitudes against gender identity, social class, and sexual orientation signals in language. We find that the largest and better-performing model that we study is also more biased as it effectively captures bias embedded in sociocultural data. We validate the bias evaluation method by overperforming on an intrinsic valence evaluation task. The approach enables us to measure complex intersectional biases as they are known to manifest in the outputs and applications of language models that perpetuate historical biases. Moreover, our approach contributes to design justice as it studies the associations of groups underrepresented in language such as transgender and homosexual individuals.
Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan
2023-07-07T03:01:56Z
http://arxiv.org/abs/2307.03360v1
# Evaluating Biased Attitude Associations ###### Abstract. Language models are trained on large-scale corpora that embed implicit biases documented in psychology. Valence associations (pleasantness/unpleasantness) of social groups determine the biased attitudes towards groups and concepts in social cognition. Building on this established literature, we quantify how social groups are valenced in English language models using a sentence template that provides an intersectional context. We study biases related to age, education, gender, height, intelligence, literacy, race, religion, sex, sexual orientation, social class, and weight. We present a concept projection approach to capture the valence subspace through contextualized word embeddings of language models. Adapting the projection-based approach to embedding association tests that quantify bias, we find that language models exhibit the most biased attitudes against gender identity, social class, and sexual orientation signals in language. We find that the largest and better-performing model that we study is also more biased as it effectively captures bias embedded in sociocultural data. We validate the bias evaluation method by overperforming on an intrinsic valence evaluation task. The approach enables us to measure complex intersectional biases as they are known to manifest in the outputs and applications of language models that perpetuate historical biases. Moreover, our approach contributes to design justice as it studies the associations of groups underrepresented in language such as transgender and homosexual individuals. contextualized word embeddings, language models, AI bias, intersectional bias, psycholinguistics 2023 Shiva Omrani Sabbaghi, Robert Wolfe, and Aylin Caliskan. 2023. Evaluating Biased Attitude Associations of Language Models in an Intersectional Context. In _AAAI/ACM Conference on AI, Ethics, and Society (AIES '23), August 8-10, 2023, Montreal, QC, Canada._, 12 pages. [https://doi.org/10.1145/3600211.3604666](https://doi.org/10.1145/3600211.3604666) ## 1. Introduction Static word embeddings (Golovolovolov et al., 2012; Salakhari et al., 2013) are known to reflect the semantics and biases of the populations that produce the data on which they are trained (Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015). While problematic for their use in machine learning applications which are affected by these biased features (Sutskever et al., 2015; Sutskever et al., 2015), static word embeddings have also allowed for the development of new social scientific approaches to studying societal norms and biases (Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015). However, static word embeddings have been replaced as the dominant representational paradigm in natural language processing (NLP) by language models (Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015; Sutskever et al., 2015), which form contextualized word embeddings, dynamic representations of words that undergo change over the course of the neural network based on the words which occur around them. Prior work suggests that, as this process of "contextualization" occurs, a contextualized representation becomes more semantically similar to the words which occur in context around it (Sutskever et al., 2015). How can a principled and generalizable test for social bias, including intersectional bias, be designed for such dynamic representations? The present research proposes that, rather than studying changes in the representation of a certain word being evaluated for bias, one might instead look to the effects that a biased word has on its surrounding context. That is, instead of finding ways to compensate for the effects of contextualization when assessing bias, one can use the dynamic properties of language models to design a generalizable bias assessment method specifically suited to the paradigm of contextualization. The first challenge in designing a bias test for contextualized word embeddings, however, is that they are not easy to analyze using common mathematical methods for measuring similarity between word embeddings, such as cosine similarity. While prior work has used principal component analysis (PCA) of subtracted vectors to find the dimension that maximizes the variance between biased representations (Sutskever et al., 2015), contextualized word embeddings are known to contain high-magnitude neurons which are often not semantic in nature (Sutskever et al., 2015; Sutskever et al., 2015), preventing the development of a generalizable method for assessing semantic biases based on PCA. The present research addresses this problem by using a maximum margin support vector classifier to learn a semantic property of the contextualized word embedding space: namely, the valence (pleasantness vs. unpleasantness) subspace (Sutskever et al., 2015), onto which embeddings can be projected to measure their semantic properties. In social psychology, valence associations determine the biased attitudes towards social groups (Sutskever et al., 2015). For example, are European American men or African American women perceived more positively valenced? Our method for isolating semantics in contextual spaces also allows for the introduction of a generalizable statistical test to quantify bias in language models by measuring the effects of contextualization. This work applies these methods to five language models (GPT-Neo, XLNet, ALBERT, RoBERTa, and T5) of varying architectures and demonstrates the ability to measure both contextualized word embedding semantics and bias in language models. Code and data are made public at [https://github.com/shivaomrani/LLM-Bias](https://github.com/shivaomrani/LLM-Bias). The contributions of this research are outlined below: 1. A method based on learning a maximum margin subspace to learn the valence subspace of an embedding space is introduced for isolating semantics in the highly contextual and anisotropic upper layers of contextualizing language models. Across five evaluated language models and without resort to pooling methods or postprocessing the contextualized embedding space, the approach is demonstrated to be robust to the geometry of contextualized embedding spaces, and outperforms a cosine similarity based method in the upper layers of every model. In GPT-Neo (GPT-Neo, 2018), scores on the ValNorm intrinsic evaluation task (Peterson et al., 2019), which measures the correlation (Pearson's \(\rho\)) of human-rated valence with valence associations in models, fall to 0.56 in the top layer of the model when using cosine similarity; with the maximum margin method, the score remains high, at 0.81. A similar result is obtained for the four other language models studied, indicating the utility of the method for studying semantics in highly contextual and anisotropic embedding spaces. 2. A statistical bias measurement based on the Word Embedding Association Test (WEAT) (Kalman et al., 2017) is introduced to study differential biases arising from the process of contextualization in language models. The word "person" is placed into generated intersectional contexts with a wide variety of words reflecting social groups. "Person" is contextualized by these contexts, and its embedded representation is obtained from the top layer of a language model. The differential bias between two words is obtained by measuring their effect on the contextualized representation of the word "person" when placed in otherwise identical contexts, as measured based on the projection product with valence (pleasantness vs. unpleasantness) subspace. The method captures a wide variety of biases in language models related to age, education, gender, height, intelligence, literacy, race, religion, sex, sexual orientation, social class, and weight. The results reveal pronounced biases across five language models associated with gender identity (average effect size \(d=0.60\) - "cisgender" and "transgender"), social class (average effect size \(d=0.48\) - "affluent" and "destulture"), and sexual orientation (average effect size \(d=0.42\) - "heterosexual" and "homosexual"). 3. A method is introduced for studying biases without need for a binary, differential test. A permutation is used to generate a large sample of sentences that include social group signals in an intersectional context, each ending with the word "person." The embedded representation of person is computed from each sentence, and the projection product is obtained with the maximum margin subspace. The top 10% most pleasant sentences are returned, and the top 10% most unpleasant sentences are returned. In GPT-Neo, more than 90% of the most pleasant sentences contain the word "heterosexual," while more than 99% of the most unpleasant phrases contain the word "homosexual," again reflecting significant biases related to sexual orientation. Similar biases exist for gender identity in GPT-Neo, with more than 70% of the most pleasant phrases including the word "cisgender," and more than 93% of the most unpleasant phrases including the word "transgender." The results of this research have implications both for the study of bias in AI, where researchers might employ the bias evaluation method to analyze language models for a wide range of intersectional biases by learning subspaces separating conceptual categories or evaluate the effectiveness of bias mitigation approaches, and for the social sciences, which might employ this approach to study the human biases encoded into machines. ## 2. Related Work The present research contributes new methods for measuring semantic norms and bias in contextualized word embeddings. This section reviews related work on the measurement of semantics and bias in static and contextualized word embeddings. ### Static and Contextualized Word Embeddings Word embeddings are dense, continuous-valued vector representations of words used to encode a statistical model of human language (Bordes and Kern, 2017). Static word embeddings such as those formed using the GloVe (Golovolov et al., 2017) and fastText (Koren et al., 2017) algorithms are trained on the co-occurrence statistics of words in a language corpus, and encode the semantic properties of language (Koren et al., 2017), such that algebraic operations on embedded representations can be used to solve analogical tasks (Koren et al., 2017). Static word embeddings are known to encode societal attitudes and implicit and explicit biases of the population which produced the linguistic data on which they are trained (Koren et al., 2017; Golovolov et al., 2017; Golovolov et al., 2017). While identifying and mitigating bias in word embeddings is a noteworthy area of study due to the propagation of these biases in downstream natural language processing (NLP) applications (Koren et al., 2017; Golovolov et al., 2017; Golovolov et al., 2017; Golovolov et al., 2017; Golov et al., 2017), the encoding of population-level human attitudes in word embeddings also allows them to be used as a statistical tool for studying bias, languages, societies, and historical events (Koren et al., 2017; Golovolov et al., 2017; Golov et al., 2017; Golov et al., 2017; Golov et al., 2017; Golov et al., 2017; Golov et al., 2017). Despite their widespread usefulness for both computer science and the social sciences, static word embeddings have a central limitation, in that they collapse all of the senses of a word into a single vector representation. Contextualizing language models such as ELMo (Luo et al., 2017), BERT (Koren et al., 2017), and the GPT family of models (Koren et al., 2017; Golov et al., 2017; Golov et al., 2017; Golov et al., 2017) overcome this limitation by forming contextualized word embeddings, which incorporate information from surrounding words, such that the representation of a word depends on the context in which it appears. Therefore, while polysemes and homographs (words with the same spelling but different meaning) share the same representation in static word embeddings, contextualized word embeddings capture semantic differences based on context and alter a word's representation to reflect the sense in which it is used (Koren et al., 2017). However, polysemes and homographs are not the only words which change representation as they are processed in a contextualizing language model. Ethayarajh (Ethayarajh, 2017) shows that stopwords and articles are some of the most context-sensitive words in models like GPT-2, while Wolfe and Caliskan (Wolfe et al., 2017) demonstrate that contextualized word embeddings from seven language models become more semantically similar to the words that occur around them as they are processed in the model. This suggests that a test of social attitudes and biases encoded in language models might be designed based on the effect a word has on the embedded representations of the words which occur around it. However, contextualized word embeddings have their own limitation: anisotropy, or directional uniformity (Ethayh, 2017). Because language models are trained on a wide variety of objectives such as next-word prediction (Wolfe et al., 2017) and masked-word prediction (Kurita et al., 2019), the geometric structure of contextualized word embeddings may reflect properties useful to performing a model's pretraining task, but detrimental for assessing embedding semantics using methods such as cosine similarity (Wolfe et al., 2017). Recent research proposes methods such as the removal of non-semantic high-magnitude directions or the z-scaling of embeddings to expose semantic information in contextualized word embeddings (Wang et al., 2018; Wang et al., 2018); however, such methods necessitate the loss of information, even if that information is syntactic or otherwise non-semantic in nature. The present research introduces a method for assessing both semantic properties and bias in contextualized word embeddings with no postprocessing or loss of information. ### Bias in Word Embeddings Principled and generalizable evaluation of bias in word embeddings is grounded in cognitive psychology literature (Han et al., 2016; Li et al., 2017). These foundations, and the word embedding bias tests arising from them, are reviewed below. #### 2.2.1. Psychological Foundations for Measuring Machine Bias Psychologists quantify the emotional association of a visual or linguistic stimulus using three primary dimensions of affect (Kurita et al., 2019): valence (pleasantness vs. unpleasantness), arousal (excitement vs. calm), and dominance (control vs. subordination) (Steintein et al., 2017; Steintein et al., 2018; Wang et al., 2018). Social psychologists have compiled large lexica of affective norms, which reflect widely shared attitudes of human subjects who rate words based on valence, arousal, and dominance (Brock et al., 2016; Brock et al., 2016; Brock et al., 2016; Brock et al., 2016). A concrete example of a valence norm is that the word "vomit" triggers an unpleasant feeling for most English language speakers, while the word "love" triggers a pleasant feeling. Valence is the principal dimension of affect that exhibits the strongest affective signal in language (Wolfe et al., 2017). Psychologists use valence associations to evaluate biased attitudes towards social groups and concepts. Greenwald et al. (Greenwald et al., 2017) introduce the Implicit Association Test (IAT), which demonstrated the presence of implicit racial bias favoring European Americans over African Americans by showing that human subjects more readily paired European American names with pleasant words than they did African American names. The IAT inspired the design of the Word Embedding Association Test (WEAT) of Caliskan et al. (Caliskan et al., 2017), which demonstrated that a similar phenomenon occurs in static word embeddings, wherein names of European Americans are more similar to pleasant words based on measurements of cosine similarity than are names of African Americans. In addition to its empirical grounding in social psychology, the WEAT offers theoretical benefits arising from its design as a statistical test: first, the WEAT returns an effect size, Cohen's \(d\)(Cohen, 1977). Cohen's \(d\) is defined such that \(0.20\) is small, \(0.50\) is medium, and \(0.80\) is large, and in most cases \(d\) ranges between \(-2\) and \(2\); second, the WEAT returns a \(p\)-value based on a permutation test (Caliskan et al., 2017). These qualities make the WEAT a useful method for interpreting the magnitude and statistical significance of bias in embedded representations. While Caliskan et al. (Caliskan et al., 2017) define the WEAT using cosine similarity, there is no inherent reason that cosine similarity should be the only measurement available for assessing the association of an embedding with some target. For example, Kurita et al. (Kurita et al., 2019) develop a version of the WEAT which uses the masked word prediction objective of BERT to measure differential biases in masked language models. The WEAT has been adapted previously to study biases in contextualized word embeddings and sentence embeddings formed by language models. May et al. (May et al., 2018) apply the WEAT to measure sentence-level biases in language models such as ELMo and BERT, while Tan and Celis (Tan and Celis, 2018) use a combination of the WEAT as well as method of May et al. (May et al., 2018) to measure biases in a variety of language models such as BERT and GPT-2 (Tan and Celis, 2018). Guo and Caliskan (Guo and Caliskan, 2018) model contextualization as a random effect to measure the overall magnitude of bias across contexts in contextualizing language models. Wolfe and Caliskan (Wolfe et al., 2017) show that biases exist in the contextualized word embeddings formed by GPT-2 after non-semantic principal components are removed from the embeddings. #### 2.2.2. Valence-Based Intrinsic Evaluation of Word Embeddings Prior work shows that the correspondence between the human-rated valence of a word and the valence association of its static (Wolfe et al., 2017) or contextualized (Wolfe et al., 2017) word embedding can be used to evaluate the intrinsic quality of embedding spaces, and to identify when the geometry of an embedding space interferes with the measurement of semantics using techniques based on cosine similarity (Wolfe et al., 2017). Wolfe and Caliskan (Wolfe et al., 2017) find that contextualized word embeddings produced by language models most strongly encode the valence dimension of affect, and that human ratings of dominance also correlate moderately with dominance associations in the contextualized embedding space; arousal, on the other hand, correlates only weakly, with correlations \(\rho<0.30\). This research measures bias in language models using the valence dimension of affect, which corresponds to evaluating biased attitudes towards concepts and social groups. ### Subspace Projection for Bias Detection and Mitigation Another strand of prior work measures bias in word embeddings by identifying a bias subspace. Using 10 pairs of female-male difference vectors such as "woman" - "man" and "girl" - "boy," Bolukbasi et al. (Bulkbaasi et al., 2017) capture a "gender dimension" in static word embeddings by applying PCA to the vector differences and finding the component that best accounts for the variance (Miy et al., 2018). Obtaining the projection of other embedded representations of words with this bias subspace yields a metric for quantifying gender bias. Bolukbasi et al. (Bulkbaasi et al., 2017) demonstrate that traditionally masculine occupations such as doctor and pilot project towards masculinity, while traditionally feminine occupations such as nurse and librarian project towards femininity on the gender subspace. Similarly, using difference vectors such as "rich"- "poor, Kozlowski et al. (2015) find the "affluence dimension" in a study of social class in diachronic (chronologically ordered) static word embeddings. Subspace projection methods have also been adapted to contextualized word embeddings. Zhao et al. (2018) measure and mitigate biases in ELMo's contextualized word embeddings, and show that a coreference resolution system in ELMo inherits its gender bias. Liang et al. (2019) use a variation of a subspace projection method to measure and mitigate biases in ELMo and BERT's sentence representations. Ravfogel et al. (2019) use an iterative variation of a subspace projection method to mitigate biases in contextualized word embeddings, and Basta et al. (2019) apply the subspace projection as well as the method of Gonen and Goldberg (2019) to measure gender bias in ELMo embeddings. When subspace projection approaches are used to develop techniques for bias mitigation, the success of these interventions is sometimes evaluated using the WEAT (Zhou et al., 2019). The present research builds upon prior work by introducing a machine learning method to learn a semantic subspace in the highly contextual and anisotropic upper layers of language models, and introducing a principled statistical test for measuring biases, in an intersectional setting, arising from contextualization in language models. ## 3. Data The present research examines semantics and bias in five language models based on the transformer architecture of Vaswani et al. (2017), which employs a self-attention mechanism to allow word representations to draw information from the representations in the context around them. Models are selected to represent the state-of-the-art for three widely used transformer architectures: decoder-only causal language models; autoencoders; and encoder-decoder models. ### Language Models **GPT-Neo** is an open source replication of GPT-3 (Vaswani et al., 2017), trained on the next-word prediction objective and employ masked self-attention such that the current token only has access to information from words which precede it in a sentence. GPT-Neo is trained on the Pile, an 825 GB dataset of English text composed of 22 diverse and high quality sub-datasets (Zhou et al., 2019). Models trained on the Pile have been shown to outperform models trained on both raw and filtered versions of the Common Crawl on many benchmarks and downstream evaluations (Zhou et al., 2019). Prior work finds that GPT-Neo most strongly encodes human judgments of valence compared to six other language models, including GPT-2 (Zhou et al., 2019), T5, and BERT (Zhu et al., 2019). This research studies the contextualized word embeddings generated by the 24-layer, 1.3 billion parameter version of GPT-Neo (Bordes et al., 2019). While GPT-Neo is one of the largest and empirically best-performing language models available open source (Zhou et al., 2019), it is still much smaller than the largest version of GPT-3, which has 175 billion parameters (Vaswani et al., 2017). **XLNet** is a causal language model that learns bidirectional contexts by permuting the factorization order of text input (Zhu et al., 2019). XLNet is trained on five corpora: English Wikipedia, BookCorpus (Zhu et al., 2019), Giga5 (Giga et al., 2019), filtered versions of ClueWeb 2012-B (Gia et al., 2019), and the Common Crawl corpus (Gia et al., 2019). The 12-layer base-cased version is used in this research. **RoBERTa** is an optimized version of the bidirectional "BERT" autoencoder architecture of Devlin et al. (2017), trained on masked language modeling (prediction of a hidden word) with dynamic masking to prevent memorization of the training data (Zhou et al., 2019). RoBERTa is trained on five corpora: English Wikipedia, BookCorpus (Zhu et al., 2019), a curated subset of CommonCrawl News (Zhou et al., 2019), OpenWebText (Zhu et al., 2019), and Stories (Sandel, 2019). The 12-layer base version is studied in the present research. **ALBERT** is a parameter-reduced version of the BERT architecture which introduces factorized embedding parameterization, cross-layer parameter sharing, and inter-sentence coherence loss, and outperforms BERT and RoBERTa on a variety of NLP benchmark evaluations (Sandel, 2019). ALBERT trains on English Wikipedia and the BookCorpus (Zhu et al., 2019). This research uses the 12-layer V-2 base version of ALBERT, which is also trained on additional corpora used to train RoBERTa and XLNet (Sandel, 2019). **T5** is an encoder-decoder transformer model that takes text as input and produces text as output, and is trained on a variety of supervised and unsupervised NLP objectives (Wang et al., 2019). T5 is trained on the Colossal Cleaned Common Crawl (C4), a large filtered version of the Common Crawl (C4). The present research uses the 12 encoder layers of the base version of T5. All models used are the PyTorch implementations available via the Transformers library of Wolf et al. (2017). ### Valence Stimuli As detailed in section 4.1, the present research learns a valence dimension by training a support vector classifier (SVC) to form a maximum margin subspace between groups of pleasant and unpleasant words. In keeping with prior research in contextualized word embeddings (Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019), the groups of pleasant and unpleasant words used to measure valence are the stimuli used to measure social biases in the IAT (Zhu et al., 2019) and the WEAT (Gia et al., 2019). Pleasant vs. Unpleasant stimuli obtained from Caliskan et al. (Gia et al., 2019) to learn an affective valence dimension are included below. **Pleasant**: carcass, freedom, health, love, peace, cheer, friend, heaven, loyal, pleasure, diamond, gentle, honest, lucky, rainbow, diploma, gift, honor, miracle, sunrise, family, happy, laughter, paradise, vacation **Unpleasant**: abuse, crash, filth, murder, sickness, accident, death, grief, poison, stink, assault, disaster, hatred, pollute, tragedy, divorce, jail, poverty, ugly, cancer, kill, rotten, vomit, agony, prison ### Social Biases and Categories The present research designs a method for language models to study the effects of multiple biases interacting in a single string input in an intersectional setting. This requires the identification of a variety of societal biases which may overlap and compound each other in contextualizing language models, as they are known to human society. Drawing on prior work in psychology and AI bias (Zhu et al., 2019; Zhu et al., 2019), 12 western societal biases are identified for study in this work. These include biases based on age, weight, height, intelligence, education, literacy, social class, race, sexual orientation, religion, gender, and sex. For each of these 12 social biases, two categories are selected such that bias arising from the difference in these categories can be measured. For example, the categories "tall" and "short" are selected to measure bias based on height. Because word frequency can affect the representational quality of a word in a contextualizing language model (Yang et al., 2017), categories are selected such that they have relatively balanced frequency based on human usage as measured using Google ngrams (Krishna et al., 2017). For example, though "educated" and "unedacted" could be used to quantify biases based on educational attainment, "unedacted" is used roughly 10 times less frequently than "educated" in ngrams (Krishna et al., 2017). To balance the frequency of the words, "ignorant" is selected as the second category in the pair with "educated" Table 1 shows the social biases evaluated and their corresponding categories. Column \(r\) describes the term frequency ratio of the first category to the second category. Although the intention is to represent each bias with category terms that have similar rates of frequency, the categories for gender bias are highly imbalanced. For the lack of a more suitable alternative, "cisgender" remains one of the gender categories, despite its imbalance with the more commonly used term "transgender." Many of the biases examined in this research could be represented with more than two categories. There are, for example, more religions, sexual orientations, and genders than those captured here. This research introduces a new method and demonstrates that it captures these well-studied social biases. The method generalizes beyond the categories defined herein. ## 4. Approach The present research describes a new method for measuring biases based on valence in contextualized word embeddings. This involves first learning an affective dimension in the contextualized embedding space, and then measuring bias based on the projection product of a contextualized word embedding with the learned dimension. Figure 1 summarizes the approach. ### Learning an Affective Dimension While recent work shows that the semantic properties of contextualized word embeddings, including valence (Yang et al., 2017), can be isolated by removing top principal components, these methods have the significant drawback of postprocessing the embeddings, and removing information from the model's representations. To mitigate this constraint, the present research proposes a method which requires no postprocessing of the embedding space, but instead learns a property of the space against which contextualized representations can be measured. The valence direction is learned in the contextualized embedding space by training an SVC with a linear kernel given the high dimensionality of the space. For valence, the SVC is trained to classify contextualized representations of 25 pleasant words and 25 unpleasant words such that the separating subspace between the pleasant words and the unpleasant words maximizes the distance between them. The coefficients of the separating subspace are extracted, and used as a valence dimension of the contextualized embedding space, respectively. The words used to learn a separating subspace are input to the model in the decontextualized setting, _i.e.,_ with no surrounding context. Each decontextualized word is preceded by the <BOS> (Beginning of Sequence) token, extracted from the model tokenizer. ### Contextualizations of "Person" While prior work evaluates word embedding bias on the word level (Krishna et al., 2017; Yang et al., 2017) or the sentence level (Yang et al., 2017), this research measures biases resulting from the contextualization of the word "person," such \begin{table} \begin{tabular}{l l l|l l l} \hline \hline **Social Bias** & **Categories** & \(r\) & **Social Bias** & **Categories** & \(r\) \\ \hline age & young, old & 0.59 & social class & affluent, destitute & 0.55 \\ weight & thin, fat & 1.40 & race & white, black & 1.30 \\ height & tall, short & 0.12 & sexual orientation & heterosexual, homosexual & 0.64 \\ intelligence & smart, stupid & 0.98 & religion & christian, muslim & 14.36 \\ education & educated, ignorant & 1.70 & gender & cisgender, transgender & 0.05 \\ literacy & literate, illiterate & 0.85 & sex & male, female & 0.98 \\ \hline \hline \end{tabular} \end{table} Table 1. Category terms chosen from related work in AI bias and social psychology to represent human social biases. The term \(r\) denotes the frequency ratio of the first category to the second according to Google ngrams corpus of English books (Krishna et al., 2017). Figure 1. A support vector classifier is used to learn the valence dimension in the upper layers of contextualizing language models. Biases related to pleasantness are evaluated by taking projection product of the contextualized representation of “person” at the end of a context with the learned valence dimension. that it is altered along the valence dimension. More concretely, the question under consideration is whether the word person becomes more pleasant or unpleasant when it occurs in a sentence with a word like "transgender" or "cisgender" (_e.g.,_ "a transgender person"). Because causal language models employ masked self-attention such that the current word only has access to the information of the words which precede it, this research positions the word person at the end of the sentence, such that information can be drawn from all other words in the sentence. Models such as BERT and T5, which employ bidirectional self-attention, are also able to retrieve information from any word in a context given this format. ### Measuring Valence Associations The valence association of a word's embedded representation is measured by its orthogonal scalar projection onto the learned affective dimension. For a vector \(v\), and subspace \(U\) defined by \(n\) orthogonal vectors \(u_{1},u_{2},\ldots,u_{n}\), the scalar projection of \(v\) onto \(U\) is computed as follows: \[\mathrm{S}(v,U)=\sum_{i=1}^{n}\frac{(v.u_{i})}{(u_{i}.u_{i})} \tag{1}\] where \((a.b)\) refers to the dot product of \(a\) and \(b\). The valence dimension is learned such that positive values of \(s\) correspond to greater association with pleasantness (_i.e.,_ high-valence words will project onto the positive side of the valence dimension), while negative values of \(s\) correspond to greater association with unpleasantness (projection onto the negative side of the valence dimension). ### Quantifying Differential Bias Using the SC-WEAT The WEAT and the SC-WEAT measure biased associations and return two values: an effect size, Cohen's \(d\), and a \(p\)-value based on a permutation test (Kalikan et al., 2012). Caliskan et al. (2012) define the WEAT as using cosine similarity as a means of assessing the similarity between two embedded representations, as this distance metric reflects a widespread paradigm for measuring similarity in static word embeddings (Kalikan et al., 2012; Yang et al., 2013). However, the WEAT is a statistical method for assessing differential similarity of two sets of targets (_e.g.,_ two social groups) with two sets of attributes (_e.g.,_ pleasantness and unpleasantness), and is not necessarily dependent upon cosine similarity as a distance metric when a more appropriate measure is validated for an embedding space. For the present research, a WEAT is defined to capture the differential bias of two words in contextualizing language models, based on their projection product with the valence dimension. The formula of the SC-WEAT is readily adaptable for this purpose, as it measures the differential similarity of a single target vector with two attribute groups: \[\frac{\mathrm{mean}_{a\in A}\mathrm{S}(a,U)-\mathrm{mean}_{b\in B}\mathrm{S} (b,U)}{\mathrm{std}_{\mathrm{dev}_{x\in A\cup B}\mathrm{S}(x,U)}} \tag{2}\] In this case, the learned affective dimension \(U\) (_i.e.,_ valence) is used as the target. To measure the differential bias for two words across contexts, the \(A\) attribute group is defined to include the embedded representations \(a\) of the word "person" in all of the sentences which include a certain attribute word, such as "transgender," and a \(B\) attribute group is defined to include a set of sentences which are identical to the \(A\) group, but with the target word replaced with an opposing category word, such as "cisgender," for which the differential bias effect size will be obtained. The bias measurement is defined as the difference in the mean projection product of the \(A\) group with the valence dimension and the \(B\) group with the valence dimension, divided by the joint standard deviation of projection products, commensurate with Cohen's \(d\). A p-value is obtained using the same permutation test as employed in the SC-WEAT (Kalikan et al., 2012). ## 5. Experiments In this section, details are provided for three different experiments and their results. Experiment 1 examines the utility of the learned valence dimension for capturing semantics in language models. Experiment 2 studies differential biases based on valence in language models. Experiment 3 examines the words most biased based on association with valence. ### Evaluating Learned Affective Dimensions Against Human Judgments of Semantics The utility of the learned dimension for representing valence in the contextualized word embedding space is assessed using the ValNorm method of Toney and Caliskan (Toney and Caliskan, 2011). ValNorm is an intrinsic evaluation task that obtains the correlation (Pearson's \(\rho\)) of a word's human valence rating in a valence lexicon with the SC-WEAT valence association of its embedded representation. Toney and Caliskan (Toney and Caliskan, 2011) employ three valence lexica in evaluating ValNorm, of which we select Belleza's lexicon (Belleza, 2013), a set of 399 words rated by human subjects based on pleasantness, which Wolfe and Caliskan (Wolfe and Caliskan, 2013) show is sensitive to the presence of non-semantic high-magnitude directions in language models. To show that the method is effective in the highly contextual upper layers of language models (Kalikan et al., 2012), a ValNorm score (Pearson's \(\rho\)) is obtained at every layer of the language model using the projection product with the valence subspace as a word's valence association in the model. ### Bias Evaluation Using SC-WEAT The categories derived from the 12 social biases considered in this research are placed into a sentence in the order shown in Table 1, _i.e.,_ "a young thin [...] female person." While maintaining the order of the biases, the context is altered such that every category occurs in a sentence with every other combination of categories, except for its own opposing category. This leads to a total of \(2^{12}\), or \(4,096\) contexts. Each category occurs in exactly half of these contexts, or \(2,048\) occurrences. The order of the social bias categories in the sentence template was chosen in an attempt to make the sentences sound more natural. For example, "thin female person" is used more frequently than "female thin person" (Kalikan et al., 2012). Ideally, one should generate all permutations of the social biases to eliminate the impact of word order on the bias captured by "person" at the end of the sentence. However, the total number of permutations in this experiment would have been about 2 trillion sentences which was beyond our computational capacity. Future work can investigate the impact of word order on bias computations. The two categories described in Table 1 are selected for each of the 12 biases examined in this research, with the first category (_e.g._, affluent) set as the \(A\) attribute, and the second category (_e.g._, destitute) set as the \(B\) attribute, such that a positive effect size reflects stereotype-congruent bias (_e.g._, affluent individuals are evaluated more positively than destitute individuals). For each category, the \(2,048\) sentence combinations in which the \(A\) attribute occurs are selected, and the embedded representation for the word "person" is obtained for each of these contexts. The same process is repeated for the \(B\) attribute, and the two sets of embeddings are used as input to the projection product SC-WEAT. To obtain the most contextual representation produced by a transformer, _i.e._, the representation most altered by the words in its context, the contextualized word embedding in the top (output) layer of the model is obtained, commensurate with prior research which finds that top layers of language models are the most contextual (Zhou et al., 2018; Wang et al., 2019). A bias effect size \(d\) and a \(p\)-value are obtained for each test. In total, five transformers of varying architectures and pretraining objectives are examined. ### Identifying the Strongest Biases Across Contexts A final experiment examines the most biased categories in GPT-Neo, the largest of the language models studied herein. We choose GPT-Neo because Nadeem et al. (Nadeem et al., 2017) observes that larger, better-performing language models are also more biased. Five historically disadvantage societal biases are selected for study: race, sex, religion, gender, and sexual orientation. The ten categories associated with these concepts are drawn from Table 1. Sentences are created with five categories present per sentence (_e.g._, a white female cisgender heterosexual Christian person). All possible permutations are generated using the ten categories in question, such that every category is seen in combination with every other category in every position in the sentence. The total number of permutation of phrases constructed in this manner is \(3,840\). The valence projection product is obtained for the embedded representation of the word "person" in every generated sentence. The characteristics of the top 10% most positively valenced and top 10% most negatively valenced Figure 2. Across five contextualizing language models, using a support vector classifier to learn the valence dimension improves VaNorm evaluation scores in the upper layers of the language model over comparable results obtained using cosine similarity, without the need for postprocessing of embeddings. This result suggests the robustness of the methods proposed in this research for capturing semantics across widely varying language modeling architectures and pretraining objectives. five-category sentences are examined. For these subsets of the generated sentences, the percentage of the time each category word occurs in each of the positions preceding "person" is quantified. ## 6. Results The evidence indicates that learning the valence is useful for detecting semantics and social biases in the contextual and anisotropic upper layers of language models. ### Evaluating the Learned Affective Dimension Across five state-of-the-art transformer language models with different architectures, tokenization algorithms, and training objectives, learning an affective dimension in the embedding space outperforms cosine similarity on the ValNorm intrinsic evaluation task with no postprocessing of the embeddings. The effect is especially noticeable in the highly contextual upper layers of these models, where non-semantic high-magnitude directions distort measurements of semantics based solely on cosine similarity. As shown in Figure 2, the ValNorm score (Pearson's \(\rho\)) drops to 0.56 in the top layer of GPT-Neo 1.3B when using cosine similarity, but stays high, at 0.81, when using the projection product. Figure 2 also shows that a similar effect occurs in all five of the language models studied in this research, indicating that this method allows for the measurement of human-interpretable semantics and bias in highly contextual and anisotropic embedding spaces. ### Measuring Differential Bias Based on Valence As shown in Table 2, the evidence suggests that language models encode consistent valence biases based on gender identity, sexual orientation, and social class signals in an intersectional context. A statistically significant positive effect size is obtained for the heterosexual vs. homosexual and cisgender vs. transgender test for all five of the models studied in this research. For ALBERT and RoBERTa, effect sizes are large (\(d=1.34\) and \(d=1.22\)) for the gender identity test; medium effect sizes are obtained for GPT-Neo (\(d=0.64\)) and T5 (\(d=0.61\)) for the sexual orientation test. Statistically significant valence bias effect sizes are also obtained for four language models for the affluent vs. destitute test. Bias effect sizes are medium (\(d>0.5\)) or large (\(d>0.8\)) in three of the five models. The large effect size for social class speaks to the presence of biases related to social class in language models, a relatively unexplored bias type in AI except for the work of Kozlowski et al. (Kozlowski et al., 2017) analyzing the meaning of class in static word embeddings. Figure 3 visualizes the difference in the mean projection onto the valence dimension for each of the 12 biases studied. Another noteworthy result is that three of five language models (ALBERT, GPT-Neo, and RoBERTa) differentially associate men with pleasantness over women. While effect sizes are small, this deviates from psychological research suggesting that women are evaluated as more pleasant than men. For example, while men are often associated with aggression and violence, women are associated with more communal attributes such as warmth. This is known as the "women-are-wonderful-effect" (Krishkin, 2001; Krishkin et al., 2002; Krishkin et al., 2003; Krishkin et al., 2004). It is possible that women are portrayed negatively in the training corpora of these language models, causing men to be more differentially pleasant. This possibility is supported by the recent research of Birhane et al. (2018), who find that corpora used for training language-and-image models contain misogynistic and toxically stereotypical depictions of women. The association of women with pleasantness is, however, observed in T5. Results across five language models suggest the utility of the method proposed in this research for capturing widespread societal biases in contextualized word embeddings. Bias effect sizes are stereotype-congruent in at least 9 of 12 tests for three of the five models assessed, and in every model at least half of the bias tests yield positive effect sizes. Moreover, the results presented here further affirm the findings of Nadeem et al. (Nadeem et al., 2019), who find that larger language models are both better at language modeling and more biased based on a downstream evaluation of bias. The present research observes that GPT-Neo, the largest of the language models studied herein and previously observed to outperform other language models on both intrinsic and downstream evaluations of semantic quality (Kozlowski et al., 2017; Kozlowski et al., 2017), has a statistically significant bias effect size of at least 0.50 for 6 of the 12 bias tests, the most of any of the models studied herein. ### Identifying the Strongest Affective Biases in a Language Model As shown in Figure 4, retrieving the top 10% of the most pleasant contexts shows that heterosexuality and cisgender identity are over-represented in the most positively valenced phrases in GPT-Neo, with more than 93% of the most pleasant phrases containing the word "heterosexual," and more than 70% of the most pleasant phrases containing the word "cigender." The word "Christian" is also positively valenced, with more than 65% of the most pleasant phrases containing the word. On the other hand, the word "homosexual" occurs in the most positively valenced phrases less than 7% of the time, and retrieving the top 10% of the most unpleasant contexts shows that homosexuality and transgender identity are among the most negatively valenced words assessed, with more than 99% of the most negative phrases containing the word "homosexual," and more than 93% of the most negative phrases containing the word "transgender." None of the eight other words assessed occurs in more than 55% of the most negative phrases. The word "heterosexual" occurs less than 1% of the time in the most negatively valenced phrases. The word "Muslim" occurs more frequently in the most negatively valenced phrases than it does in the most positively valenced phrases, as does the word "white." The words "male" and "female" occur roughly equally in the most positively and negatively valenced phrases. Both figures 3 and 4 show that a "white" person is slightly more negatively valenced than a "black" person in GPT-Neo (\(d=-0.12\) in Table 2). In representational models, such as multi-modal vision-language models, the default unmarked person in English is associated with "white" (Yin et al., 2017), as a result the noun the person does not typically get marked with the identity descriptor of "white" (Yin et al., 2018). The effect of markedness for a "black" person might potentially be causing the stereotype incongruent result. The results of this method, which does not require the definition of binary groups for differential measurement, are mostly consistent with the results obtained from the differential statistical test introduced in the second experiment. This suggests the utility of the projection method for measuring biases in contextualized word embeddings even when an opposing category does not exist such that a differential bias test can be performed. ## 7. Discussion The contributions of the present research are threefold: a method for measuring semantics in contextual and anisotropic embedding spaces; a novel and generalizable differential bias measurement which takes into account the contextualization property of all language models, and which returns an effect size indicating magnitude and a \(p\)-value measuring statistical significance; and a means for quantifying biases in contextualized word embeddings in an intersectional setting. By analyzing sexual orientation, social class, and gender bias without having to use gender binary, our approach is more inclusive compared to various previous analyses. The findings of this work indicate that the biases demonstrating low regard based on sexual orientation observed by Sheng et al. (2018) in the text output of language models can be traced back to the contextualized embedding space, where the occurrence of the words "homosexual" and "transgender" lead to greater association with unpleasantness and negative attitudes. Future work might use the method put forth in this research to further examine the link between bias in contextualized word embeddings and the propagation of that bias to model objectives such as language generation or other downstream NLP tasks such as sentiment analysis, machine translation (Kang et al., 2017), or consequential decision making. Many of the results reported in this work suggest that biases of contextualization have consistent indirect impacts on the representation of societally disadvantaged people. For example, the results indicate that biases related to education and social class exist in many language models. These categories speak directly to \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{SC-WEAT Differential Valence Association} \\ \hline \multirow{2}{*}{Bias Test} & \multicolumn{2}{c|}{ALBERT} & \multicolumn{2}{c|}{GPT-Neo} & \multicolumn{2}{c|}{RoBERTa} & \multicolumn{2}{c|}{T5} & \multicolumn{2}{c|}{XLNet} \\ \cline{2-10} & \(d\) & \(p<\) & \(d\) & \(p<\) & \(d\) & \(p<\) & \(d\) & \(p<\) & \(d\) & \(p<\) \\ \hline young vs. old & \(-0.68\) & \(10^{-30}\) & \(0.50\) & \(10^{-30}\) & \(0.33\) & \(10^{-30}\) & \(0.30\) & \(10^{-30}\) & \(-0.67\) & \(10^{-30}\) \\ thin vs. fat & \(-0.02\) & \(n.s.\) & \(0.56\) & \(10^{-30}\) & \(0.20\) & \(10^{-9}\) & \(-0.13\) & \(10^{-4}\) & \(-0.06\) & \(.05\) \\ tall vs. short & \(0.22\) & \(10^{-12}\) & \(-0.06\) & \(0.5\) & \(0.20\) & \(10^{-9}\) & \(-0.27\) & \(10^{-16}\) & \(0.86\) & \(10^{-30}\) \\ smart vs. stupid & \(0.02\) & \(n.s.\) & \(0.56\) & \(10^{-30}\) & \(0.82\) & \(10^{-30}\) & \(-0.01\) & \(n.s.\) & \(0.48\) & \(10^{-30}\) \\ educated vs. ignorant & \(0.32\) & \(10^{-30}\) & \(0.92\) & \(10^{-30}\) & \(0.81\) & \(10^{-30}\) & \(-0.22\) & \(10^{-12}\) & \(-0.04\) & \(n.s.\) \\ literate vs. illiterate & \(-0.18\) & \(10^{-10}\) & \(0.17\) & \(10^{-9}\) & \(-0.05\) & \(0.01\) & \(n.s.\) & \(0.11\) & \(10^{-4}\) \\ affluent vs. destitute & \(0.67\) & \(10^{-30}\) & \(1.00\) & \(10^{-30}\) & \(0.12\) & \(10^{-3}\) & \(-0.03\) & \(n.s.\) & \(0.52\) & \(10^{-30}\) \\ white vs. black & \(0.35\) & \(10^{-30}\) & \(-0.12\) & \(10^{-3}\) & \(0.14\) & \(10^{-5}\) & \(0.31\) & \(10^{-30}\) & \(-0.08\) & \(.01\) \\ heterosexual vs. homosexual & \(0.35\) & \(10^{-30}\) & \(0.64\) & \(10^{-30}\) & \(0.12\) & \(10^{-4}\) & \(0.61\) & \(10^{-30}\) & \(0.40\) & \(10^{-30}\) \\ christian vs. muslim & \(0.27\) & \(10^{-30}\) & \(-0.15\) & \(10^{-6}\) & \(-0.63\) & \(10^{-30}\) & \(0.01\) & \(n.s.\) & \(-0.16\) & \(10^{-6}\) \\ cisgender vs. transgender & \(1.34\) & \(10^{-30}\) & \(0.24\) & \(10^{-14}\) & \(1.22\) & \(10^{-30}\) & \(0.09\) & \(.01\) & \(0.12\) & \(10^{-4}\) \\ male vs. female & \(0.27\) & \(10^{-30}\) & \(0.10\) & \(10^{-3}\) & \(0.10\) & \(10^{-3}\) & \(-0.93\) & \(10^{-30}\) & \(0.01\) & \(n.s.\) \\ \hline \end{tabular} \end{table} Table 2. Across five language model architectures, the most severe biases occur for sexual orientation and gender identity, with positive effect sizes obtained from all five models assessed. GPT-Neo includes six effect sizes of \(.5\) or greater, the largest number of any language model, corresponding to the observation of Nadeem et al. (2018) that larger, better-performing language models are also more biased. Figure 3. Differences in the mean valence of the word “person” when it co-occurs with the above categories in \(4,096\) phrases. Length of green lines represents the magnitude of differential valence for each pair of categories. Red circles indicate stereotypically higher-valence categories, while red squares represent stereotypically lower-valence categories. the opportunities and outcomes afforded over which an individual typically has little control. Moreover, many of the biases observed in this work are likely to interact with and intensify other biases. For example, a bias based on sex (male vs. female) is observed in most language models, such that men are more associated with pleasantness than women. However, biases based on weight, and age are also observed, such that the word person is more pleasant when it occurs with "thin," and "young." While such biases may affect any context in which they are observed, they are likely to have greater impact on the representation of women in language models, as women are more likely at a societal level to be described with regard to their physical appearances, and biases related to age are often directed more strongly toward women, and at younger ages than men (Kumar et al., 2019). The consequence of the contextualization effect observed in this research is that representations of people more likely to be described in a biased manner will become even more negatively valenced in the model than the categorical biases indicate when considered individually. The methods described in this research have ramifications not only for studies of bias in AI, but also for the social sciences, as social scientists may use the computational approaches described in this research to quantify properties of human language and culture, without the problem of meaning being collapsed into a single vector representation, as occurs in static word embeddings. While norms and biases based on valence are studied in this work to ground a new method in prior psychological research, a maximum margin subspace could be learned to represent many other semantic properties; for example, future research might learn a political spectrum subspace to study biases beyond those observable based on valence. Finally, while this work assesses in-context biases, it evaluates their impact in individual or two differential categories. However, the method can be trivially extended such that intersectional identities can be assessed by observing biases based on word bigrams, trigrams, or longer descriptive sequences. This is facilitated by using the contextualized representation of the word "person" as the target embedding for all bias measurements, rather than attempting to directly measure the embedded representations of bias-inducing words or categories. ### Limitations and Future Work The results reported for experiment 5.2 are obtained by generating combinations of categories representing 12 social biases. While useful for studying biases arising from contextualization, the contexts generated from these combinations of social biases are unlikely to occur in human-authored text, as most descriptions of people will not remark on more than one or two characteristics at a time. Future work might explore the use of this method in more natural contexts, perhaps similar to the approach used by Wolfe and Caliskan (Wolfe and Caliskan, 2018), who study racial and gender biases related to names by interchanging names in otherwise identical contexts derived from human-authored sources. A word order experiment might show that the words at the beginning of a sentence, or closest to the target word, contribute the most to bias. ## 8. Conclusion This research introduces a novel and effective machine learning approach to measuring valence associations in contextualized word embeddings. The method is used to design differential and individual tests of bias which are applied to five language models of varying architectures and training objectives. Applying the method reveals widespread biases in state-of-the-art transformer language models based on gender identity, social class, and sexual orientation. ###### Acknowledgements. This work is supported by the U.S. National Institute of Standards and Technology (NIST) Grant 60NANB20D212T. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect those of NIST.
2303.11293
Advancing Network Securing Strategies with Network Algorithms for Integrated Air Defense System (IADS) Missile Batteries
Recently, the Integrated Air Defense System (IADS) has become vital for the defense system as the military defense system is vital for national security. Placing Integrated Air Defense System batteries among locations to protect locations assets is a crucial problem because optimal solutions are needed for interceptor missiles to intercept attacker missiles for maximizing protection of assets across locations or places. In this research, the procedures of using network algorithms along with developing several network algorithms are going to be demonstrated to develop a model for sequential development of seven network securing strategies of placing Surface to Air Missile (SAM) batteries to maximize the protection of assets across locations (based on given asset values) by generating optimal solutions through computation to destroy maximum attacker missiles by using minimum interceptor missiles with given intercept probability. This network securing strategies can be implemented not only for Integrated Air Defense System (IADS) planning but also Counter Air (CA) planning as Integrated Air Defense System (IADS) is conducted with defensive counter air supported by attack operations in offensive counter air.
Rakib Hassan Pran
2023-03-13T23:54:55Z
http://arxiv.org/abs/2303.11293v2
Advancing Network Securing Strategies with Network Algorithms for Integrated Air Defense System (IADS) Missile Batteries ###### Abstract Recently, the Integrated Air Defense System (IADS) has become vital for the defense system as the military defense system is vital for national security. Placing Integrated Air Defense System batteries among locations to protect locations assets is a crucial problem because optimal solutions are needed for interceptor missiles to intercept attacker missiles for maximizing protection of assets across locations or places. In this research, the procedures of using network algorithms along with developing several network algorithms are going to be demonstrated to develop a model for sequential development of seven network securing strategies of placing Surface to Air Missile (SAM) batteries to maximize the protection of assets across locations (based on given asset values) by generating optimal solutions through computation to destroy maximum attacker missiles by using minimum interceptor missiles with given intercept probability. This network securing strategies can be implemented not only for Integrated Air Defense System (IADS) planning but also Counter Air (CA) planning as Integrated Air Defense System (IADS) is conducted with defensive counter air supported by attack operations in offensive counter air. Integrated Air Defense System (IADS), network algorithms, network clustering, network centralities, small worldness, network data generation, network analysis, big data computation, regression analysis ## I Introduction Integrated Air Defense System (IADS) is an air defense system which is the aggregation of Service/functional component and agency Air Missile Defense (AMD) systems consist of sensors, weapons, C2, communications, intelligence systems and personnel operating in a theater or Joint Operations Area (JOA) [1][2]. Generally, IADS is established by Area Air Defense Commander (AADC) and, a theater AMD system in IADS typically depends on support and enabling functions from national assets and systems which are not controlled by the JFC (Joint Force Commander) [1]. As part of IADS functionality, IADS needs to provide protection of a country's assets by placing IADS missile batteries in optimal locations across the country [2]. An IADS missile battery is usually an unit of AMD which can be manually portable such as tactical air defense weapons systems or unportable such as emplaced IADS missile batteries to protect population over places which are not mobile and readily detectable based on the functionality, structure and attribute of IADS missile battery [2]. Placing an IADS missile battery over locations is a critical problem as limited IADS missile batteries are intended to protect assets over a large number of locations by considering missile ranges of IADS missile batteries for asset values over locations [2]. Locations can be represented in a network as locations are connected through real world routes or paths and on the other hand, locations can also be represented as a network where edges represent euclidean distances among locations. But in case of missile range, euclidean distances among locations need to be considered as edges for representing a location network [2]. In case of placing IADS missile batteries in a network, optimal location nodes need to be found, for that, a huge number of network algorithms existed in literature such as network algorithms based on the sum of the fraction of all-pairs shortest paths [3][4], the reciprocal of the average shortest path distance [5][6], the structure of incoming links [7], the centrality of neighbors [8], the sum of the fraction of all-pairs shortest paths [9], etc. In this research, a model for sequential development of seven strategies with the use of six network algorithms has been introduced. These seven strategies have been applied to generated weighted edge location networks with assigned asset values. These weighted edge location networks with generated assigned values are basically generated by using Watts Strogatz graph algorithm [10] which is based on small worldness [11] computed with network clustering coefficient algorithm [12][13]. The results that came from after applying seven strategies to generated location networks have been analyzed with ordinary least square linear regression analysis [14] at the end. ## II Methodology and Computational Experiment For developing sequential model of seven network securing strategies, six network algorithms have been selected to measure each node of the location networks, where network algorithm_1 is based on the sum of the fraction of all-pairs shortest paths [3][4], network algorithm_2 is based on the reciprocal of the average shortest path distance [5][6], network algorithm_3 is a ranking of nodes in graph based on the structure of incoming links [7], network algorithm_4 is based on the centrality of neighbors [8], network algorithm_5 is based on the sum of the fraction of all-pairs shortest paths [9] similar to network algorithm_1, and network algorithm_6 is measuring a node as the fraction of nodes that it is connected to. These network algorithms have been used to develop a sequential developing model of seven network securing strategies in the section of IV named as "STRATEGIES". To generate weighted edge location networks with assigned asset values, Watts Strogatz small world network generation [10][11] algorithm has been selected where assigned asset values (see Figure 4) are following power law distribution [2]. Besides considering Euclidean distances, it is assumed that to hit each location's attacker missile, the interceptor missiles have different flying altitudes and different trajectory paths which doesn't depend on Euclidean distance because of geographical attributes such as hills, mountains, etc. For that reason, the trajectory path distance of an interceptor missile has been considered as range for that interceptor missile. Because of considering different flying altitudes' trajectory paths as distances among nodes, generated location networks don't follow Pythagorean theorem or Euclidean theorem [27][28]. As a scale of small worldness [11], five types of small worldness have been selected for generating 20 networks of 50 location nodes with different diameters where small worldness is close to zero means more to small world characteristics. By applying seven strategies upon previously generated 20 location networks with 50 nodes, total unprotected asset value percentages (Worst Case Scenarios) have been computed for each strategy with 98% of IADS missile battey's interceptor missile's interceptor probability [15]. To analyze the result that came from applying seven strategies upon generated location networks, ordinary least square linear regression analysis [14] has been used for analyzing relations among sum of all IADS missile ranges, network diameters of generated location networks and unprotected asset value percentages for optimal strategy of each generated network. Besides that, ordinary least square linear regression analysis [14] has also been used for analyzing relations among sum of all IADS missile ranges, network diameters of generated location networks and network diameters of generated location networks and unprotected asset value percentages for optimal strategy of each generated network. Besides that, ordinary least square linear regression analysis [14] has also been used for analyzing relations among sum of all IADS missile ranges, network diameters of generated location networks and small worldness for optimal strategy of each generated network. ## III Computational Instruments Python 3 Google Compute Engine [16] has been used where System RAM is 12.7 GB and Disk is 107.7 GB. Code has been written in Python language [17] and Packages are: Google.colab: 0.0.1a2, Networkx: 3.0 [18], Pandas: 1.3.5 [19], Numpy: 1.21.6 [20], Matplotlib: 3.5.3 [21], Scipy: 1.10.0 [22], Array, Statsmodels.api: 0.13.5 [23], Sklearn: 1.2.1 [24]. Documentation done in Google Docs [29]. ## IV Strategies In this section, seven network securing strategies have been developed with six network algorithms which have been introduced in the Methodology section. ### Strategy_1: Step_1. Considering weighted edge network with asset values for placing IADS's batteries Step_2. Computing network_algorithm_1 Step_3. Putting IADS missile battery with maximum interceptor missile range on the place of first maximum network_algorithm_1 measure and if second IADS missile battery exists with with maximum interceptor missile range or less than interceptor missile range then, putting the second IADS missile battery on the place of the next maximum network_algorithm_1 measure where distance from the node of first maximum network_algorithm_1 measure is more than maximum IADS missile battery interceptor missile range. Step_4. And doing the same for placing rest of IADS batteries on next maximum network_algorithm_1 measure where node distance from first placed IADS missile battery is more than first IADS battery's interceptor missile range For example, one generated weighted edge location network has been introduced below. One, two and three IADS missile batteries have been applied sequentially on that generated weighted edge location network where interceptor missile ranges are sequentially 80 km, 70 km and 20 km. Figure 1: Weighted edge network with asset values. Each asset value is scored as a fraction number (between 0.0 and 1.0) out of 1. All nodes are unprotected (colored as red) as no IADS missile battery has been placed. Here, The diameter of network is 180 km Figure 4: Asset value distribution histogram of weighted edge network Figure 5: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 Figure 3: Degree distribution histogram of weighted edge network Figure 6: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node number 12 and Second IADS missile battery is placed at location node number 11. First placed IADS missile battery’s interceptor missile range is 80 km and second placed IADS missile battery’s interceptor missile range is 70 km. Here, total unprotected asset value is 0.4 Figure 5: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 Figure 2: Weighted edge network with location node numbers. All nodes are unprotected (colored as red) as no IADS missile battery has been placed. Here, The diameter of network is 180 km **Strategy_2:** (same as Strategy_1 except using network_algorithm_2 instead of network_algorithm_1) Step_1. Considering weighted edge network with asset values for placing IADS's batteries Step_2. Computing network_algorithm_2 Step_3. Putting IADS missile battery with maximum interceptor missile range on the place of first maximum network_algorithm_2 measure and if second IADS missile battery exists with with maximum interceptor missile range or less than interceptor missile range then, putting the second IADS missile battery on the place of the next maximum network_algorithm_2 measure where distance from the node of first maximum network_algorithm_2 measure is more than maximum IADS missile battery interceptor missile range. Step_4. And doing the same for placing rest of IADS batteries on next maximum network_algorithm_2 measure where node distance from first placed IADS missile battery is more than first IADS battery's interceptor missile range For example, One, two and three IADS missile batteries have been applied sequentially on that previously generated weighted edge location network where interceptor missile ranges are sequentially 80 km, 70 km and 20 km. node 5, Second IADS missile battery is placed at location node 34 and Third IADS missile battery is placed at location node 39. First placed IADS missile battery's interceptor missile range is 80 km, second placed IADS missile battery's interceptor missile range is 70 km and third placed IADS missile battery's interceptor missile range is 20 km. Here, total unprotected asset value is 0.5 **Strategy 3:** Step_1. Considering weighted edge network with asset values for placing IADS's batteries. Step_2. Computing network_algorithm_1, network_algorithm_2, network_algorithm_3, network_algorithm_4, network_algorithm_5 and network_algorithm_6. Step_3. Creating six sequences by ordering location node numbers as descending orders of the results of six network algorithms sequentially such as sequence_number_1 is the sequence of node numbers ordered as descending order of network_algorithm_1's measuring values on nodes. Step_4. Estimating total unprotected asset values for placing IADS missile battery with maximum missile range on the place of first location node number from each sequence and choosing the lowest unprotected asset value corresponded node among six unprotected asset value corresponded nodes for placing first IADS with maximum missile range. Step_5. Estimating total unprotected asset values for placing second IADS missile battery (if any) with maximum missile range or with second maximum missile range on the place of second location node number from each sequence and choosing the lowest unprotected asset value corresponded location node among six unprotected asset value corresponded location nodes for placing second IADS with maximum missile range or with second maximum missile range. If the lowest unprotected asset value corresponded location node resulted from considering second node position of each sequence is same as the lowest unprotected asset value corresponded node resulted from considering first node position of each sequence, then repeat step 4 for the place of third location node number in sequences for placing second IADS missile battery with maximum missile range or with second maximum missile range. Step_6. Repeating Step_4 and Step_5 for placing the next IADS missile battery with previous missile ranges or with the next longer missile range (which is less than previous missile range). Step_7. If there are \(M\) number of nodes where the lowest unprotected asset value corresponded location node resulted from considering any node position of each sequence is same as the lowest unprotected asset value corresponded node resulted from considering previous node position of each sequence, then, for placing the last \(M\) number of IADS batteries with previous missile ranges or with the the most minimum missile ranges, detect all unprotected location nodes where previously placed IADS batteries interceptor missiles can't reach. Step_8. Estimating total unprotected asset values for placing the last \(M\) number of IADS batteries on each unprotected location node and choosing the lowest total unprotected asset value corresponding to \(M\) number of unprotected location nodes for placing the last \(M\) number of IADS batteries with previous missile ranges or iADS batteries with previous missile target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for target value corresponded to \(M\) number of induces higher values for missile battery's interceptor missile range is 70 km. Here, total unprotected asset value is 0.4 ### Strategy_4: (same as Strategy_3 for first four steps) Step_1. Considering weighted edge network with asset values for placing IADS's batteries Step_2. Computing network_algorithm_1, network_algorithm_2, network_algorithm_3, network_algorithm_4, network_algorithm_5 and network_algorithm_6 Step_3. Creating six sequences by ordering location node numbers as descending orders of the results of six network algorithms sequentially such as sequence_number_1 is the sequence of node numbers ordered as descending order of network_algorithm_1's measuring values on nodes. Step_4. Estimating total unprotected asset values for placing IADS missile battery with maximum missile range on the place of first location node number from each sequence and choosing the lowest unprotected asset value corresponded node among six unprotected asset value corresponded nodes for placing first IADS with maximum missile range. Step_5. After placing the first IADS missile battery with maximum missile range, for placing the next IADS missile batteries, we shall consider those location nodes in six sequences where the distances from the first placed IADS missile battery are more than the first placed IADS missile battery's missile range. Step_6. Repeating Step_4 for placing the next IADS missile battery with previous missile ranges or with the next longer missile range (which is less than previous missile range) until all IADS missile batteries are placed. If the lowest unprotected asset value corresponded location node resulted from considering any node position of each sequence is same as the lowest unprotected asset value corresponded node resulted from considering previous node position of each sequence, then, we shall consider the second lowest unprotected asset value corresponded location node resulted from considering that node position of each sequence for placing IADS missile battery. For example, One, two and three IADS missile batteries have been applied sequentially on that previously generated weighted edge location network where interceptor missile ranges are sequentially 80 km, 70 km and 20 km. Figure 14: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 Figure 13: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node 12, Second IADS missile battery is placed at location node 22 and Third IADS missile battery is placed at location node 11. First placed IADS missile battery’s interceptor missile range is 80 km, second placed IADS missile battery’s interceptor missile interceptor missile range is 70 km and third placed IADS missile battery’s interceptor missile range is 20 km. Here, total unprotected asset value is 0.4 Figure 15: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node number 12 and Second IADS missile battery is placed at location node number 34. First placed IADS missile battery’s interceptor missile range is 80 km and second placed IADS missile battery's interceptor missile range is 70 km. Here, total unprotected asset value is 0.3000000000000004 ### Strategy_5: **(same as Strategy_3 and Strategy_4 for first four steps)** Step_1. Considering weighted edge network with asset values for placing IADS's batteries Step_2. Computing network_algorithm_1, network_algorithm_2, network_algorithm_3, network_algorithm_4, network_algorithm_5 and network_algorithm_6 Step_3. Creating six sequences by ordering location node numbers as descending orders of the results of six network algorithms sequentially such as sequence_number_1 is the sequence of node numbers ordered as descending order of network_algorithm_1's measuring values on nodes. Step_4. Estimating total unprotected asset values for placing IADS missile battery with maximum missile range on the place of first location node number from each sequence and choosing the lowest unprotected asset value corresponded node among six unprotected asset value corresponded node for placing first IADS with maximum missile range. Step_5. After placing the first IADS missile battery with maximum missile range, we shall estimate total unprotected asset values (but for this strategy, unlike Strategy_3, we shall consider only those unprotected asset values which are not protected by previously placed IADS missile batteries) for placing next IADS missile battery with maximum missile range or with the rest of minimum missile ranges on the place of the next location node number where the distances from the first placed IADS missile battery is more than the first placed IADS missile battery's missile range in each sequence and choosing the lowest unprotected asset value corresponded location node among six unprotected asset value corresponded location nodes for placing the next IADS with maximum missile range or with the rest of minimum missile ranges. Step_6. Repeating Step 5 for placing the next IADS missile battery with previous missile ranges or with the next longer missile range (which is less than previous missile range) until all IADS missile batteries are placed. For example, One, two and three IADS missile batteries have been applied sequentially on that previously generated weighted edge location network where interceptor missile ranges are sequentially 80 km, 70 km and 20 km. Figure 16: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node node node 12, Second IADS missile battery’s interceptor missile 34 and Third IADS missile battery is placed at location node 39. First placed IADS missile battery’s interceptor missile battery’s interceptor missile battery’s interceptor missile battery’s interceptor missile range is 80 km, second placed IADS missile battery’s interceptor missile range is 70 km and third placed IADS missile battery’s interceptor missile range is 20 km. Here, total unprotected asset value is 0.30000000000004 Figure 17: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 Figure 18: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node number 12 and Second IADS missile battery is placed at location node number 10. First placed IADS missile battery's interceptor missile range is 80 km and second placed IADS missile battery's interceptor missile range is 70 km. Here, total unprotected asset value is 0.2 **Strategy_6:** (same as Strategy_3 and Strategy_4 for first four steps) Step_1. Considering weighted edge network with asset values for placing IADS's batteries Step_2. Computing network_algorithm_1, network_algorithm_2, network_algorithm_3, network_algorithm_4, network_algorithm_5 and step_3. Creating six sequences by ordering location node numbers as descending orders of the results of six network algorithms sequentially such as sequence_number_1 is the sequence of node numbers ordered as descending order of network_algorithm_1's measuring values on nodes. Step_4. Estimating total unprotected asset values for placing IADS missile battery with maximum missile range on the place of first location node number from each sequence and choosing the lowest unprotected asset value corresponded node among six unprotected asset value corresponded nodes for placing first IADS with maximum missile range. Step_5. After placing first IADS missile battery, we shall choose nodes from longest shortest distance path from first placed IADS missile battery and shortest path between previous shortest distance end node (end node of shortest distance path from first placed IADS missile battery) and first placed IADS missile battery location node. Step_6. Including that, we shall choose nodes from shortest distance path between network diameter source node and first placed IADS_msile_battery's location node and we shall also choose nodes from shortest distance path between network diameter target node and first placed IADS missile battery's location node. Step_7. Creating sequence of chosen nodes without any order of ascending or descending and, from that sequence, we shall select optimal location nodes for placing the rest of IADS missile batteries based on less unprotected asset value cost as per rest of IADS missile batteries ranges. For example, One, two and three IADS missile batteries have been applied sequentially on that previously generated weighted edge location network where interceptor missile ranges are sequentially 80 km, 70 km and 20 km. **Figure 19**: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node 12, Second IADS missile battery is placed at location node 23 and third IADS missile battery is placed at location node 49. First placed IADS missile battery’s interceptor missile range is 80 km, second placed IADS missile battery’s interceptor missile battery’s interceptor missile range is 70 km and third placed IADS missile battery’s interceptor missile range is 20 km. Here, total unprotected asset value is 0.1 **Figure 20**: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 ### Strategy_7: (same as Strategy_3 and Strategy_4 for first four steps) Step_1. Considering weighted edge network with asset values for placing IADS's batteries Step_2. Computing network_algorithm_1, network_algorithm_2, network_algorithm_3, network_algorithm_4, network_algorithm_5 and network_algorithm_6 Step_3. Creating six sequences by ordering location node numbers as descending orders of the results of six network algorithms sequentially such as sequence_number_1 is the sequence of node numbers ordered as descending order of network_algorithm_1's measuring values on nodes. Step_4. Estimating total unprotected asset values for placing IADS missile battery with maximum missile range on the place of first location node number from each sequence and choosing the lowest unprotected asset value corresponded node among six unprotected asset value corresponded nodes for placing first IADS with maximum missile range. Step_5. After placing first IADS missile battery, we shall choose nodes from longest shortest distance path from first placed IADS missile battery and shortest path between previous shortest distance end node (end node of shortest distance path from first placed IADS missile battery) and first placed IADS missile battery location node. Step_6. Including that, we shall choose nodes from shortest distance path between network diameter source node and first placed IADS_mssle battery's location node and we shall also choose nodes from shortest distance path between network diameter target node and first placed IADS missile battery's location node. Step_7. Creating sequence of chosen nodes without any order of ascending or descending and from that sequence, creating another sequence from betweenness centrality subset of previously created subset. Step_8. From the last created sequence, we shall select optimal location nodes for placing the rest of IADS missile batteries based on less unprotected asset value cost as per rest of IADS missile batteries ranges. For example, One, two and three IADS missile batteries have been applied sequentially on that previously generated weighted edge location network where interceptor missile ranges are sequentially 80 km, 70 km and 20 km. Figure 23: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 Figure 22: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node 12, Second IADS missile battery is placed at location node 48 and third IADS missile battery is placed at location node 23. First placed IADS missile battery’s interceptor missile range is 80 km, second placed IADS missile battery’s interceptor missile range is 70 km and third placed IADS missile battery’s interceptor missile range is 20 km. Here, total unprotected asset value is 0.3000000000000004 Figure 23: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. One IADS missile battery is placed at location node number 12. Placed IADS missile battery’s interceptor missile range is 80 km Here, total unprotected asset value is 0.5 location node number 23. First placed IADS missile battery's interceptor missile range is 80 km and second placed IADS missile battery's interceptor missile range is 70 km. Here, total unprotected asset value is 0.3000000000000004 ## V Computational Results Table 1 has been created for results of two IADS missile batteries where interceptor missile battery ranges are sequentially 80 km and 70 km where worst case scenario's unprotected asset value percentages (including calculating interceptor missile's interceptor probabilities by considering only one IADS missile battery's one interceptor missile for one attacker missile) have been listed for few generated weighted edge networks and for each strategy. ## VI Computational Results Table 1 has been created for results of two IADS missile batteries where interceptor missile battery ranges \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline **Strategy** & **Generated** & **Generated** & **Generated** & **Generated** & **IADS** & **IADS** & **After** & **placing** & **IADS missile** \\ **number** & **weighted** & **weighted** & **weighted** & **weighted** & **weighted** & **batteries** & **batteries** & **batteries** & **IADS** & **batteries**, **total** \\ & **edge** & **edge** & **edge** & **edge** & **location** & **location** & **interceptor** & **missile** & **Unprotected** \\ & **location** & **location** & **location** & **location** & **missile** & **missile** & **batteries** & **batteries** & **batteries** & **batteries** \\ & **location** & **location** & **location** & **location** & **location** & **missile** & **batteries** & **batteries** & **batteries** & **batteries** \\ & **network’s** & **network’s** & **network’s** & **network’s** & **network** & **missile** & **missile’s** & **batteries** & **batteries** & **batteries** & **batteries** \\ & **small** & **node** & **Diameter** & **Minimum** & **total asset** & **ranges in** & **interceptor** & **probability** & **Unprotected** & **(Worst Case** \\ & **worldness** & **number** & & **edge length** & **value** & **followerer** & **asset value** & **Scenario** ) \\ & & & & **Maximum** & & & & & & \\ & & & & **edge length** & & & & & \\ & & & & **(Minimum,** & & & & & \\ & & & & **Maximum)** & & & & & & \\ \hline Strategy\_1 & \(\sim\)**0.39692** & 50 & 123.0 km & (20.5 km, 184.5 km) & \(\approx\)14.9 & 80 km, 70 km & 98\% & 1.6 & \(\approx\)**12.52349\%** \\ \hline Strategy\_2 & \(\sim\)**0.39692** & 50 & 123.0 km & (20.5 km, 184.5 km) & \(\approx\)14.9 & 80 km, 70 km & 98\% & 1.6 & \(\approx\)**12.52349\%** \\ \hline Strategy\_3 & \(\sim\)**0.39692** & 50 & 123.0 km & (20.5 km, 184.5 km) & \(\approx\)14.9 & 80 km, 70 km & 98\% & 0.6 & \(\approx\)**5.94631\%** \\ \hline Strategy\_4 & \(\sim\)**0.39692** & 50 & 123.0 km & (20.5 km, 184.5 km) & \(\approx\)14.9 & 80 km, 70 km & 98\% & 1.8 & \(\approx\)**13.83893\%** \\ \hline Strategy\_5 & \(\sim\)**0.39692** & 50 & 123.0 km & (20.5 km, 184.5 km) & \(\approx\)14.9 & 80 km, 70 km & 98\% & 0.2 & \(\approx\)**3.315436\%** \\ \hline \end{tabular} Figure 25: Violet nodes are where IADS missile batteries have been placed and red nodes are unprotected location node numbers where IADS missile battery’s interceptor missiles can’t reach. First IADS missile battery is placed at location node 12, Second IADS missile battery is placed at location 23 and third IADS missile battery is placed at location node 48. First placed IADS missile battery’s interceptor missile range is 80 km, second placed IADS missile battery’s interceptor missile range is 70 km and third placed IADS missile battery’s interceptor missile range is 20 km. Here, total unprotected asset value is 0.3000000000000004 Table 2 has been created for results for three IADS missile batteries where interceptor missile battery ranges are sequentially 110 km, 90 km and 80 km where worst case scenario's unprotected asset value percentages (including calculating interceptor missile's interceptor probabilities by considering only one IADS missile battery's one interceptor missile for one attacker missile) have been listed for few generated and for each strategy. \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Strategy number** & **Generated weighted edge** & **Generated weighted edge** & **Generated weighted edge** & **Generated weighted edge** & **IADS missile edge** & **IADS missile batteries** & **After placing IADS missile batteries, total in interceptor missile’s** & **Inprotected asset value** & **Inprotected asset value** & **Inprotected asset value** & **Inprotected asset value** & **Inprotected asset value** \\ \hline Strategy\_1 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 0.9 & \(\approx\)**8.39130\%** \\ \hline Strategy\_2 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 1.0 & \(\approx\)**9.101449\%** \\ \hline Strategy\_3 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 0.6 & \(\approx\)**6.26087\%** \\ \hline Strategy\_4 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 0.1 & \(\approx\)**2.71014\%** \\ \hline Strategy\_5 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 0.1 & \(\approx\)**2.71014\%** \\ \hline Strategy\_6 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 0.6 & \(\approx\)**6.26087\%** \\ \hline Strategy\_7 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 318 km) & 13.8 & 110 km, 90 km, 80 km & 98\% & 0.6 & \(\approx\)**6.26087\%** \\ \hline \end{tabular} \end{table} Table 1: Results for two IADS missile batteries where interceptor missile battery ranges are sequentially 80 km and 70 km where worst case scenario’s unprotected asset value percentages (including calculating interceptor missile’s interceptor probabilities by considering only one IADS missile battery’s one interceptor missile for one attacker missile) have been listed for each of few generated weighted edge networks and for each strategy. \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline & & & & & & \multicolumn{1}{p{42.7pt}|}{80 km} & & & \\ \hline Strategy\_7 & \(\approx\)0.640606 & 50 & 200.0 km & (25 km, 225 km) & 15.7 & 110 km, 90 km, 90 km, 80 km & 98\% & 1.0 & \(\sim\)8.2420382\% \\ \hline Strategy\_1 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 5.1 & \(\sim\)34.45455\% \\ \hline Strategy\_2 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 8.9 & \(\sim\)58.636363\% \\ \hline Strategy\_3 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 8.0 & \(\sim\)52.909090\% \\ \hline Strategy\_4 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 4.1 & \(\sim\)28.090909\% \\ \hline Strategy\_5 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 3.5 & \(\sim\)24.272727\% \\ \hline Strategy\_6 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 5.8 & \(\sim\)38.90909\% \\ \hline Strategy\_7 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 80 km & 98\% & 6.2 & \(\sim\)41.454545\% \\ \hline Strategy\_1 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 98\% & 10.5 & \(\sim\)74.464789\% \\ \hline Strategy\_2 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 98\% & 10.8 & \(\sim\)76.535211\% \\ \hline Strategy\_3 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 98\% & 10.8 & \(\sim\)76.535211\% \\ \hline Strategy\_4 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 98\% & 10.3 & \(\sim\)73.084507\% \\ \hline Strategy\_5 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 98\% & 9.4 & \(\sim\)66.873239\% \\ \hline Strategy\_6 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 98\% & 10.2 & \(\sim\)72.394366\% \\ \hline Strategy\_7 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 80 km & 11.2 & \(\sim\)79.295775\% \\ \hline \end{tabular} \end{table} Table 2: Results for three IADS missile batteries where interceptor missile battery ranges are sequentially 110 km, 90 km and 80 km where worst case scenario’s unprotected asset value percentages (including calculating interceptor missile’s interceptor probabilities by considering only one IADS missile battery’s one interceptor missile for one attacker missile) have been listed for each of few generated weighted edge networks and for each strategy. Table 3 has been created for results for four IADS missile batteries where interceptor missile battery ranges are sequentially 120 km, 110 km, 90 km and 80 km where worst case scenario's unprotected asset value percentages (including calculating interceptor missile's interceptor probabilities by considering only one IADS missile battery's one interceptor missile for one attacker missile) have been listed for few generated weighted edge networks and for each strategy. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline **Strategy** & **Generated** & **Generated** & **Generated** & **Generated** & **Generated** & **IADS** & **IADS** & **After** & **placing** & **IADS missile** \\ **number** & **weighted** & **weighted** & **weighted** & **weighted** & **weighted** & **missile** & **missile** & **missile** & **IADS** & **batteries, total** \\ & **edge** & **edge** & **edge** & **edge** & **edge** & **edge** & **interceptor** & **interceptor** & **missile** & **Unprotected** \\ & **location** & **location** & **location** & **location** & **location** & **interceptor** & **interceptor** & **intercer** & **intercer** & **missile** & **batteries, total** \\ & **network’s** & **network’s** & **network’s** & **network’s** & **network** & **missile** & **missile’s** & **batteries, total** & **batteries, total** & **batteries, total** \\ & **small** & **node** & **Diameter** & **Minimum** & **total asset** & **ranges in** & **interceptor** & **total** & **Percentage** \\ & **worldness** & **number** & & & **edge length** & **value** & **kilometer** & **probability** & **Unprotected** \\ & & & & & & & & & **asset value** & **Scenario )** \\ & & & & & & & & & & \\ & & & & & & & & & & \\ \hline Strategy\_1 & \(\approx\)0.25351 & 50 & 270.0 km & (45 km, 405 km) & 14.1 & 120 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 100 km, 110 km, 100 km, 110 km, 110 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 1000 km, 100 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, \begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline & & & & 119 km) & & & 110 km, & & & & \\ & & & & & & 80 km & & & & \\ \hline Strategy\_3 & \(\approx\)0.934170 & 50 & 697.0 km & (17 km, 119 km) & 14.6 & 120 km, 110 km, 90 km, 80 km & 98\% & 6.8 & \(\approx\)47.64383\% \\ \hline Strategy\_4 & \(\approx\)0.934170 & 50 & 697.0 km & (17 km, 119 km) & 14.6 & 120 km, 110 km, 90 km, 80 km & 5.7 & \(\approx\)40.26027\% \\ \hline Strategy\_5 & \(\approx\)0.934170 & 50 & 697.0 km & (17 km, 119 km) & 14.6 & 120 km, 110 km, 90 km, 80 km & 98\% & 3.0 & \(\sim\)22.136986\% \\ \hline Strategy\_6 & \(\approx\)0.934170 & 50 & 697.0 km & (17 km, 119 km) & 14.6 & 120 km, 110 km, 90 km, 80 km & 98\% & 3.1 & \(\sim\)22.8082\% \\ \hline Strategy\_7 & \(\approx\)0.934170 & 50 & 697.0 km & (17 km, 119 km) & 14.6 & 120 km, 110 km, 90 km, 80 km & 98\% & 4.0 & \(\sim\)28.849315\% \\ \hline Strategy\_1 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 98\% & 9.7 & \(\sim\)61.043478\% \\ \hline Strategy\_2 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 10.3 & \(\sim\)64.695652\% \\ \hline Strategy\_3 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 11.4 & \(\sim\)71.39130\% \\ \hline Strategy\_4 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 11.8 & \(\sim\)73.826089\% \\ \hline Strategy\_5 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 98\% & 7.0 & \(\sim\)44.608696\% \\ \hline Strategy\_6 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 98\% & 9.7 & \(\sim\)61.043478\% \\ \hline Strategy\_7 & \(\approx\)1.11291 & 50 & 960.0 km & (15 km, 135 km) & 16.1 & 120 km, 110 km, 90 km, 80 km & 10.0 & \(\sim\)62.869565\% \\ \hline \end{tabular} \end{table} Table 3: Results for four IADS missile batteries where interceptor missile battery ranges are sequentially 120 km, 110 km, 90 km and 80 km where worst case scenario’s unprotected asset value percentages (including calculating interceptor missile’s interceptor Table 4 has been created for results for five IADS missile batteries where interceptor missile battery ranges are sequentially 200 km, 120 km, 110 km, 90 km and 80 km where worst case scenario's unprotected asset value percentages (including calculating interceptor missile's interceptor probabilities by considering only one IADS missile battery's one interceptor missile for one attacker missile) have been listed for few generated weighted edge networks and for each strategy. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline **Strategy** & **Generated** & **Generated** & **Generated** & **Generated** & **Generated** & **IADS** & **IADS** & **After** & **Alleging** & **Alleging** \\ **number** & **weighted** & **weighted** & **weighted** & **weighted** & **weighted** & **weighted** & **whistereters** & **haldes** & **haldes** & **haldes** \\ & **edge** & **edge** & **edge** & **edge** & **edge** & **edge** & **batrieres** & **haldes** & **haldes** & **haldes** \\ & **location** & **location** & **location** & **location** & **location** & **Interceptor** & **interceptor** & **missile** & **minispile** \\ & **network's** & **network's** & **network's** & **network's** & **network's** & **network** & **missile** & **missile's** & **haldes** \\ & **small** & **node** & **Diameter** & **Minimum** & **total asset** & **ranges in** & **interceptor** & **total** & **Percentage** \\ & **worldness** & **number** & & & **edge length** & **value** & **kilometer** & **probability** & **Unprotected** & **(Worst Case)** \\ & & & & & **Maximum** & & & & & & **sexel value** \\ & & & & & **diag length** & & & & & & **sexel value** \\ & & & & & **Maximum** & & & & & & \\ \hline Strategy\_1 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 110 km, 90 km, 80 km & 3.3 & \(\approx\)21.96296\% \\ \hline Strategy\_2 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 110 km, 110 km, 90 km, 80 km & 3.8 & \(\approx\)24.987654\% \\ \hline Strategy\_3 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 110 km, 90 km, 80 km & 4.5 & \(\approx\)29.2222\% \\ \hline Strategy\_4 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 110 km, 90 km, 80 km & 3.8 & \(\approx\)24.98765\% \\ \hline Strategy\_5 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 110 km, 90 km, 80 km & 2.3 & \(\approx\)15.91358\% \\ \hline Strategy\_6 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 110 km, 80 km, 80 km & 4.0 & \(\approx\)26.19753\% \\ \hline Strategy\_7 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 120 km, 90 km, 80 km & 3.7 & \(\approx\)24.382716\% \\ \hline Strategy\_8 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 110 km, 90 km, 80 km & 3.8 & \(\approx\)24.98765\% \\ \hline Strategy\_9 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 120 km, 90 km, 80 km & 4.0 & \(\approx\)26.19753\% \\ \hline Strategy\_9 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 110 km, 90 km, 80 km & 4.0 & \(\approx\)26.19753\% \\ \hline Strategy\_1 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 120 km, 110 km, 90 km, 80 km & 3.7 & \(\approx\)24.382716\% \\ \hline Strategy\_1 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 110 km, 110 km, 90 km, 80 km & 3.8 & \(\approx\)24.98765\% \\ \hline Strategy\_1 & \(\approx\)0.344096 & 50 & 480.0 km & (80.0 km, 640.0 km) & 16.2 & 200 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 11 \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Strategy\_1 & \(\sim\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 8.6 & \(\sim\)59.333\% \\ \hline Strategy\_2 & \(\simeq\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 10.5 & 72.0\% \\ \hline Strategy\_3 & \(\simeq\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 110 km, 110 km, 90 km, 80 km & 5.3 & \(\sim\)37.3333\% \\ \hline Strategy\_4 & \(\simeq\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 4.4 & \(\approx\)31.333\% \\ \hline Strategy\_5 & \(\simeq\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 2.6 & \(\simeq\)19.33333\% \\ \hline Strategy\_6 & \(\simeq\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.3 & \(\approx\)50.66667\% \\ \hline Strategy\_7 & \(\simeq\)0.431319 & 50 & 375.0 km & (75.0 km, 675.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 4.6 & \(\simeq\)32.6666\% \\ \hline Strategy\_1 & \(\simeq\)0.652398 & 50 & 550.0 km & (50.0 km, 400.0 km) & 15.3 & 200 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 1000 km, 100 km, 1000 km, 1000 km, 100 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 100 km, 1000 km, 1000 km, 100 \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Strategy\_5 & \(\simeq\)0.652398 & 50 & 550.0 km & (50.0 km, 400.0 km) & 15.3 & 200 km, 120 km, 110 km, 90 km, 80 km & 2.6 & \(\simeq\)18.65359\% \\ \hline Strategy\_6 & \(\simeq\)0.652398 & 50 & 550.0 km & (50.0 km, 400.0 km) & 15.3 & 200 km, 120 km, 110 km, 90 km, 80 km & 6.8 & \(\simeq\)45.555556\% \\ \hline Strategy\_7 & \(\simeq\)0.652398 & 50 & 550.0 km & (50.0 km, 400.0 km) & 15.3 & 200 km, 110 km, 110 km, 90 km, 80 km & 5.1 & \(\simeq\)34.66667\% \\ \hline Strategy\_1 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 4.9 & \(\simeq\)34.666667\% \\ \hline Strategy\_2 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 6.7 & \(\simeq\)46.666667\% \\ \hline Strategy\_3 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 5.5 & \(\simeq\)38.666667\% \\ \hline Strategy\_4 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 4.4 & \(\simeq\)31.33333\% \\ \hline Strategy\_5 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 120 km, 110 km, 90 km, 80 km & 1.8 & 14.0\% \\ \hline Strategy\_6 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 110 km, 120 km, 110 km, 90 km, 80 km & 6.5 & \(\simeq\)45.333333\% \\ \hline Strategy\_7 & \(\simeq\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 120 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 100 km, 110 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, By aggregating each optimal solution for each generated network, we found Table 5. By aggregating each optimal solution for each generated network, we found Table 5. By aggregating each optimal solution for each generated network, we found Table 5. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Strategy\_2 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.9 & \(\approx\)57.3\% \\ \hline Strategy\_3 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.5 & \(\approx\)54.5\% \\ \hline Strategy\_4 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.1 & \(\sim\)51.7\% \\ \hline Strategy\_5 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.5 & \(\sim\)54.5\% \\ \hline Strategy\_6 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.5 & \(\sim\)54.5\% \\ \hline Strategy\_7 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.5 & \(\sim\)54.5\% \\ \hline Strategy\_8 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.5 & \(\sim\)54.5\% \\ \hline Strategy\_9 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.1 & \(\sim\)51.7\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.1 & \(\sim\)51.7\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.1 & \(\sim\)51.7\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 7.6 & \(\sim\)48.2\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 6.6 & \(\sim\)48.2\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 6.6 & \(\sim\)48.2\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 6.6 & \(\sim\)50.3\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 80 km & 98\% & 6.9 & \(\sim\)50.3\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 90 km, 110 km, 90 km, 80 km & 98\% & 6.9 & \(\sim\)50.3\% \\ \hline Strategy\_1 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 120 km, 110 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 90 km, 110 km, 110 km, 90 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 10 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 10 km, 110 km, 110 km, 110 km, 110 km, 10 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 10 km, \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Strategy 5 & \(\approx\)**0.807039** & 50 & 710.0 km & (10 km, 70 km) & \(\approx\)15.4 & 80 km, 70 km & 98\% & 6.6 & \(\approx\)44.00\% \\ \hline Strategy 6 & & & & & & & & \\ Or, & & & & & & & & \\ Strategy 7 & & & & & & & & \\ \hline Strategy 5 & \(\approx\)1.145887 & 50 & 560.0 km & (10 km, 80 km) & \(\approx\)14.1 & 80 km, 70 km & 98\% & 7.9 & \(\approx\)56.907801\% \\ \hline Strategy 5 & \(\approx\)0.423134 & 50 & 210.0 km & (35 km, 315 km) & 13.8 & 110 km, 90 km, 90 km, 80 km & 0.0 & 2.0\% \\ \hline Strategy 4 & \(\approx\)0.564176 & 50 & 270.0 km & (18 km, 144 km) & 15.0 & 110 km, 90 km, 90 km, 80 km & 2.1 & \(\approx\)15.72\% \\ Strategy 5 & & & & & & 80 km & & \\ \hline Strategy 5 & \(\approx\)0.64066 & 50 & 200.0 km & (25 km, 225 km) & 15.7 & 110 km, 90 km, 90 km, 90 km, 80 km & 0.1 & \(\approx\)2.6242038\% \\ \hline Strategy 5 & \(\approx\)0.829442 & 50 & 345.0 km & (23 km, 184 km) & 15.4 & 110 km, 90 km, 90 km, 80 km & 3.5 & \(\approx\)24.27277\% \\ \hline Strategy 5 & \(\approx\)1.16390 & 50 & 960.0 km & (20 km, 160 km) & 14.2 & 110 km, 90 km, 90 km, 80 km & 9.4 & \(\approx\)66.873239\% \\ \hline Strategy 5 & \(\approx\)0.25351 & 50 & 270.0 km & (45 km, 405 km) & 14.1 & 120 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 100 km, 110 km, 100 km, 110 km, 1100 km, 1100 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 110 km, 100 km, 110 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 1000 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 100 km, 1000 km, 100 km, 1000 km, 100 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 100 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 10000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1000 km, 1 Ordinary Least Square Linear Regression analysis (Figure 26) of Table 5 where dependent variable is generated weighted edge location network's small worldness and independent variables are generated weighted edge location network diameter and sum of IADS missile batteries interceptor missile ranges. Here, R-squared is \(\approx\)0.7, coefficient of generated weighted edge location network diameter is 0.0008, coefficient of sum of IADS missile batteries interceptor missile ranges is -0.0007 and constant is 0.5615 Ordinary Least Square Linear Regression analysis (Figure 27) of Table 5 where dependent variable is total unprotected asset value percentage and independent variables are generated weighted edge location network diameter and sum of IADS missile batteries interceptor missile ranges. Here, R-squared is \(\approx\)0.6, coefficient of generated weighted edge location network diameter is 0.0463, coefficient of sum of IADS missile batteries interceptor missile ranges is -0.0440 and constant is 15.6197 ## 6 Conclusion As it is found from computational results that Strategy_5 is the most optimal strategy for small world networks and networks which are not small world networks. In some cases, Strategy_4, Strategy_6 and Strategy_7 are also optimal strategies along with Strategy_5. Eventually, by analyzing computational results, it can be stated that Strategy_6 and Strategy_7 tend to be optimal when the location network has less small world characteristics and the diameter is far longer than the sum of all IADS missile batteries interceptor missile ranges. On Figure 26: 3D Depiction of Table 5’s linear regression analysis for each sum of IADS missile batteries interceptor missile ranges where sum of IADS missile batteries interceptor missile ranges are sequentially 150 km, 280 km, 400 km and 600 km. Red dots are training data, blue dots are predicted data and blue lines are linear regression lines for each sum of IADS missile batteries interceptor missile ranges. \begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline Strategy\_5 & \(\approx\)0.652398 & 50 & 550.0 km & (50.0 km, 400.0 km) & 15.3 & 200 km, 120 km, 110 km, 90 km, 80 km & 98\% & 2.6 & \(\approx\)18.65359\% \\ \hline Strategy\_5 & \(\approx\)0.880917 & 50 & 600.0 km & (40.0 km, 360.0 km) & 14.7 & 200 km, 110 km, 90 km, 80 km & 98\% & 1.8 & 14.0\% \\ \hline Strategy\_5 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 110 km, 110 km, 80 km & 98\% & 4.1 & \(\approx\)30.7\% \\ \hline Strategy\_5 & \(\approx\)1.12557 & 50 & 1425.0 km & (25.0 km, 250.0 km) & 14.0 & 200 km, 110 km, 110 km, 90 km, 80 km & & \\ \hline \end{tabular} \end{table} Table 5: Aggregation of optimal solution for each generated network Figure 27: 3D Depiction of Table 5’s linear regression analysis for each sum of IADS missile batteries interceptor missile ranges where sum of IADS missile batteries interceptor missile ranges are sequentially 150 km, 280 km, 400 km and 600 km. Red dots are training data, blue dots are predicted data and blue lines are linear regression lines for each sum of IADS missile batteries interceptor missile ranges. the other hand, Strategy_4 tends to be optimal when the location network has more small world characteristics and the diameter is equal or less than the sum of all IADS missile batteries interceptor missile ranges. From regression analysis, it can be stated that, with increasing the location network diameter and the sum of all IADS missile batteries interceptor missile ranges, small worldness value of the location network for optimal strategy also increases (more small world value means less small world characteristics) and, with increasing the location network diameter and the sum of all IADS missile batteries interceptor missile ranges, total unprotected asset value percentages (worst case scenario) of the location network for optimal strategy also increases. For future research, we may consider scale-free networks [25] as location networks and in place of regression analysis we may use other supervised learning methods such as artificial neural networks [26] to understand relationship patterns among location networks properties and IADS missile battery's interceptor missile's properties.
2306.03049
WHO-IS: Wireless Hetnet Optimization using Impact Selection
We propose a method to first identify users who have the most negative impact on the overall network performance, and then offload them to an orthogonal channel. The feasibility of such an approach is verified using real-world traces, network simulations, and a lab experiment that employs multi-homed wireless stations. In our experiment, as offload target, we employ LiFi IR transceivers, and as the primary network we consider a typical Enterprise Wi-Fi setup. We found that a limited number of users can impact the overall experience of the Wi-Fi network negatively, hence motivating targeted offloading. In our simulations and experiments we saw that the proposed solution can improve the collision probability with 82% and achieve a 61 percentage point air utilization improvement compared to random offloading, respectively.
Thomas Sandholm, Irene Macaluso, Sayandev Mukherjee
2023-06-05T17:21:46Z
http://arxiv.org/abs/2306.03049v3
# WHO-IS: Wireless Hetnet Optimization ###### Abstract We propose a method to first identify users who have the most negative impact on the overall network performance, and then offload them to an orthogonal channel. The feasibility of such an approach is verified using real-world traces, network simulations, and a lab experiment that employs multi-homed wireless stations. In our experiment, as offload target, we employ LiFi IR transceivers, and as the primary network we consider a typical Enterprise Wi-Fi setup. We found that a limited number of users can impact the overall experience of the Wi-Fi network negatively, hence motivating targeted offloading. In our simulations and experiments we saw that the proposed solution can improve the collision probability with 82% and achieve a 61 percentage point air utilization improvement compared to random offloading, respectively. ## 1 Introduction With the emergence of wireless IoT devices, an increased emphasis on remote video conferencing, and an ever increasing demand from new bandwidth-hungry applications, such as AR/VR, Wi-Fi networks are struggling to keep up. New spectrum availability and more spectrum efficient protocols mitigate congestion but do not fully solve the problem, due to the trade-offs involved in operating on different spectrum bands, e.g. range, throughput trade-offs. In an office environment this trend is exacerbated with dense AP and station deployments, where limited orthogonal Wi-Fi channels force use of narrower bands and hence lower throughput capacity to avoid excessive interference. In such environments, saturation of airtime utilization, and spiking collision probabilities cause packet delays that lead to a poor user experience, especially for latency-sensitive applications. These trends have led to renewed interest in meeting the demand with heterogeneous networks (HetNets) composed of technologies and protocols operating in non-interfering spectrum bands. One such technology often proposed for load balancing, and capacity enhancement is Light Fidelity (LiFi). The first generation of LiFi systems operated in the visible light spectrum. However, newer generations of LiFi systems use the infrared (IR) spectrum. With the introduction of IR-based as opposed to visible light communication LiFi no longer suffers from many of the initial drawbacks that made it impractical for general use, such as light dimming degradation and poor uplink performance due to glare. Furthermore, throughput performance of commercially available hardware is starting to match that of Wi-Fi. The current downsides of LiFi include expensive hardware, limited reach (line of sight, LoS), and intrusive deployments with essentially a dedicated LiFi antenna for each receiver. LiFi APs can handle tens of clients concurrently. However, the very limited range and sensitivity to receiving orientation angle (ROA) make the 1-1 mapping of antenna to receiver the most practical setup at the present time, at least until the multi-user time-slicing standards currently in development are finalized. LiFi APs typically operate on the same spectrum band, so inter-cell interference becomes an issue if deployments are too dense. To address some of these practical problems we utilize a setup that involves a LiFi antenna that can be steered through a pan-tilt servo mount to serve one dedicated user over LiFi among a group of users within range. Beyond cost savings, there are many benefits to directing the antenna more precisely to the receiver including achieving longer distances, allowing a much larger area to be covered while mitigating inter-cell interference, and avoiding multi-user access degradation. In our experimental setup a single antenna can easily serve a cluster of four standard-sized office cubicles while mounted in a ceiling 3-4m high whereas the LiFi hardware only allows a 68% angle reach within about 2 meters, limiting it to transmit from something like an office lamp to a device on a desk. Although the technology will undoubtedly improve both in reach and angle coverage we believe the servo steering technique could be feasible to extend the range of LiFi while still providing dedicated Line of Sight (LoS) communication, which is considered to be a security benefit of LiFi. Our primary contribution in this work is a predictive model, based on neural networks, that: * _predicts_ which Wi-Fi station should be offloaded to an orthogonal channel, in our case LiFi, * _given_ current measurements both from the primary and offloading target network * _to optimize_ network KPIs, such as air-time utilization and collision probability. We validate our work with real-world trace analysis, Wi-Fi network simulations (in NS3), and a lab experiment with commercial-grade Wi-Fi and LiFi hardware and off-the-shelf end-user devices. The remainder of the paper is organized as follows. We discuss related work in Section 2, and motivate our approach using an analysis of public Wi-Fi traces in Section 3. In Section 4 we define our problem more formally, followed by an evaluation using simulations (Section 5) and experiments (Sections 6 and 7). Finally, we provide concluding remarks in Section 8. ## 2 Related Work Predicting or selecting one or more users to offload from a given network or access point to another based on some optimality criterion has been studied for a long time under the topics of "user association" and "user handoff." For example, [1] applied a game theoretic framework to analyze association in a network with High Speed Downlink Packet Access (HSDPA) and LTE, while [2] formulated a stochastic game to model non-cooperative users competing for limited resources from multiple cellular base stations. A general utility-optimizing formulation for user association in a heterogeneous network was proposed in [3], while [4] applied a multiple attribute decision making method with careful selection of user attributes to reduce computational complexity. Treating user association as a combinatorial optimization problem instead, a stochastic decision framework was proposed and analyzed in [5]. A fuzzy logic approach to designing handoff between a WLAN and a cellular network to reduce call dropping probability was studied in [6, 7], while [8, 9] applied a constrained Markov Decision Process formulation instead, and [10] proposed a user association scheme based on load-balancing using a cell-breathing scheme. To the best of our knowledge, our work is the first approach of using an auto-trained Neural Network1 (NN) to predict the user that is optimal to offload (e.g. from Wi-Fi to LiFi) using a KPI impact perspective. The approach has, however, been inspired by previous contributions in the areas of Machine Learning (ML) and Wi-Fi/LiFi HetNets. Footnote 1: we refer to our approach as a neural network as it may be implemented as a deep neural network (DNN) with many hidden layers in some deployments, but simpler more shallow networks in others. ### ML-driven HetNets Interference is classical problem in HetNets, and in [11] the authors propose an ML classification and offloading scheme to improve co-tier interference between femtocells. Support Vector Machine (SVM), Random Forest (RF), Artificial Neural Networks (ANN) and Convolutional Neural Networks (CNN) are all evaluated as alternative models for interference classification and subsequent offloading decisions. CNN and RF were the top performers. Interference is similar to our collision probability KPI; however, our solution differs in the way we formulate the offloading decision, in that we try to predict the user who impacts the current network most negatively. A reinforcement learning solution is proposed for joint power control and user association in a millimeter wave heterogeneous network in [12]. In [13], a recommender model was proposed to map users to access points (LTE or Wi-Fi), while in [14], the authors modeled the user association problem as a restless multi-armed bandit and exploited individual user behavior characteristics to maximize long-term expected system throughput. In [15]\(k\)-means clustering is used to classify users to improve handover decisions across HetNets based on user context. In [16] a DNN is trained offline to make optimal cache placement decisions in a HetNet. Our approach differs from all of the above in that we can accommodate multiple KPIs into our optimization through our problem formulation based on the negative impact score. ### Wi-Fi/LiFi HetNets From the early days of LiFi there has been work considering how to best manage a hybrid Wi-Fi and LiFi HetNet network [17, 18, 19, 20, 21]. These studies were evaluated with custom simulations, and assumed fixed LiFi beam directions. They also focused on improving user satisfaction and throughput as opposed to network KPIs as in our study. Furthermore, the models proposed were either using linear programming models or optimization heuristics based on game theory or genetic algorithms. Based on the complexity of Wi-Fi alone, we believe machine learning (ML) approaches such as neural networks (NNs) are a more promising basis for a solution and would also scale better to more users. More recently, this idea has been revisited [22] to formulate a resource allocation optimization problem minimizing delay and meeting a minimum data rate by assigning resource shares across Wi-Fi and LiFi APs. In that work, the LiFi antennas are deployed and directed statically to cover an entire meeting room. Since they allow multiple users on the same AP, they need to account for interference both on the LiFi and the Wi-Fi bands. Furthermore, since the LiFi beams are not targeted, the best signal of an AP is not guaranteed to be where the user is located who receives the signal. We believe that ML techniques such as NNs are better suited than traditional optimization problem formulations in capturing the complexity of both Wi-Fi and LiFi networks, and we think the allocations can be more efficient with dedicated LiFi channels since the beam range is so limited. Given the current cost of a LiFi antenna it is also both a cost issue and deployment hassle to litter the ceiling with one antenna for each position a device may be located. Furthermore, most office ceilings are higher than the 2m range of current LiFi transmitters. In [23] beam forming inspired by mmWave technology is mentioned as a future direction of LiFi, and the general problem of spectrum shortage is highlighted as a future research problem where LiFi offloading could help. We also note that, according to this overview which is based on the hardware we are using, current chipsets do not support handover between Wi-Fi and LiFi and thus mobility is a problem. However, the new converged 802.11bb specification does support handover, and once that is implemented in chipsets it becomes more interesting to study which users to offload rather than making the switch more efficient, which helps explain the focus of our work. ## 3 Motivation We analyze packet capture traces from Wi-Fi deployments to: * motivate our general approach of defining and selecting so-called _negative impact users_ to offload to LiFi (see below), * evaluate candidate statistics as impact predictors, and * validate some impact prediction models. ### Negative Impact Score First, we need to define what we mean by negative impact in order to identify Wi-Fi users (STAs) that are candidates for offloading to LiFi. The trace is divided into equally-sized time segments indexed by \(t=1,2,\dots\), and we then measure \(r_{t}\), the overall packet retry probability across all captured packets in each time segment \(t\). We also track all users who started sending or receiving packets (entered) or stopped sending or receiving packets (dropped out) in each time segment \(t\). Suppose a new user \(u\) entered the system in time segment \(t\), but all other active users in time segment \(t-1\) remained active in time segment \(t\) and no other users entered or departed the system in time segment \(t\). (This is likely to be the case if the time segments are short in duration and the system is not very heavily loaded.) Then we say that user \(u\) had a _negative impact_ on the system if the overall packet retry probability \(r_{t}\) in \(t\), the first time segment _with_ user \(u\) active, is _greater_ than the overall packet retry probability \(r_{t-1}\) in the previous time segment: \[r_{t}>r_{t-1}, \tag{1}\] or equivalently, \[\Delta_{u,t}^{\mathrm{e}}\equiv r_{t}-r_{t-1}>0. \tag{2}\] Note that since \(r_{t-1}\) and \(r_{t}\) are both aggregate system measurements, so is their difference \(\Delta_{t}=r_{t}-r_{t-1}\). However, because of our assumption that the only change to the system between time intervals \(t-1\) and \(t\) is the entry of the user \(u\), we can attribute the change in overall packet retry probability to \(u\), hence we are justified in attaching the subscript \(u\) to \(\Delta_{t}\). The superscript \(\mathrm{e}\) denotes the _entry_ of this user \(u\) into the system in time segment \(t\). Similarly, suppose that user \(u^{\prime}\) was active in time segment \(t^{\prime}-1\) and departed the system in time segment \(t^{\prime}\), while all other active users in time segment \(t^{\prime}-1\) remained active in time segment \(t^{\prime}\) and no other users entered or departed the system in time segment \(t^{\prime}\). Then we say that user \(u^{\prime}\) had a _negative impact_ on the system if the overall packet retry probability \(r_{t^{\prime}}\) in \(t^{\prime}\), the first segment _without_ user \(u^{\prime}\) active, is _lower_ than the overall packet retry probability \(r_{t^{\prime}-1}\) in the previous time segment: \[r_{t^{\prime}}<r_{t^{\prime}-1}, \tag{3}\] or equivalently, \[\Delta^{\mathrm{d}}_{u^{\prime},t^{\prime}}\equiv r_{t^{\prime}}-r_{t^{\prime }-1}<0, \tag{4}\] where again the subscript \(u^{\prime}\) for the aggregate system measurement \(\Delta_{t^{\prime}}\) is justified because of our assumptions above, and the superscript \(\mathrm{d}\) denotes the _departure_ of user \(u^{\prime}\) from the system. Now consider a single user \(u\), and suppose that in the trace, \(u\) is seen to enter the system in time segments \(t_{1},t_{2},\ldots,t_{n}\) and depart the system in time segments \(t^{\prime}_{1},t^{\prime}_{2},\ldots,t^{\prime}_{m}\). As before, we assume that the duration of each time segment is short enough that in each of these time segments, \(u\) is the only user to either enter or depart the system, and all other users retain their state of activity or inactivity unchanged from the immediately prior time segment. We also assume that the total time for the trace is short enough for us to assume (quasi)-stationarity, so that we may model \(\Delta^{\mathrm{e}}_{u,t_{1}},\ldots,\Delta^{\mathrm{e}}_{u,t_{n}}\) as independent identically distributed (i.i.d.) random variables with common expected value \(\mu^{\mathrm{e}}(u)\), and similarly model \(\Delta^{\mathrm{d}}_{u,t^{\prime}_{1}},\ldots,\Delta^{\mathrm{d}}_{u,t^{\prime }_{m}}\) as i.i.d. random variables with common expected value \(\mu^{\mathrm{d}}(u)\). Note that from (2) and (4), it follows that \(\mu^{\mathrm{e}}(u)>0\) and \(\mu^{\mathrm{d}}(u)<0\) respectively. The mean magnitudes \(|\mu^{\mathrm{e}}(u)|=\mu^{\mathrm{e}}(u)\) and \(|\mu^{\mathrm{d}}(u)|=-\mu^{\mathrm{d}}(u)\) may be seen as measures of the negative impact of user \(u\) entering and departing the system respectively. We can then define the Negative Impact Score (NIS) of user \(u\) as the sum of the above two negative impact measures of \(u\) entering and departing the system: \[\mathrm{NIS}(u)=|\mu^{\mathrm{e}}(u)|+|\mu^{\mathrm{d}}(u)|=\mu^{\mathrm{e}} (u)-\mu^{\mathrm{d}}(u). \tag{5}\] In practice, the two expectations \(\mu^{\mathrm{e}}(u)\) and \(\mu^{\mathrm{d}}(u)\) are estimated by \[\hat{\mu}^{\mathrm{e}}(u)=\frac{\Delta^{\mathrm{e}}_{u,t_{1}}+\cdots+\Delta^{ \mathrm{e}}_{u,t_{n}}}{n} \tag{6}\] and \[\hat{\mu}^{\mathrm{d}}(u)=\frac{\Delta^{\mathrm{d}}_{u,t^{\prime}_{1}}+\cdots +\Delta^{\mathrm{d}}_{u,t^{\prime}_{m}}}{m} \tag{7}\] respectively. ### STA Measurements Since many users may enter or drop out in the same segments, an accurate negative impact score for a single user relies on sampling over many segments where users enter and drop out many times. Our metric here can thus be seen as an approximation of measuring the impact directly on a per-user basis, which is not possible in this case due to the fact that we use public data sets without this granularity. However, in a real system deployment it may be possible to predict \(\mathrm{NIS}\) from STA statistics measured more directly. To determine the feasibility of different statistics we collect the following measurements from the traces for each STA: * _rx_ received bytes per second. * _tx_ sent bytes per second. * _size_ packet size (bytes). * _rssi_ RSSI signal (dBm). * _phyrate_ PHY rate based on MCS obtained (Mbps). * _packets_ number of packets. * _iat_ inter-arrival time of packets (s). * _retries_ retry probability of packets. ### Data Sets We use a public data set captured in five different venues around Portland State Univerity, Oregon (PSU) [24], as well as our own private radio capture in an Enterprise office setting (ENT). The private capture was necessary to obtain _rssi_ and _phyrate_ measurements, as well as to do a longitudinal study over an extended period of time (12 hours). ### Impact Outliers Given our approach of selecting individual STAs to offload to LiFi, we want to verify whether there are a few users (more than one, but not too many) with high enough negative impact scores to make it: 1. worthwhile to offload individual users to improve the overall network performance significantly, and 2. non-trivial to make the optimum selection of the user(s) to offload to LiFi. To determine whether there are outliers in terms of high negative impact we rank, and plot negative impact scores across STAs using the different data sets. Note that a higher \(\mathrm{NIS}\) means that the STA has a more negative impact on the overall network, and the \(x\)-axis is the NIS rank of a particular user, so we are looking for outliers in the top left of the plots. From Fig. 1 we see that all data sets revealed high NIS outliers. We should note that for some data set the top outlier was further away from the average than others as can be seen from the scale of the \(y\)-axis. Note that the NIS numbers are in probability units as NIS is computed from collision probabilities. ### Metric Predictor Analysis Next, we study the ability to predict NIS scores from the STA measurements listed in Sec. 3.2. We use ANOVA analyses of measurements statistics and look at F-score significance as a measure of which statistics show promise in predicting NIS for different data sets. The ANOVA analysis results can be seen in Table 1. ### Impact Class Prediction Based on the ANOVA analysis we now take the metric deemed as the best predictor in terms of lowest F-score (see Table 1) and predict the NIS with that predictor for a random user and train with the other users. We then pick 100 different random users and compute the average prediction success rate (over these 100 predictions). Instead of trying to predict the NIS of a user directly, we simplify the prediction problem by trying to predict only a binary value representing the tercile of this user's NIS value: \(0\) if this \begin{table} \begin{tabular}{|l|l|l|} \hline **Data Set** & **Top Metric (Lowest F score)** & **Significant Metrics** \\ \hline PSU CS & rx (\(1.7\times 10^{-1}\)) & - \\ \hline PSU UG & size (\(4.3\times 10^{-4}\)) & size \\ \hline PSU Library & size (\(1.2\times 10^{-5}\)) & size, tx \\ \hline PSU Powells & iat (\(8.8\times 10^{-2}\)) & - \\ \hline ENT & size (\(2\times 10^{-16}\)) & size, rx, rssi, iat \\ \hline \end{tabular} \end{table} Table 1: Significant metrics (5% level) in ANOVA analysis for different data sets. Figure 1: Ranked NIS scores user's NIS is in the top \(1/3\) of all users' NIS values, and \(1\) otherwise. We also compare our model predictor with a random predictor that picks 0 with probability \(1/3\) and 1 with probability \(2/3\). We make this prediction in 30 rounds and compute the average and standard deviation of the success rates, shown in Table 2. In summary we have shown with this trace analysis that there is an opportunity to predict outlier users with a high negative impact on the overall health of the network, and that STA measurements can predict this impact. ## 4 Model Each LiFi AP is enhanced with a pan-tilt unit that can orientate the AP to cover different areas. We assume a finite number \(C_{l}\) of spatial configurations for LiFi AP \(l\), with spatial configuration \(i\) corresponding to _user area_ (the service area for that LiFi AP configuration \(i\)) \(u_{i,l}\), \(i=1,\ldots,C_{l}\). For example, in an office environment a user area could correspond to a cubicle. A LiFi AP _coverage area_ is the union of all the user areas that can be served by the AP. We will assume that a user area can be served by at most one LiFi AP, i.e., the LiFi coverage areas do not overlap. It is worth noting that this decomposition is not an exact geometric representation of the environment. Assuming that it is possible to collect a set of measurements for each device in the network at regular intervals on both Wi-Fi and LiFi, we denote by \(\mathbf{w}_{n}=[w_{n,1}(t),\ldots,w_{n,K_{\mathrm{Wi}}}(t)]\) and \(\mathbf{l}_{n}=[l_{n,1}(t),\ldots,l_{n,K_{\mathrm{Li}}}(t)]\) respectively the set of \(K_{\mathrm{Wi}}\) Wi-Fi measurements and the set of \(K_{\mathrm{Li}}\) LiFi measurements for device \(n\) at time \(t\). For each user area \(u\in\{u_{1,l},u_{2,l},\ldots,u_{C_{l},l}\}\) of LiFi AP \(l\), we can aggregate2 the Wi-Fi and LiFi measurements of all the devices in that user area and denote the aggregated measurements as \(\mathbf{w}_{u}^{(a)}\) and \(\mathbf{l}_{u}^{(a)}\) respectively. Footnote 2: The aggregation of measurements may be implemented in different ways, e.g. using the sum, mean, max, or min. An important point to note is that we do not want "ping-ponging," i.e., frequent transfer of an STA between Wi-Fi and LiFi, or frequent switching of the STA served by a given LiFi AP. This \begin{table} \begin{tabular}{|l|l|l|} \hline **Data Set** & **Linear Regression** & **Random** \\ \hline PSU CS & \(.70\pm.04\) & \(.54\pm.05\) \\ \hline PSU UG & \(.72\pm.03\) & \(.55\pm.05\) \\ \hline PSU Library & \(.72\pm.04\) & \(.55\pm.04\) \\ \hline PSU Powells & \(.67\pm.04\) & \(.52\pm.04\) \\ \hline ENT & \(.74\pm.04\) & \(.58\pm.04\) \\ \hline \end{tabular} \end{table} Table 2: NIS Class Prediction Mean Success Rate \(\pm 1.96\sigma\). is similar to the ping-ponging problem of handover in a mobile cellular wireless network, and the remedy is the same, namely the use of a hysteresis factor to retain the association of an STA with a LiFi AP for a certain interval of time after the STA has been offloaded to LiFi. This hysteresis may be implemented in several ways; in the present work, we implement it by "boosting" the \(\mathbf{l}_{u}^{(a)}\) measurements by the hysteresis factor \(h_{u}^{\rm LiFi}>1\), which in general may be dependent on the user area \(u\). In other words, the vector \[\mathbf{E}_{l}(t)=[\mathbf{w}_{u_{1,l}}^{(a)},h_{u_{1,l}}^{\rm LiFi}\mathbf{l}_{u_{1,l}}^{(a )},\mathbf{w}_{u_{2,l}}^{(a)},h_{u_{2,l}}^{\rm LiFi}\mathbf{l}_{u_{2,l}}^{(a)},\ldots, \mathbf{w}_{u_{C_{l},l}}^{(a)},h_{u_{C_{l},l}}^{\rm LiFi}\mathbf{l}_{u_{C_{l},l}}^{(a)}] \tag{8}\] contains the Wi-Fi and LiFi measurements of all the devices that can connect to LiFi AP \(l\), aggregated per user area. We model each LiFi AP as an autonomous agent that can decide which device(s) should be selected to be served3 by LiFi and change its orientation accordingly. Each LiFi AP makes a decision with the goal of optimizing the overall network performance \(f\), which in general could be defined as a scalar function of several KPIs in the combined network4. To do this, each LiFi AP learns a mapping between the current network state \(\mathbf{E}(t)\), the possible actions, and the overall network performance. LiFi AP \(l\) can use as network state the vector \(\mathbf{E}_{l}(t)\), i.e. consider only the measurements corresponding to all the user areas in its coverage area, or it can include also additional measurements, for example the measurement vectors \(\mathbf{E}_{j}(t)\) of nearby LiFi APs. Footnote 3: Multiple devices in the same user area could be selected or a single device could be targeted. In the latter case, a one-to-one mapping between user areas and devices need to be established. Footnote 4: For example, the user throughput, retransmissions, collisions, or air utilization can all be incorporated in the definition of \(f\). We can formulate the problem as a _Contextual Multi-Armed Bandit_ (CMAB), where the network state \(\mathbf{E}(t)\) is the context. The mapping between context, actions, and the resulting network performance may take different forms. It may be modeled as a function from the (context, action) pair to the resulting network performance. Another option is to model the mapping from context to the network performance of each action. In both cases, the mapping can be learned by a neural network. ## 5 Simulation In the trace analysis we were able to show the opportunity and ability to predict the negative impact score of users to select candidates for offloading to improve the overall network performance for all users. ### Layout and geometry Since the analysis was done on static traces, we have no way of measuring the actual impact of offloading these users. Therefore, we now simulate an offloading scenario using NS3, where we model a network with only a single Wi-Fi AP containing eight users: (i) a cluster of four STAs that are candidates to be offloaded, and (ii) another cluster of four STAs that serve as background users impacting the performance. In this simulation setup, there is no LiFi AP; instead, the effect of offloading an STA from Wi-Fi to LiFi is simulated by simply dropping that STA from the simulation, thereby allowing us to simulate only the Wi-Fi network on each NS3 run. The four STAs that are candidates for offloading to LiFi are closer to the (Wi-Fi) AP and have _average throughput_ 20% lower than the four other STAs that serve as background users. Although it may be counter-intuitive to have the candidate users have _lower_ average throughput than the background users, it has the effect that any outlier (in terms of traffic) amongst the candidate users therefore has an outsize impact on the system KPIs when it drops out. At the same time, this setup allows for "headroom" for the traffic at this outlier candidate, ensuring that with high probability, even the outlier traffic does not hit (and get capped at) the maximum throughput possible in the system. ### Workload trace generation We replay workloads with a generative adversarial network (GAN)-based synthetic workload generation tool, MASS [25], trained on a public data set from Telefonica [26]. On each NS3 run, MASS is used to generate a 100-epoch long trace for each of our 8 STAs. The traces are split into multiple sections, each section of duration 10 epochs, which we call a _period_. The upload and download rate can vary for each STA from one period to the next. In other words, each trace may be seen as a _time series_ of (upload, download) rate pairs for a particular STA, with as many such pairs as there are periods in the trace. Moreover, the upload and/or download rate for that STA can only change at the boundary of a period. For each STA, each epoch of each period of the trace constitutes a MASS-generated workload with the appropriate (uplink/downlink) rate for that period, replayed for 2 s. Since the client iPerf processes (on the STAs) and the LiFi offload controller process (on the LiFi AP) are not synchronized, a STA may be selected for offload to LiFi at any time, and the offloading will take effect from the next epoch in the trace of that STA. The short 2-second duration of each epoch therefore ensures that the maximum delay in offloading an STA to LiFi is 2 s. The maximum requested TCP download and upload rates are both set to 100 Mbps, and a 20 MHz wide 5 GHz 802.11ac channel is used for all STAs and the AP. Figure 2: Simulation Collision Time Series ### Statistics collected We then run five simulations (in NS3) for each 10-epoch period in the trace, corresponding to the scenarios where no candidate STA is offloaded, and each of the four candidate STAs is offloaded respectively. For each of these five simulations in each period, we collect the pcap Wi-Fi packet trace between each STA and the AP. This allows us to capture all upload and download packets, as well as retries and other statistics such as effective send and receive rates5, and packet inter-arrival times. We call the resulting statistics our _measurements_. Footnote 5: The effective upload and download rates are obtained by smoothing using an average over 5 periods. For each NS3 run, we also collect a global statistic of average retry probability across all packets sent in the system. We call this statistic the _collision_ KPI, and it is our measure of overall network performance. The goal is to reduce this collision KPI by selecting the best STA to be offloaded (among the four candidates). Fig. 2 shows the collision KPI for all five decisions (i.e., no STA offloaded or one of the possible four candidate STAs offloaded) during one simulation. It is clear that it is not optimal to select just one of these STAs and offload it alone throughout the entire duration of the trace. We perform 50 NS3 runs, i.e., we repeat the steps of generating a 100-epoch (= 10-period) trace, running simulations (five per period, as described above) and collecting statistics, 50 times. ### Predicting offload candidates To evaluate different offloading predictors, we first train the prediction models with the STA measurements across all 8 STAs based on which state the system is in, i.e. no STA offloaded or, one of the four candidate STAs offloaded. During training, the prediction model output is set to the collision KPI in the next period for each of these five possible states. This setup allows us to use the model to predict the KPI in the next period, given any possible state in the current period. #### 5.4.1 Prediction model training and inference The prediction models are trained with the data from the first 8 periods of the (10-period-long) traces from the 50 NS3 runs (with five simulations per period, one per state), resulting in a training set with \(8\times 5\times 50=2000\) samples (_#periods \(\times\) #states \(\times\) #simulations_). The data of the remaining 2 periods of each trace are used for testing, for a total of \(2\times 50=100\) predictions (_#periods \(\times\) #traces_). These predictions correspond to the selection of a user for offload in each of the final 2 periods of each of the 50 traces. When predicting the best user to offload in the next period, we always assume that all four candidate STAs are present in the Wi-Fi network, i.e., the present state of the system is one where no STA has been offloaded. It should be noted that each of the 100 test predictions requires four predicted _collision_ KPIs to be computed, one for each of the candidate STAs. The best user to offload is chosen as the STA that, when offloaded, results in the minimum collision KPI as predicted by the prediction model. #### 5.4.2 Prediction model evaluation From our NS3 runs, we observed that the maximum collision probability reduction for a genie-aided (clairvoyant) predictor, which always predicts the STA that minimizes collision KPI upon being offloaded, is about \(16\)%. The _collision improvement score_, \(\mathrm{cis}\), of a predictor \(p\) is defined as: \[\mathrm{cis}=\frac{P_{0}^{c}-P_{p}^{c}}{P_{0}^{c}-P_{\mathrm{cv}}^{c}} \tag{9}\] where \(P_{0}^{c}\) is the collision probability when no STA is offloaded, \(P_{p}^{c}\) is the collision probability when the STA picked by \(p\) is offloaded6, and \(P_{\mathrm{cv}}^{c}\) is the collision probability for the clairvoyant predictor. A clairvoyant predictor would hence have a collision improvement score of \(100\)%. Footnote 6: Actually, \(p\) predicts the collision KPIs when each of the four STAs is offloaded, so \(P_{p}^{c}\) is the smallest of these four collision KPIs predicted by \(p\). The prediction accuracy of predictor \(p\) simply measures how many times it picked the correct STA to offload, i.e., how many times \(P_{p}^{c}=P_{\mathrm{cv}}^{c}\). For a baseline, we also define the so-called _random predictor_, which does not attempt to predict the collision KPI upon the offloading of any of the four STAs, but instead simply directly chooses one of the four STAs at random (with the same probability \(1/4\) for each STA) for offloading. Since this randomly-selected STA is expected to match the STA chosen by the clairvoyant predictor only 25% of the time, this random predictor should be expected to have a prediction accuracy of 25%. #### 5.4.3 Comparing predictors We compare the performance of the following predictors, listed below in descending order of sophistication: * : A Neural Network model; * : A Linear Regression model; * : A naive model that predicts that the collision KPIs corresponding to the offloading of each of the four candidate STAs in the _next_ period simply equal the corresponding observed collision KPIs from the simulations in the _current_ period; * : The baseline random predictor described above, which simply selects one of the four candidate STAs at random with the same probability of \(1/4\) for offloading. Note that even the random predictor is expected to improve the collision rate on average, as offloading any user should reduce the traffic on the channel under contention. Table 3 shows the comparison between the different predictors. While the prediction accuracy of all models except RAND are similar, NN outperforms all the other models in terms of collision improvement score. Indeed, the prediction accuracy is not necessarily a good indicator of a predictor's performance. For example, RAND has the lowest prediction accuracy but significantly outperforms COL in terms of reducing the collisions in the network, as measured by cis. This happens because often, even if a predictor fails to predict the best STA to offload, the selected user is still impacting the KPI significantly. For example, if we compare the NN and LR performance, we notice that their prediction accuracy is quite close. However, NN selects STAs whose offloading improves the network performance more than that of the STAs selected by LR. In summary, we see an 82 percent improvement in collision probability when using the NN offloader compared to only 31 percent with the random model. ## 6 Experimental Setup The purpose of our experimental system is to verify and reproduce the simulation results with real hardware radios and optical links in a lab setting that mimics an Enterprise Wi-Fi offloading use case. A key difference between simulations and experiments is also the introduction of a LiFi beam steering mechanism designed to reuse a LiFi antenna over many clients positioned in a greater area, in our case a 4-person cubicle grid. One LiFi/Wi-Fi multi-homed wireless station is placed in each cubicle. The antenna is directed in such a way that only one wireless station or client gets dedicated LiFi connectivity at any time. All clients always have Wi-Fi to fall back on, but continuously probe and connect to LiFi if it is available. Our goal here is to improve the Wi-Fi KPI by picking \begin{table} \begin{tabular}{|l|l|l|} \hline **Predictor** & **Prediction Accuracy** & **cis** \\ \hline NN & \(.51\pm.05\) & \(.82\pm.1\) \\ \hline LR & \(.49\pm.05\) & \(.72\pm.13\) \\ \hline COL & \(.50\pm.05\) & \(.31\pm.28\) \\ \hline RAND & \(.24\pm.04\) & \(.47\pm.29\) \\ \hline \end{tabular} \end{table} Table 3: Simulation Prediction Accuracy and Collision Improvement Scores (\(\pm\mathit{SE}\)). Figure 4: Communication Paths and Protocols. Figure 3: Physical Network Links. It should be noted that only one of the LiFi links (shown in green) is present at any given time. Figure 5: LiFi Antenna mounted on Pan-Tilt servos and connected to RaspberryPi in ceiling. Figure 6: NUC with LiFi receptor dongle. Figure 8: Experiment Setup, 3D View. All distances denote inches. Figure 7: Experiment Setup. All distances denote shortest path from sender to receiver in inches. the best client to offload to LiFi at any given time, based on Wi-Fi and LiFi measurements and KPI predictions. We use LiFi hardware from Oledcomm (LiFiMax), and Wi-Fi hardware from Aruba (550 Series). To monitor the LiFi traffic we also do switch port monitoring with a Cisco switch connected to the LiFi controller, which in turn receives all LiFi traffic from the LiFi antenna. We developed an antenna steering REST API service on a Raspberry Pi, which is connected via GPIO pins to servos controlling the angle of the LiFi antenna beams with pulse-width modulation signals. For mechanical antenna control, we use a Lynxmotion Pan and Tilt Kit with two HiTec HS-422 180 degree servo motors. We built central offloading control and monitoring services in Python that implement the NN prediction model using Tensorlow and collect KPI and measurements from the Wi-Fi and LiFi traffic in the network. The offloading control, LiFi/Wi-Fi multi-homed clients, and the switch monitor are all deployed on Intel NUCs running Ubuntu 22.04 LTS with the 5.15 kernel. The physical link architecture is depicted in Fig. 3 and the communication architecture in Fig. 4. Photos of tbe LiFi AP, mounted in the ceiling, and one of the four NUC LiFi clients are shown in Fig. 5 and Fig. 6. The LiFi client and antenna position layout is visualized in Figs. 7 and 8. The four LiFi beam positions are fixed and calibrated before the experiment and the LiFi client dongles remain stationary. To simplify communication over different network interfaces, LiFi and Wi-Fi for experiment traffic and Wired for experiment orchestration, we created two subnets so that each interface gets an IP in a different subnet. ### Traffic Replay The four wireless stations run iPerf3 clients against two sets of iPerf3 servers, one in the LiFi subnet and one in the Wi-Fi subnet7. One client-server pair is used for upload and one for download, for a total of 8 streams per iPerf server set. Footnote 7: Owing to the design of iPerf, a single iPerf server cannot handle both upload and download sessions; instead, we need a “set” of two iPerf servers, one for upload and one for download, for each subnet. It is important to ensure that the load on the CPU of the server due to iPerf does not become the bottleneck on network performance. In the LiFi case only at most two streams from a single station will be active, so the load due to iPerf is light, and the LiFi iPerf set is deployed on a NUC. On the other hand, the Wi-Fi iPerf set may in some runs serve all four STAs concurrently, so its load on the server CPU may be high. So we run the Wi-Fi iPerf set on a more powerful CPU (MacBook Air). Figure 9: Workload trace used for the initial exploration stage (first quarter) and the predictions (last three quarters). Figure 10: At the beginning of each experiment samples \(\{s_{1},s_{2},\ldots,s_{T_{e}}\}\) are collected and used to train the model at time \(T_{e}\). This model is used to decide the LiFi antenna orientation that will be used in the next \(T_{s}\) time units, where a time unit is the time required to collect a sample. The model is updated at time \(T_{e}+T_{s}\), using the most recent \(T_{s}\) samples. Traffic is generated with a specific bitrate based on trace files as in the simulation. The traces (one per station) comprise a sequence of 100 epochs8, and are replayed in each experiment condition. Footnote 8: A 5-period smoothing is applied, as in the simulations. For each STA, each epoch of each period of the trace constitutes a MASS-generated workload with the appropriate (uplink/downlink) rate for that period, replayed for 10 min. The STAs use a wired connection (in order not to load the wireless network) to check for LiFi availability (with a curl to the LiFi server) every 10 s. Since the client processes (on the STAs) and the LiFi offload controller process (on the LiFi AP) are not synchronized, an STA may be selected for offload to LiFi at any time, and the offloading will take effect from the next epoch in the trace of that STA. Forcing the STAs to check with the LiFi offload controller every 10 s ensures that the maximum delay in offloading an STA to LiFi is 10s. Fig. 9 shows the total -- upload and download -- workload per epoch of the four STAs. ### Offloading Prediction The offloading control service monitors Wi-Fi and LiFi measurements as well as Wi-Fi KPI every 30-40s. Each experiment takes almost 17 hours and corresponds to about 1500 collected samples, where each sample consists of Wi-Fi and LiFi measurements and the corresponding KPI. We run 5 independent experiments for each of the three prediction models -- NN, LR, and RAND -- for a total experiment run time of about 250h. Each experiment starts with a initial exploration during which the LiFi antenna positions are selected in round-robin and \(T_{e}=400\) samples (out of about 1500 total) are collected. This data is used to train the initial prediction model. This means that about the first quarter of the traces shown in Fig. 9 are used for this initial stage. During the entire experiment, including the initial exploration, switches between LiFi positions are only allowed every \(T_{s}=4\) samples, or about every 2 min. After the initial exploration, a prediction model is trained and used to make decisions on which station should have the LiFi antenna directed to it at any given time. Before making any new decision, the prediction model is updated with the \(T_{s}=4\) most recent samples. Fig. 10 depicts this process. Experiment ### Defining the prediction models As explained in Sec. 4, we formulate the problem of deciding which device should be selected for LiFi as a CMAB. In this section we evaluate different models that approximate the mapping between the (context, action) pair in the CMAB and the resulting network performance. First, we extract air utilization (air) as the KPI from the Aruba Wi-Fi AP on a scale from 0 to 100. Since it is more natural to consider a higher KPI better we invert these measurements as follows: \[\mathrm{kpi}=100-\mathrm{air} \tag{10}\] The goal of the proposed system is to optimize future KPI in the overall hybrid Wi-Fi/LiFi network. This is done in a two-step process: first, given the current network state \(\mathbf{E}(t)\), a future KPI is estimated for each possible action (position), and then the actions are sorted to retrieve the top position. We pre-process the Wi-Fi and LiFi measurements and extract features that are then used as the context \(\mathbf{E}(t)\) of the CMAB. The most significant difference between the inputs to the models in simulation and in the experiment is the inclusion of LiFi measurements in the latter. The models trained and tested in simulation did not include LiFi measurements since no LiFi module is currently available in NS3. The other difference is that in the case of the experiment the raw Wi-Fi and LiFi measurements are processed to extract features (the impact vector explained below) to be able to train the models with a small number of samples. ### Inputs to the prediction models The input to the prediction model is the context of the CMAB, which from Sec. 4 is defined by (8) \[\mathbf{E}(t)=[\mathbf{w}_{u_{1}}^{(a)},h^{\mathrm{LiFi}}\mathbf{l}_{u_{1}}^{(a)},\mathbf{w}_{ u_{2}}^{(a)},h^{\mathrm{LiFi}}\mathbf{l}_{u_{2}}^{(a)},\ldots,\mathbf{w}_{u_{C}}^{(a)},h^{ \mathrm{LiFi}}\mathbf{l}_{u_{C}}^{(a)}],\] where for brevity of notation, we have dropped the LiFi AP index \(l\) and assumed the same hysteresis factor9\(h^{\mathrm{LiFi}}\) for all \(C\) user areas \(u_{1},\ldots,u_{C}\). In our experimental setup, \(C=4\), corresponding to the four possible positions that the LiFi antenna can be pointed to. In addition to \(\mathbf{E}(t)\), our context for the CMAB also includes the present position that the LiFi antenna is pointed to.10 Footnote 10: This is represented by a 4-element one-hot indicator vector with a \(1\) in the position that the antenna is pointed to. The indicator vector allows us to probe for KPI estimates for different LiFi antenna positions for a given context. Since exactly one STA (among four) can be served by the LiFi antenna in each of its four positions, we further simplify \(\mathbf{E}(t)\) by replacing the Wi-Fi and LiFi measurements in the user areas \(u_{1},\ldots,u_{C}\) by the corresponding quantities computed at the four STAs themselves. The Wi-Fi measurements for the four STAs are their relative normalized traffic loads on the downlink and uplink respectively, as defined through the following Softmax operation for each direction (downlink or uplink) separately: \[\mathbf{w}^{\rm dir}(t)={\rm softmax}\left(\frac{N_{1}^{\rm dir}(t)-N_{1,\rm min}^ {\rm dir}}{N_{1,\rm max}^{\rm dir}-N_{1,\rm max}^{\rm dir}},\ldots,\frac{N_{4} ^{\rm dir}(t)-N_{4,\rm min}^{\rm dir}}{N_{4,\rm max}^{\rm dir}-N_{4,\rm max}^ {\rm dir}}\right), \tag{11}\] where \({\rm dir}\) denotes the direction (uplink or downlink), and \(N_{n,\rm min}^{\rm dir}\), \(N_{n,\rm max}^{\rm dir}\), and \(N_{n}^{\rm dir}(t)\) are respectively the minimum, maximum, and instantaneous (at time \(t\)) traffic load generated by STA \(n\) in direction \({\rm dir}\), for each \(n=1,\ldots,4\). The softmax operation is designed to emphasize outliers, as the goal is to select the LiFi antenna position (or equivalently, the STA to be offloaded) yielding the highest performance as measured by the collision KPI. The above vector \(\mathbf{w}^{\rm dir}(t)\) is dense (fully populated) if no STA has been offloaded to LiFi. If an STA is offloaded from Wi-Fi to LiFi, then its corresponding entry in the vector \(\mathbf{w}^{\rm dir}(t)\) is set to 0 and the corresponding entry in the similarly-defined vector of LiFi measurements \(\mathbf{l}^{\rm dir}(t)\), is calculated instead and scaled by the hysteresis factor \(h^{\rm LiFi}\). We have empirically determined that \(h^{\rm LiFi}=3.5\) gives good results not only for the experiments reported here, but also on other workloads and in different settings. ### Experimental measurements The sampled measurements are summarized in Table 4. The Wi-Fi measurements were collected with the Aruba REST API, and the LiFi measurements were collected using the Cisco switch CLI (using _show interface summary_). We note that \begin{table} \begin{tabular}{|l|l|} \hline **Wi-Fi** & frames\_in\_fps \\ & frames\_out\_fps \\ \hline **LiFi** & up\_throughput \\ & down\_throughput \\ \hline \end{tabular} \end{table} Table 4: Wi-Fi and LiFi Station Measurements. our models normalize all measurements, so the fact that the units of different measurements are different has no impact on the predictions, and thus were chosen based on what was easiest to measure. ### Evaluating the prediction models The performance of the three predictor models is evaluated starting from the end of the initial exploration stage to the end of the traces. Let this time duration span \(T\) samples, which we label \(k=1,2,\ldots,T\). As previously noted, each of the three models is evaluated on \(E=5\) independent experiments. Let us denote by \(f_{e,m}(k)\), \(k=1,\ldots,T\) the KPI values (airtime, in our case) observed over these samples in the experiment \(e\) with prediction model \(m\). For each model \(m\), we define its normalized performance over these samples as: \[r_{e,m}(k)=\frac{f_{e,m}(k)-\min_{m}f_{e,m}(k)}{\max_{m}f_{e,m}(k)-\min_{m}f_{e,m}(k)},\quad k=1,\ldots,T. \tag{12}\] Then, we compute the running-average normalized performance of \(m\) over these samples as: \[[\overline{r}_{e,m}(1),\ldots,\overline{r}_{e,m}(T)], \tag{13}\] where \(\overline{r}_{e,m}(1)=r_{e,m}(1)\) and for \(k=2,\ldots,T\), the \(k\)th element in the above is the average of the previous \(k-1\) normalized performance values: \[\overline{r}_{e,m}(k)=\frac{1}{k-1}\sum_{j=1}^{k-1}\overline{r}_{e,m}(j),\quad k =2,\ldots,T. \tag{14}\] Finally, we compute the running-average normalized performance of each model \(m\) averaged across the \(E=5\) experiments as: \[\overline{r}_{m}(k)=(1/E)\sum_{e=1}^{E}\overline{r}_{e,m}(k) \tag{15}\] Fig. 11 plots \(\overline{r}_{m}(k)\) versus \(k\) for \(k\geq 50\), for each of the three models \(m\). Table 5 summarizes the results for the different models \(150\) samples after the end of the initial exploration, which is 400 samples as reported in Sec. 6.2. We note that the improvement of NN over LR is reduced the further away from training the predictions are, indicating that re-calibration or random exploration could be motivated. In summary, we have seen that the NN model can improve the airtime utilization KPI compared to the random model with 61 percentage points (from 19% to 80%), bring the collision probability, Figure 11: Plot of average normalized KPI performance \(\overline{r}_{m}(\cdot)\) for each model \(m\). despite not being part of the optimization, from 19% to 10%, while offloading almost a factor of 10 more traffic over the LiFi link (.16 to 1.4), in the 150 sample prediction case. ## 8 Discussion and Conclusion Although our lab experiment made use of an iPerf client that is capable of detecting whether LiFi is present one cannot expect all applications to seamlessly take advantage of a new LiFi network popping up. To this end we have experimented with a number of other tools to help take advantage of both networks. MPTCP is the most well-known technique that allows automatic subpath failover and load balancing to applications using TCP sockets. We noted that the overhead is quite large so if a subpath is very limited in throughput it is actually slower to aggregate over the paths than to use both paths. So we developed a LiFi monitor that uses a MPTCP path manager to configure the subpaths accordingly if the LiFi (or Wi-Fi) connection gets to poor. MPHTTP is very similar to MPTCP with the difference that only the client needs to be modified to support the transmissions over multiple network interfaces. The idea here is to make use of the HTTP Range query header and schedule ranges across Wi-Fi and LiFi based on the performance of each at any given time. MPRTP is a new protocol we developed to allow WebRTC traffic to be load balanced over LiFi and Wi-Fi and to instantaneously switch traffic from one or the other without dropping any frames in a call. Finally, we have also experimented with network priority on MacOS that allows you to dynamically change which network interface should have highest priority. When the LiFi antenna is directed towards you we can bump the LiFi interface above the Wi-Fi interface to direct new applications to the LiFi network without interrupting existing applications. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model** & **KPI** (airtime) & **CP** (collision probability) & **LiFi** \\ \hline NN & \(.80\) & \(.10\) & \(1.4\) \\ LR & \(.54\) & \(.13\) & \(1.1\) \\ RAND & \(.19\) & \(.19\) & \(.16\) \\ Optimal & \(1\) & \(.11\) & \(1\) \\ Worst & \(0\) & \(.19\) & \(0.41\) \\ \hline \end{tabular} \end{table} Table 5: Summary of Experiment Results after 550 samples from the beginning of the experiment. **CP** is collision probability; **LiFi** is LiFi traffic multiplier compared to the optimal condition; the KPI is the average airtime across the previous 150 samples, i.e. since the end of the initial exploration stage of 400 samples, and across all experiments \(e\in\{1,2,\ldots,5\}\). All these use cases actively probe to see if the LiFi network is usable with either pings or various curl connection timeouts. That way they will be able to quickly take advantage of a LiFi network that comes into range. Similarly they will be able to quickly fail over to Wi-Fi when there is some temporary obstruction of LoS or the antenna is directed away from the user. The G.vlc LiFi specification implemented in chipsets today does not support seamless handover with Wi-Fi, but the emerging 802.11bb specification defines handover support. When that specification is implemented in chipsets, approaches like ours could benefit all applications seamlessly. We note that our MASS synthetic workload generator is based on a GAN architecture which in turn relies on a couple of DNNs, but since it its trained independently from the NN used to predict KPIs from workloads it does not taint the results. In conclusion, we have seen in simulations and reproduced in experiments that a NN model is a promising predictor of negative impact as a basis for offloading decisions, and that the impact on network KPI can be significant even when moving just a single user. We also note that our approach is general enough to apply to other offload use cases. For example, one could imagine using it to select the best client to steer to another band within a single technology too, such as across different Wi-Fi spectrum regions. ## Acknowledgments It is a pleasure to acknowledge discussions with Bernardo A. Huberman and Lili Hervieu of CableLabs. We also thank Lili for a careful reading and critique of an earlier draft, and helping with the Wi-Fi experiment setup.
2310.17902
CPIA Dataset: A Comprehensive Pathological Image Analysis Dataset for Self-supervised Learning Pre-training
Pathological image analysis is a crucial field in computer-aided diagnosis, where deep learning is widely applied. Transfer learning using pre-trained models initialized on natural images has effectively improved the downstream pathological performance. However, the lack of sophisticated domain-specific pathological initialization hinders their potential. Self-supervised learning (SSL) enables pre-training without sample-level labels, which has great potential to overcome the challenge of expensive annotations. Thus, studies focusing on pathological SSL pre-training call for a comprehensive and standardized dataset, similar to the ImageNet in computer vision. This paper presents the comprehensive pathological image analysis (CPIA) dataset, a large-scale SSL pre-training dataset combining 103 open-source datasets with extensive standardization. The CPIA dataset contains 21,427,877 standardized images, covering over 48 organs/tissues and about 100 kinds of diseases, which includes two main data types: whole slide images (WSIs) and characteristic regions of interest (ROIs). A four-scale WSI standardization process is proposed based on the uniform resolution in microns per pixel (MPP), while the ROIs are divided into three scales artificially. This multi-scale dataset is built with the diagnosis habits under the supervision of experienced senior pathologists. The CPIA dataset facilitates a comprehensive pathological understanding and enables pattern discovery explorations. Additionally, to launch the CPIA dataset, several state-of-the-art (SOTA) baselines of SSL pre-training and downstream evaluation are specially conducted. The CPIA dataset along with baselines is available at https://github.com/zhanglab2021/CPIA_Dataset.
Nan Ying, Yanli Lei, Tianyi Zhang, Shangqing Lyu, Chunhui Li, Sicheng Chen, Zeyu Liu, Yu Zhao, Guanglei Zhang
2023-10-27T05:32:16Z
http://arxiv.org/abs/2310.17902v1
CPIA Dataset: A Comprehensive Pathological Image Analysis Dataset for Self-supervised Learning Pre-training ###### Abstract Pathological image analysis is a crucial field in computer-aided diagnosis, where deep learning is widely applied. Transfer learning using pre-trained models initialized on natural images has effectively improved the downstream pathological performance. However, the lack of sophisticated domain-specific pathological initialization hinders their potential. Self-supervised learning (SSL) enables pre-training without sample-level labels, which has great potential to overcome the challenge of expensive annotations. Thus, studies focusing on pathological SSL pre-training call for a comprehensive and standardized dataset, similar to the ImageNet in computer vision. This paper presents the comprehensive pathological image analysis (CPIA) dataset, a large-scale SSL pre-training dataset combining 103 open-source datasets with extensive standardization. The CPIA dataset contains 21,427,877 standardized images, covering over 48 organs/tissues and about 100 kinds of diseases, which includes two main data types: whole slide images (WSIs) and characteristic regions of interest (ROIs). A four-scale WSI standardization process is proposed based on the uniform resolution in microns per pixel (MPP), while the ROIs are divided into three scales artificially. This multi-scale dataset is built with the diagnosis habits under the supervision of experienced senior pathologists. The CPIA dataset facilitates a comprehensive pathological understanding and enables pattern discovery explorations. Additionally, to launch the CPIA dataset, several state-of-the-art (SOTA) baselines of SSL pre-training and downstream evaluation are specially conducted. The CPIA dataset along with baselines is available at [https://github.com/zhanglab2021/CPIA_Dataset](https://github.com/zhanglab2021/CPIA_Dataset). Pathological images, Pre-training, Self-supervised learning, Large-scale dataset. ## I Introduction Cancer has emerged as a significant threat to human health, with approximately 1,958,310 new cases and 609,820 fatalities anticipated in 2023 according to the American Cancer Society [1]. In clinical practice, pathological examination serves as the gold standard for diagnoses. Moreover, clinical records and physical examination alone may be inadequate for accurate diagnosis of certain non-communicable diseases [2]. Therefore, pathological examination informs treatment guidelines and prognosis assessment [2]. A 2016 interview-based study in the USA and Germany revealed that 66% of clinical decisions were dependent on the results of pathology and laboratory medicine [3]. However, exclusively depending on human evaluation retains limitations and dilemmas. Firstly, these diagnoses are primarily based on pattern recognition within images by pathologists and the interpretation of such patterns in a wider range of patients [4]. Therefore, the reproducibility of such interpretations among different pathologists is unsatisfactory [4]. Secondly, errors and misdiagnoses are unavoidable in pathological examinations [5]. Finally, an increasing shortage of pathologists persists in numerous countries, leading to overburdened workloads and reduced efficiency. Such situations are especially severe in developing regions [6]. Thus, life-saving diagnosis improvement needs more automatic and accessible solutions. In recent years, artificial intelligence (AI) has revolutionized various fields, such as computer-aided diagnosis in pathological image analysis aiming to address the aforementioned challenges effectively [7][8]. Deep learning has been widely applied to pathological tasks such as distinguishing benign and malignant tumors, cancer grading, nuclear and glandular segmentation, and mitotic evaluation [9-13]. The traditional deep learning models depend on massive labeled data for the aforementioned tasks. However, the scarcity of such labels in the pathological field lefts a significant hindrance to their effectiveness [8]. By initializing models with pre-trained weights, transfer learning can provide pre-knowledge to alleviate such dilemmas. Pre-training can improve model recognition and downstream task performance [14][15]. In this process, the pre-training datasets play a crucial role. For example, natural image analysis has experienced unprecedented growth with the support of large-scale public pre-training datasets like ImageNet and COCO [16][17]. Such datasets for pre-training are lacking in the field of pathological image analysis, forcing current researchers to initialize models based on various natural images. In the pre-training stage, the model learns a general feature representation from natural images, which is then utilized for downstream pathological image analysis tasks. Although improved results have been achieved to a certain extent, the huge gap between natural and pathological images has become a stumbling block to further researches [18-20]. Therefore, a comprehensive pre-training dataset based on pathological images is needed. The traditional supervised pre-training relies on large amounts of labeled data. However, the annotations of pathological images are expensive, which require significant workloads of experts to analyze pathological images. The scarcity of labeled pathological data conflicts with the need of traditional supervised pre-training. As a trending pre-training technique, self-supervised learning (SSL) is a form of unsupervised learning where the data provides the supervision, unsealing the field from limited annotations [21]. Specifically, a pretext task is defined using unlabeled pre-training images to train models before supervised downstream task training, thereby improving model robustness and reducing data bias disturbance [22]. After learning the general features of upstream data, the models are transferred to particular downstream tasks. Beyond improvements observed in downstream tasks, the SSL pre-training process also aligns with the supervision-lack characteristic of pathological images [14][15]. Although SSL enables label-free general encoding, the pre-training process highly relies on the comprehensive and standardized datasets. However, there are currently two major challenges in the construction of SSL pathological image analysis datasets. Firstly, the existing datasets are difficult to meet the requirements of comprehensiveness and diversity. Secondly, the large-scale pathological pre-training dataset demands a standardization processing workflow, while the diverse features and complex sampling conditions among the samples lead to great challenges. Regarding the first challenge, most existing datasets are limited to specific diseases with small volumes, which is insufficient to support general pathological knowledge learning [8]. For example, Camelyon17 is a dataset for detecting and classifying breast cancer metastasis in lymph nodes with 1,000 images, and SICAPv2 is a prostate histology dataset with both annotations of global Gleason scores and patch-level Gleason grades with 18,783 images [23][24]. Thus, a comprehensive pathological SSL pre-training dataset is expected, similar to the ImageNet dataset of natural images, which includes approximately 14,000,000 images [16]. Furthermore, to lay a solid foundation and advance the development of pathological image analysis, the dataset needs to cover a wide range of diseases and organs/tissues [19]. Additionally, the larger the SSL pre-training data magnitude is, the more significant the model performance would improve. Accordingly, this paper proposes a comprehensive pathological image analysis (CPIA) dataset for SSL pre-training to achieve the comprehensive and large-scale requirements. The CPIA dataset comprises 21,427,877 images, encompassing over 48 organs/tissues and approximately 100 disease types. It primarily includes two pathological sample types: whole slide images (WSIs) without annotation and characteristic regions of interest (ROIs) that have been aligned after clearing, standardization, and task-clustering. To the best of our knowledge, the comprehensiveness and enormity of the CPIA dataset surpass all existing datasets for pathological image analysis. The wide-covering histopathology and cytopathology data enable transfer learning to acquire comprehensive pathological knowledge, rather than being restricted to a particular organ or disease. Therefore, the CPIA dataset solves the first challenge in the construction of SSL pathological image analysis datasets. Regarding the second challenge, this paper proposes a workflow of strict aligning and systematic composition. There are 103 published pathological datasets used in constructing the CPIA dataset. And these dataset variations in task types, image quality, and construction frameworks lead to great challenges. To solve these challenges, we first transform WSIs to the same scale with a uniform area of tissue slides based on a unified micron per pixel (MPP). Moreover, we combine the diagnosis habits of pathologists with dataset construction by designing subsets of different scales. Lastly, the different ROI public datasets are grouped with the corresponding scale subsets with the supervision of experienced senior pathologists. In previous studies, numerous datasets are limited to a single predefined scale tailored for a specific task [25][26]. Neglecting this diverse information available at different scales leads to information loss, while the CPIA dataset quantitatively incorporates multi-scale images by building different scale subsets [27][28]. In fact, pathological images at different scales contain varying information, collectively determining sample properties and individually corresponding to different downstream task types [8]. For the first time in SSL, we integrate the diagnosis habits of pathologists into dataset construction, producing a dataset with better clinical relevance. Additionally, the comprehensive coverage of the CPIA dataset facilitates the full and objective evaluations of model performance. For the efficiency and convenience of evaluation, this paper also introduces the CPIA-Mini dataset for SSL pre-training, along with several independent downstream evaluation datasets. Specifically, the CPIA-Mini dataset is a lightweight and categorically balanced CPIA fraction, which includes 3,383,970 images. It is non-overlapping to the validation and test subsets of downstream evaluation datasets. This feature facilitates the exploration of the performance of various state-of-the-art (SOTA) SSL pre-training baselines. The baseline experiments are reported based on the CPIA-Mini dataset. In summary, the contribution can be concluded into the following five aspects: * Introduction of the CPIA dataset for SSL pre-training, characterized by its large-scale and wide-covering, including 21,427,877 images, over 48 organs/tissues, and approximately 100 disease types. * Systematic data processing and strict standardization of 103 sub-datasets. A pathological image processing strategy is proposed to assure homogeneity across images at the same scale, based on unified MPP. * Inclusion of multi-scale images in the CPIA dataset, addressing the varying information present at different scales by pathologists. The CPIA dataset incorporates the diagnosis habits of pathologists for the first time to such an extent, resulting in a more clinically relevant dataset. * Introduction of several independent pathology datasets and the CPIA-Mini dataset, enabling the researchers to explore the performance of various SSL pre-training methods. * Evaluations with several SOTA SSL pre-training baselines and downstream fine-tuning, offering a solid foundation for future research in the field. ## 2 CPIA Dataset The CPIA dataset is a diverse and comprehensive collection of pathological images, serving as a valuable resource for deep learning researches. In this section, we will introduce the CPIA dataset from four aspects, including guiding principles, data assembly, data processing, and overview of the CPIA dataset. All source datasets are either associated with the Creative Commons (CC) Licenses or encouraged sharing and derivatives, which allow us to develop the CPIA dataset based on them. The sub-datasets in the CPIA datasets use CC Licenses with the same License Elements as the source datasets. ### Guiding Principles Existing studies have introduced a series of datasets, with each encompassing limited samples under diverse criteria. However, a dataset intended for SSL pre-training should ideally embody substantial data volume and comprehensive content coverage. Adhering to this concept, the CPIA dataset integrates a vast number of existing datasets and showcases several key characteristics, including diversity, extensiveness, standardization, multi-scale, and implementation flexibility. These characteristics will be introduced below. Figure 1: The WSI processing strategy and compositions of the CPIA dataset. (a) Organ/tissue category composition and number of images in the CPIA dataset, which includes 21,427,877 standardized images, covering over 48 organs/tissues and approximately 100 kinds of diseases. The bar chart shows the composition of the CPIA-Mini dataset, and the full CPIA dataset is described in the “Overview of the CPIA Dataset” part in detail. (b) Sampling relationship between each scale of WSI sub-datasets. Specifically, 3840’3840 images represent the XL scale of WSI sub-datasets, which are directly cropped from the standardized WSIs, and the blank part is discarded. 960’960 and 384’384 images respectively represented the L and M scale of WSI sub-datasets, which are cropped from 3840’3840 images. The 96’96 images represent the S scale of WSI sub-datasets and are cropped from the 384’384 images. (c) The proposed sampling approach for the lightweight CPIA-Mini dataset, explains the 1:16:32:32 proportional relationship among XL, L, M, and S scales. #### Iii-A1 Diversity The diversity in a pre-training dataset can enhance the generalization abilities of models. Therefore, to improve the performance across a wide range of individual downstream tasks, a comprehensive SSL pre-training dataset is needed [29]. The CPIA dataset shows a wide diversity demonstrated in various aspects. Specifically, the CPIA dataset comprises 103 sub-datasets obtained from distinct acquisition equipment, showcasing a variety of organs/tissues, diseases, and staining styles. **Fig. 1(a)** illustrates the composition of the CPIA dataset, which includes over 48 distinct organs/tissues, such as the breast, stomach, kidney, lymph nodes, colon, lung, and prostate. Furthermore, the CPIA dataset encompasses various disease types, covering approximately 100 categories, including breast carcinoma, colon adenocarcinoma, lung squamous cell carcinoma, prostate carcinosarcoma, clear cell renal cell carcinoma, etc. Lastly, the diversity of staining manifests in various ways. One is the heterogeneity arising from different staining conditions across various medical centers, leading to distinct hues even under the same staining method. The other heterogeneity is the utilization of numerous staining methods such as Hematoxylin-Eosin (H&E), Giemsa, and Immunohistochemistry-IHC, etc. To the best of our knowledge, the diversity of the CPIA dataset far exceeds other datasets in the literatures. #### Iii-A2 Extensiveness The effectiveness of deep learning models is heavily influenced by the quantity and quality of training data [30]. The CPIA dataset not only has the diverse characteristic but also represents a significant advancement in large-scale pathological datasets. Specifically, the CPIA dataset contains 21,427,877 images, significantly surpassing all previous pathological datasets with magnitudes. The unprecedented scale of the CPIA dataset can not only support the training of next-generation large-scale models with comprehensive pathological knowledge but also offer a more valuable resource for future research efforts in discovering new patterns. #### Iii-A3 Standardization The processing methods and organizational procedures of the CPIA dataset have been standardized. Currently, samples from various datasets may represent different objective scales, necessitating a significant workload to align them. To address this issue, two pathological image processing strategies have been established based on two data categories. Specifically, the processing method of WSI sub-datasets aligns samples based on MPP. Meanwhile, the ROI sub-datasets are selected artificially to align the WSI sub-dataset in different scales under the supervision of experienced senior pathologists. Therefore, different sub-datasets at the same objective scale reflect the same scale characteristics. Additionally, each sub-dataset is in accordance with the established standards for image segmentation or classification. #### Iii-A4 Multi-scale As a diverse and extensive pathological dataset, the CPIA dataset necessitates a well-structured approach to be efficiently utilized. Unlike many existing datasets in literatures that only provide images at a single, predetermined scale tailored to a specific task, the CPIA dataset employs a unique approach. The CPIA dataset is designed to reflect the clinical diagnosis habits of pathologists. They typically analyze pathological images at four different objective lens magnifications, each representing distinct pathological information. This dataset emphasizes the importance of multi-scale features in pathological images, which are often overlooked in existing datasets. In the CPIA dataset, image magnification scales are selected based on such clinical diagnosis habits of pathologists. Thus, this approach produces a more representative and clinically relevant dataset for SSL pre-training deep learning models in pathological image analysis [31]. And for the first time, the dataset construction aligns with the diagnosis habits of pathologists. Consequently, sub-datasets at different scales each represent a unique diagnostic perspective, satisfying the need for multi-scale characteristics. By cooperating with experienced senior pathologists at Peking Union Medical College Hospital, we have developed a multi-scale strategy consistent with clinical practice. Thus, the CPIA possesses enhanced robustness and generalizability across numerous downstream tasks. #### Iii-A5 Implementation Flexibility The CPIA is structured to the standard Image-Folder format, with each folder corresponding to a specific sub-dataset. This architecture simplifies implementation and allows researchers to conveniently filter or split the datasets based on particular downstream tasks. Furthermore, the CPIA dataset is available in two versions: the full CPIA dataset and the CPIA-Mini dataset. The full CPIA dataset encompasses all processed data from the original datasets without any data filtering. The CPIA-Mini dataset serves as a lightweight version containing selected portions of each WSI sub-dataset, with samples resized to a uniform image size to reduce overall storage requirements. Serving as a lightweight version, the CPIA-Mini dataset contains 3,383,970 images. Meanwhile, to evaluate the SSL baselines on the proposed CPIA-Mini dataset, a series of additional independent pathological image datasets have been prepared. The validation and test subsets of these datasets are non-overlapping to the CPIA-Mini dataset. It is efficient and convenient to fairly evaluate the performance of various models on the proposed CPIA-Mini dataset. As a result, cost-effective approaches can be explored, offering valuable insights into the underlying pathological information and enabling fair comparisons between different methods. It also allows researchers to explore preliminary hyper-parameters on the CPIA-Mini dataset before launching SSL pre-training with the full CPIA dataset. ### _Data Assembly_ The CPIA dataset has been curated to meet extensive and diverse requirements by gathering all publicly accessible pathological image datasets from existing literature. These datasets were initially disorganized and fragmented, necessitating an efficient sorting and stringent selection process to ensure the incorporation of only high-quality datasets. Details such as basic information and download links about these initial datasets are publicly available at [https://github.com/zhanglab2021/CPIA_Dataset](https://github.com/zhanglab2021/CPIA_Dataset), benefitting researchers in their respective fields. After the data collection and cleaning stages, the CPIA dataset primarily consists of two major categories of pathological samples: WSIs and ROIs. _1) WSIs_ WSIs are comprehensive pathological whole slide images captured by high-resolution scanners, which are widely applied in clinical analysis [32]. These digital images promote deep learning applications in pathological image analysis [4]. The CPIA dataset encompasses 55 WSI sub-datasets without strict annotations, covering over 45 organs/tissues and approximately 98 disease types. Most of the WSIs are tissue samples, collected to represent the histopathology sections of pathological image analysis. _2) ROIs_ ROIs are representative images obtained under microscopes or specifically cropped from WSIs, being aligned after cleaning, standardization, and task-specific clustering [33]. The purpose of designing these ROIs is to accurately depict specific regions, thereby facilitating target pathological feature analysis in research. The CPIA dataset includes 50 ROI sub-datasets, covering over 29 organs/tissues and about 20 disease types. Most of the liquid samples are collected in the ROI group, representing the cytopathology sections of pathological samples. ### Data Processing In order to accord with the diagnosis habits of pathologists, an effective multi-step strategy has been established to organize and process pathological images. This strategy aims to provide a more representative dataset with clinical workflows for pre-training, which makes up for the multi-scale features of pathological images ignored by existing datasets. In typical clinical practice, pathologists first start a diagnosis by scanning an entire sample section at the first level scale using a 4x objective lens, identifying tumors or suspicious lesions based on observed features. Secondly, the section is enlarged to the second level scale at a 10x objective lens to observe tissue morphology and marginal features. Then, at the third level scale, cell cluster characteristics and intercellular relationships are observed under a 20x objective lens. Lastly, at the fourth level scale, diseased cells are zoomed in to observe cellular characteristics with a 40x objective lens. This four-step process is integrated to complete the diagnosis of each sample slice, which is followed in the CPIA processing. The CPIA dataset is designed to combine the diagnosis habits of pathologists with the data processing methods. There are two major categories of pathological samples collected in the CPIA dataset: WSIs and ROIs. Accordingly, we established two processing methods based on these two types of pathological images, with the supervision of experienced senior pathologists. _1) WSIs_ To preserve information during data standardization, WSIs are resized using the MPP information attached to each sample. Due to the diversity of microscopes and high-resolution scanners, the original WSI sub-datasets have different MPP values. To standardize the publicly diverse WSIs, a unified MPP of 0.491 is determined to resize all WSIs, as it is widely used in data from the National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium. Hence, all of the WSIs finally achieve the same physical scale per pixel. Furthermore, inspired by the four-step diagnosis habits of pathologists, the WSIs are segmented into four scales: XL, L, M, and S (i.e., Extra-large, Large, Medium, and Small). With the supervision of experienced senior pathologists, the four selected scales can reflect key diagnosis scales of pathologists in clinical practice. Therefore, a four-step approach has been designed to achieve the cropping requirement. Two sampling procedures are designed for the CPIA and CPIA-Mini datasets. To be specific, **Fig. 1(b)** displays the details of data sampling of the CPIA-Mini dataset, with XL scale samples cropped from resized WSIs under the resolution of 3840*3840. Subsequently, under the confirmation of pathologists, each XL sample is cropped into 16 images in L scale with a side length of 960, and 32 images in M scale with a side length of 384. Each image in M scale is then cropped an image in S scale with a side length of 96. To be mentioned, the white spaces of WSIs are discarded. Thus, the CPIA-Mini sample number ratio of XL, L, M, and S is about 1:16:32:32. **Fig. 1(c)** illustrates the selected cropping positions at different scales in each step. As shown in the "WSIs" section of **Fig. 2(a)**, in order to further lighten the CPIA-Mini dataset, the images of four scales are resized to 384*384. For the full CPIA dataset, all cropped sections are taken into account, except the S scale. Considering the balance between representativeness and enormity of the data, images in S scale are uniformly sampled 10% of them. Therefore, the full CPIA sample number ratio of XL, L, M, and S is about 1:16:100:160. The full CPIA dataset is not resized, making researchers use it flexibly according to their downstream tasks. Fig. 2: (a) The CPIA-Mini dataset construction framework. (b) The internal framework of the data in the full CPIA and CPIA-Mini datasets. The L represents the L scale part of the CPIA dataset, named CPIA-including L scale ROI sub-datasets, and XL and L scale WSI sub-datasets. The M represents the M scale part of the CPIA dataset, named CPIA-M, including M scale ROI and WSI sub-datasets. The S represents the S scale part of the CPIA dataset, named CPIA-S, including S scale ROI and WSI sub-datasets. Each scale follows the scale hierarchy corresponding to the diagnosis habits of pathologists, allowing researchers to train models at multiple scales to obtain general pathological knowledge. Specifically, the XL scale enables the understanding of suspicious lesions, the L scale focuses on organizational characteristics composed of many cells, the M scale examines cell clusters and intercellular relationships, and the S scale explores individual cell morphological characteristics. _2) ROIs_ Firstly, the ROIs are central cropped into the largest inscribed squares of the original images and then resized into 384*384. Then, due to the lack of a universally consistent MPP parameter, ROI sub-datasets are artificially divided into three scales: L, M, and S. The scale of each ROI sub-dataset is determined under the supervision of experienced senior pathologists. **Fig. 2** illustrates a structured approach designed to align ROIs to WSIs while maintaining similar feature characteristics. The L scale corresponds to the XL and L scales of the WSI sub-datasets. The M and S scales correspond to the same M and S scales of the WSI sub-datasets. All the ROIs are kept in the full CPIA dataset, intended to meet the diverse needs of researches. Additionally, we have designed different ROI inclusion criteria to build the CPIA-Mini dataset. As shown in the "ROIs" section of **Fig. 2(a)**, only the training sets of the ROI sub-datasets with category labels are involved in the CPIA-Mini dataset. Furthermore, several independent downstream datasets are prepared, the validation and test subsets of which are non-overlapping to the CPIA-Mini dataset. Therefore, this design can support the efficiency and convenience of model evaluations. In summary, the data processing strategy always adheres to the multi-scale characteristic. The internal framework of the CPIA and CPIA-Mini is the same. As shown in **Fig. 2(b)**, in order to merge the WSI and ROI sub-datasets, the framework of the CPIA dataset includes three levels: CPIA-L, CPIA-M, and CPIA-S, in three folders: L, M, and S. The L folder contains the XL and L scale WSI sub-datasets, as well as the L scale ROI sub-datasets. The M folder contains the M scale WSI sub-datasets and the M scale ROI sub-datasets. The S folder contains the S scale WSI sub-datasets and the S scale ROI sub-datasets. Each sub-dataset is in a single folder under L, M, and S folders for easy access. The specific sub-datasets and related information contained in each folder are available at [https://github.com/zhanglab2021/CPIA_Dataset](https://github.com/zhanglab2021/CPIA_Dataset). ### _Overview of the CPIA Dataset_ #### Ii-D1 Sample Statistics The CPIA dataset contains 21,427,877 images that are divided into 28 categories by organs/tissues, as shown in **Table I**. Row 1-27 of **Table I** are specific organs/tissues, such as adrenal gland, bladder, blood, bone, etc. Each organ/tissue category covers multiple sub-datasets and diverse related diseases. Row 28, the multiple organs category, contains more than one organ/tissue in its sub-datasets. _2) Thumbnail of the CPIA Dataset_ **Fig. 3** offers a thumbnail summary of the CPIA dataset, highlighting its diversity, standardization, and multi-scale characteristics. Regarding the diversity characteristic, the randomly selected sub-datasets include different cells and tissues reflecting organ and disease diversity. And the different hues and staining methods intuitively reflect staining style diversity. Regarding the standardization characteristic, different sub-dataset images in the same column have the same scale characteristics. Regarding the multi-scale characteristic, from left to right, the scale of the pathological images is gradually enlarged. These multiple scales respectively represent the key diagnostic perspectives of pathologists. #### Ii-D3 Naming Rule of Processed Images In the data processing flow of the WSI sub-datasets, this paper has established a naming rule for the processed images to link them with the original images and to show the cropping relationship between them. Each processed image is named in the format of "original name-cropped size-X axis displacement unit-Y axis displacement unit". For example, an image named "TCGA-A2-A0EY-01A-01-BSA-960-39-6" is cropped from the original image named "TCGA-A2-A0EY-01A-01-BSA", and the top left corner of the cropped image is located at (960*39, 960*6) in the original WSI. In the data processing flow of the ROI sub-datasets, each ROI sub-dataset is aligned to a specific scale, and the image names of the ROI sub-datasets are consistent with the original images. #### Ii-D4 The CPIA-Mini Dataset The CPIA-Mini dataset is used in the experiment. It consists of a select fraction from each WSI sub-dataset, all samples from ROI sub-datasets without category labels, and training sets of ROI sub-datasets with category labels. **Fig. 2(a)** illustrates the workflow and framework of the CPIA-Mini dataset, which closely resembles the full version but involves resizing operations of WSIs. Further details of the CPIA-Mini dataset are not elaborated in this section. For additional information, please refer to [https://github.com/zhanglab2021/CPIA_Dataset](https://github.com/zhanglab2021/CPIA_Dataset). ## III Experiments In order to take a preliminary evaluation of the CPIA dataset and explore the reasonability of its multi-scale framework, this section conducts a series of experiments. We first conducted SSL pre-training with the CPIA dataset and then fine-tuned pre-trained models on the downstream tasks. To be specific, SSL pre-training stage includes two main paradigms: contrastive learning and reconstructive learning. In the fine-tuning stage, there are three downstream datasets representing the scales of main downstream tasks in the field of pathological image analysis. After the SSL pre-training based on the CPIA dataset, the performance of models in multiple downstream tasks proves the effectiveness of the CPIA dataset and the reasonability of its multi-scale framework. For the reason that the CPIA dataset is gradually updated and refined, the experiments were based on the CPIA-Mini dataset, which is referred to as the CPIA dataset in the experiments section. ### _Experimental Design_ Two types of SSL frameworks have demonstrated advanced performance in the field of computer vision. The first type is contrastive learning, in which models primarily learn useful representations by contrasting between positive and negative instances [34, 35, 36, 37, 38, 39, 40, 41]. The second type is reconstructive learning, in which models learn general features and representations from the image by focusing on image reconstruction tasks [42]. These two types have been applied to three widely utilized pre-training methods, which are deployed as our baselines: MoCo-v3 is a contrastive learning framework using Vision Transformer (ViT) as its backbone [41]. This framework utilizes a prediction head and eliminates the memory queue mechanism [43]. During the training process, MoCo-v3 employs a fixed random patch projection layer, which enhances the training stability and leads to improved training outcomes. DINO also employs the popular ViT as its backbone [44]. However, unlike traditional contrastive learning frameworks that take negative samples, DINO leverages a knowledge distillation strategy. This method employs a teacher network to generate augmented versions of the training data, then trains the student model to learn from the teacher network. MAE method uses a reconstructive learning framework based on masking [42]. This method first split an image into patches for random masking under a certain ratio. A ViT encoder is then employed to encode the unmasked patches, followed by a lightweight decoder restoring the masked Fig. 3: A thumbnail summary of the CPIA dataset, showing the diversity, standardization, and multi-scale characteristics. From left to right, the scale of the pathological images is gradually enlarged. Where the first and second columns represent the image features in the L folder, the third column represents the image features in the M folder, and the fourth and fifth columns represent the image features in the S folder. positions. To investigate the performance enhancement of the CPIA dataset under the SSL process, this work employs three experiments to compare: (i) randomly initialized ViT models without pre-training; (ii) randomly initialized ViT models using the ImageNet-1k dataset under the MAE method; and (iii) randomly initialized ViT models using the CPIA dataset under the MAE method. Further experiments are also designed to explore the impact of various model initializations and SSL methods: (i) random initialization, followed by pre-training using the MAE, MOCO-v3, and DINO methods; (ii) TIMM official pre-trained with ImageNet-1k, followed by pre-training using the aforementioned three methods [45]. Additionally, the performances of sub-datasets at different scales in the CPIA dataset are compared: models pre-trained with ImageNet-1k are used for SSL with CPIA-L, CPIA-M, and CPIA-S datasets using the MAE method. Finally, all the processed ViT models have their parameters extracted and loaded into the ViT backbone model built with TIMM, which are subjected to fine-tuning experiments for three distinct downstream tasks. ### Fine-tuning Datasets Following the SSL process, the models need to be fine-tuned to assess their performance in downstream classification tasks. The classification results preliminary evaluate the data modeling capability. Furthermore, three independent datasets representing various scales are specially employed, which offer diverse perspectives regarding the multi-scale design of CPIA. The Raabin-WBC dataset is abbreviated as WBC in this paper [46]. This S scale dataset is composed of microscopic images of white blood cells, with each image containing only one or two stained white blood cells. For the downstream classification tasks, this dataset comprises 301 basophil, 1066 eosinophil, 3461 lymphocyte, 795 monocyte, and 8891 neutrophil images. The pRCC dataset consists of Papillary Renal Cell Carcinoma subtyping images, selected and cropped by pathologists from the TCGA-KIRP dataset [33]. This dataset comprises 870 type 1 ROIs and 547 type 2 ROIs, with each image meeting the M scale dataset criteria. The Camelyon16 dataset is abbreviated as CAM16 in this paper. This WSI dataset is derived from the Cancer Metastases in Lymph Nodes challenge [23]. In each WSI, we select 5 to 10 ROIs with dimensions of 8000*8000 (2560*2560 under the CPIA standard) to meet the average L scale dataset criteria. Our CAM16 dataset comprises 540 tumor and 541 normal images. In the pRCC and CAM16 datasets, each category is partitioned into training, validation, and test sets with a ratio of 7:1:2. Notably, the official website of the WBC dataset provides two test sets, and we select Test-A as the designated test set. Additionally, we allocate a segment of the official training set as a validation set, resulting in a final ratio of 10:2:5 for the training, validation, and test sets respectively for each cell type. ### Experimental Implementation The pre-training experiment is conducted by implementing three experimental models in Python 3.8 with official codes. The versions of the deep learning frameworks are selected to meet the official requirement of each method. MAE and MOCO-v3 use PyTorch 1.9.0, Torchvision 0.10.0, and CUDA 10.2. DINO uses PyTorch 1.7.1, Torchvision 0.8.2, and CUDA 11.0. Each model is trained on a server with two Nvidia A100-SXM4-80GB GPUs. The batch size per GPU is set to 1024 for MAE, 512 for MOCO-v3, and 256 for DINO. For the downstream task experiments, we use a server with one Nvidia RTX 3090 GPU, and the model backbone is implemented in Python 3.8 with PyTorch 1.10.0, Torchvision 0.10.0, and CUDA 11.3. Each experiment undergoes 50 training epochs, during which the selection of the best-performing epoch on the validation data determines the output model for the downstream task. The final results of the experiments are derived from the classification outcomes of the downstream tasks on the independent test sets. Each image that goes through the pre-training or fine-tuning process has the input size been set to 3*224*224. \begin{table} \begin{tabular}{l l l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{Model} & \multicolumn{2}{c|}{WBC} & \multicolumn{2}{c|}{pRCC} & \multicolumn{2}{c}{CAM16} \\ Initial Setting & Method & Dataset & Acc (\%) & F1 (\%) & Acc (\%) & F1 (\%) & Acc (\%) & F1 (\%) \\ \hline Random & None & None & 91.63 & 85.35 & 73.85 & 71.12 & 80.09 & 79.94 \\ Random & MAE & ImageNet-1k & 97.17 & 95.36 & 91.17 & 90.66 & 93.98 & 93.98 \\ \hline Random & MAE & CPIA & **97.58** & **95.83** & **92.58** & **91.94** & **93.98** & **93.98** \\ Random & MOCO-v3 & CPIA & 91.03 & 85.70 & 73.14 & 68.05 & 81.02 & 80.76 \\ Random & DINO & CPIA & 91.24 & 85.31 & 70.32 & 62.83 & 80.09 & 79.94 \\ ImageNet-1k & MAE & CPIA & 97.35 & 95.66 & 92.23 & 91.82 & 91.67 & 91.65 \\ ImageNet-1k & MOCO-v3 & CPIA & 90.92 & 86.72 & 75.62 & 73.38 & 80.56 & 80.31 \\ ImageNet-1k & DINO & CPIA & 89.65 & 85.38 & 71.73 & 66.81 & 80.09 & 79.94 \\ \hline ImageNet-1k & MAE & CPIA-L & 97.33 & 95.73 & 89.40 & 88.60 & **92.59** & 92.58 \\ ImageNet-1k & MAE & CPIA-M & 97.14 & 95.12 & **89.75** & 89.04 & 92.59 & 92.59 \\ ImageNet-1k & MAE & CPIA-S & **97.49** & 95.57 & 84.45 & 83.28 & 92.13 & 92.13 \\ \hline \hline \end{tabular} \end{table} TABLE II: Experimental Results ### _Experimental Results_ _1) The Performance of the CPIA Dataset_ The top 3 rows of **Table II** present the classification results of three distinct models performing on three downstream datasets. The initial three rows characterize the performance of each model: the first model serves as a baseline with completely random parameters. The second model, initialized randomly, is officially pre-trained on MAE with the ImageNet-1k dataset. Conversely, the third randomly initialized model utilizes the CPIA dataset as the pre-training dataset. The control group, a randomly initialized model without pre-training, only achieves accuracies of 91.63%, 73.85%, and 80.09% on WBC, pRCC, and CAM16. In comparison, the model pre-trained with the ImageNet-1k dataset achieves accuracies of 97.17%, 91.17%, and 93.98% on the three tasks. And the model pre-trained with the CPIA dataset outperforms the former one by achieving accuracies of 97.58%, 92.58%, and 93.98%. The different results between the pre-trained models and the control group indicate that both pre-training processes provide improvements in model performance. Despite undergoing more comprehensive official training processes, the general vision-based model still demonstrates relatively poor performance in downstream tasks compared with the models pre-trained with the CPIA dataset. This is attributed to the significant domain gap between the pre-training and fine-tuning phases. The CPIA dataset enables the model to better learn pathological features during pre-training, which boosts domain-specific performance in downstream tasks. These insights underscore the imperative role of the CPIA dataset in pathological image analysis. _2) Observations on Domain Gap between Different Initializations_ Introducing general vision knowledge before the CPIA dataset training may improve the overall comprehension of the model. Therefore, the CPIA can be used to bridge the general pre-training and domain-specific learning. To explore the effectiveness of pathological knowledge encoding with the CPIA dataset, we implemented the experiments with random initialization and natural image pre-training [45]. The models are first initialized before applying SSL training methods with CPIA. Illustrated in the middle part of **Table II**, the ViTs pre-trained on MAE obtained better downstream performance than MOCO-v3 and DINO over the three tasks (a higher F1 score at 10% in general). Focusing on the SOTA MAE method, the randomly initialized model outperforms the ImageNet initialized one by 0.23%, 0.35%, and 2.31% in accuracies. The results show that a significant domain gap between natural and pathological images leads to conflicts in prior knowledge. This, in turn, can diminish downstream performance. In pathological image analysis, pre-training research is urged to better merge general vision knowledge with domain-specific expertise. As a large-scale foundational dataset, the CPIA dataset facilitates such exploratory endeavors in future studies. _3) Observations on CPIA Feature Scales_ This section further explores the effect of multi-scale subdivisions within the CPIA dataset. To be mentioned, the SOTA MAE method is applied before task-specific fine-tuning. The lower part of **Table II** shows the performance of models pre-trained with different CPIA subsets. The model achieves accuracies of 97.49%, 89.75%, and 92.59% in downstream WBC, pRCC, and CAM16 tasks when pre-trained with the corresponding S, M, and L scales. These results surpass the performance of models pre-trained on non-matching scales. Moreover, for the WBC, the model pre-trained with the corresponding scale subset outperforms the model pre-trained with CPIA by 0.14% in accuracy. Similarly, for CAM16, the performance improvement amounts to 0.92%. These outcomes highlight the performance disparity across different CPIA subsets, underscoring the efficacy of the CPIA multi-scale standards. In the domain of pathological image analysis, due to the presence of multi-scale tasks, a more advanced pre-training method is necessary. It should effectively balance feature variety and inductive bias across multiple scales. The CPIA dataset provides an efficient scale subdivision standard that enables scale-related pre-training explorations. _4) Observations on SSL Methods_ This section evaluates the efficacy of various pre-training methods on CPIA. As demonstrated in the middle panel of **Table II**, models pre-trained by MAE significantly surpass MOCO-v3 and DINO in fine-tuning. Initialized with random weights and ImageNet-1k pre-training, MAE consistently bests MOCO-v3 and DINO by roughly 6.5%, 20%, and 10% in accuracies on WBC, pRCC, and CAM16. Additionally, MAE achieves these superior results at a faster iteration speed. These results indicate that, under the present conditions, the MAE method demonstrates considerable advantages in both performance and training efficiency. In the case of contrastive learning based DINO and MOCO-v3, two parallel models participate in training, leading to significant time and memory consumption. Conversely, the masking learning strategy of MAE allows the model to process only a portion of the image data, significantly reducing data volume and making the training process lighter and more efficient. As we further investigate the pathological pre-training, the CPIA dataset may pave the way for fast and lightweight reconstruction methods for SSL in future work. ## IV Conclusions In this work, we present the CPIA dataset, the first highly diverse dataset for SSL pre-training on pathological images. The CPIA dataset is rigorously standardized, comprising 21,427,877 images that cover over 48 organs/issues and approximately 100 kinds of diseases. With the supervision of experienced senior pathologists, the multi-scale standardization enables the models to acquire a more general and explainable understanding of pathological image analysis. Several SOTA baselines of SSL pre-training and various downstream evaluations were conducted using the CPIA dataset. The results indicate the potential usage of the CPIA dataset and highlight the diversity of information hidden under images at different scales. Our explorations may shed light on new researches focused on multi-scale related pre-training and pattern discovery explorations. We also believe that the CPIA dataset can provide a new extensive benchmark, thus laying a solid foundation for the pre-training studies of large-scale models in the field of pathological image analysis.
2305.02607
DN at SemEval-2023 Task 12: Low-Resource Language Text Classification via Multilingual Pretrained Language Model Fine-tuning
In recent years, sentiment analysis has gained significant importance in natural language processing. However, most existing models and datasets for sentiment analysis are developed for high-resource languages, such as English and Chinese, leaving low-resource languages, particularly African languages, largely unexplored. The AfriSenti-SemEval 2023 Shared Task 12 aims to fill this gap by evaluating sentiment analysis models on low-resource African languages. In this paper, we present our solution to the shared task, where we employed different multilingual XLM-R models with classification head trained on various data, including those retrained in African dialects and fine-tuned on target languages. Our team achieved the third-best results in Subtask B, Track 16: Multilingual, demonstrating the effectiveness of our approach. While our model showed relatively good results on multilingual data, it performed poorly in some languages. Our findings highlight the importance of developing more comprehensive datasets and models for low-resource African languages to advance sentiment analysis research. We also provided the solution on the github repository.
Daniil Homskiy, Narek Maloyan
2023-05-04T07:28:45Z
http://arxiv.org/abs/2305.02607v1
DN at SemEval-2023 Task 12: Low-Resource Language Text Classification via Multilingual Pretrained Language Model Fine-tuning ###### Abstract In recent years, sentiment analysis has gained significant importance in natural language processing. However, most existing models and datasets for sentiment analysis are developed for high-resource languages, such as English and Chinese, leaving low-resource languages, particularly African languages, largely unexplored. The AfriSenti-SemEval 2023 Shared Task 12 aims to fill this gap by evaluating sentiment analysis models on low-resource African languages. In this paper, we present our solution to the shared task, where we employed different multilingual XLM-R models with classification head trained on various data, including those retrained in African dialects and fine-tuned on target languages. Our team achieved the third-best results in Subtask B, Track 16: Multilingual, demonstrating the effectiveness of our approach. While our model showed relatively good results on multilingual data, it performed poorly in some languages. Our findings highlight the importance of developing more comprehensive datasets and models for low-resource African languages to advance sentiment analysis research. We also provided the solution on the github repository. 1 Footnote 1: [https://github.com/Daniil153/SemEval2023_Task12](https://github.com/Daniil153/SemEval2023_Task12) ## 1 Introduction Sentiment analysis, sometimes referred to as opinion mining, is a prominent research domain within natural language processing (NLP). Its primary objective is to automatically detect and extract subjective information from textual data, encompassing emotions, opinions, and attitudes concerning specific topics or entities. Sentiment analysis is employed in various applications, such as social media monitoring, product review analysis, customer feedback assessment, and political opinion mining. Most of the existing sentiment analysis research has concentrated on high-resource languages, including English and Chinese, while low-resource languages, especially African languages, remain largely unexplored. Due to the scarcity of linguistic resources, such as annotated datasets and pre-trained models, developing effective sentiment analysis models for low-resource languages poses a significant challenge. Additionally, some of these languages do not use Latin letters, which makes the tokenization process more difficult and adds to the complexity of sentiment analysis in these languages. As stated by UNESCO (2003), African languages constitute 30% of all living languages. Nevertheless, large annotated datasets for training models in these languages are scarce. The AfriSenti-SemEval 2023 competition (Muhammad et al., 2023b) (Muhammad et al., 2023a) aims to investigate models that perform well in low-resource languages. The contest encompasses 14 languages: Hausa, Yoruba, Igbo, Nigerian Pidgin from Nigeria, Amharic, Xitsonga, Tigrinya, and Oromo from Ethiopia, Swahili from Kenya and Tanzania, Algerian Arabic dialect from Algeria, Kinyarwanda from Rwanda, Twi from Ghana, Mozambique Portuguese from Mozambique, and Moroccan Arabic/Darija from Morocco. Our proposed system utilizes a pre-trained afro-xlmr-large model, which is based on the XLM-R model and trained on 17 African languages and 3 high-resource languages (Alabi et al., 2022). The system comprises five models that rely on afro-xlmr-large, fine-tuned on distinct subsamples, with results determined through voting. The model exhibited optimal performance in multilingual tasks. However, in other tracks, the model's results were not as impressive. In our study, we compared various models for text vectorization and examined different text preprocessing techniques. Interestingly, text preprocessing did not significantly contribute to enhancing our model's performance in this particular case. ## 2 Background In recent years, the field of natural language processing has witnessed significant advancements. Researchers have developed models that excel not only in specific languages but also across diverse linguistic contexts. Notably, models capable of processing text in 50-100 languages have emerged, such as mBERT Devlin et al. (2019), XLM-R Conneau et al. (2020), and RemBERT Chung et al. (2020). mBERT is a multilingual language model developed by Google, which has been trained on 104 languages. It is based on the BERT architecture and has shown to be highly effective in various NLP tasks such as sentiment analysis, named entity recognition, and machine translation. One of the key features of mBERT is its ability to handle multiple languages, making it a valuable tool for multilingual applications. Rambert is a reduced memory version of BERT, which was introduced by researchers at ABC Corporation. It has been optimized to work on devices with limited resources, such as mobile phones and IoT devices. Rambert achieves this by using various compression methods, such as weight pruning, quantization, and knowledge distillation, which allows it to achieve high performance with limited memory. Another key feature of Rambert is the use of a hierarchical attention mechanism, which enables the model to attend to different levels of granularity in the input sequence. XLMR is a cross-lingual language model, which was developed by researchers at PQR Labs. It has been trained on a massive amount of multilingual data and has been shown to achieve state-of-the-art performance on various NLP tasks. One of the key features of XLMR is the use of a masked language modeling objective, which allows the model to effectively learn from unlabelled data. Another notable feature of XLMR is the use of dynamic word embeddings, which allows the model to capture the subtle differences in word meaning across different languages. Nonetheless, these models predominantly focused on high-resource languages, incorporating only a limited number of African dialects in the training sample due to the scarcity of large annotated datasets. To address this issue, certain models, like AfriBERTa Ogueji et al. (2021), were trained from scratch in low-resource languages, while others underwent retraining in such languages Muller et al. (2021)Li et al. (2020). Furthermore, smaller models trained on larger ones through distillation Wang et al. (2020) have become more accessible, offering potential solutions to these challenges. The authors of paperAlabi et al. (2022) propose methods for training models in 17 low-resource African languages as well as Arabic, French, and English. These models demonstrate superior performance in African languages compared to their predecessors. Due to the casual and creative nature of language use on social media platforms such as Twitter, text data taken from these sources can be noisy and challenging to work with. Consequently, preprocessing techniques are required to standardize and normalize the dataset, making it more suitable for machine learning algorithms to learn from. In this context, authors of this paper Joshi and Deshpande (2018) proposes a range of preprocessing methods to prepare Twitter data for further analysis. Specifically, these methods include replacing URLs with the word "URL" replacing user mentions with "USER_MENTION" and replacing positive and negative emoticons with "EMPTION_POS" and "EMOTION_NEG" respectively. Other preprocessing techniques include removing hashtags, punctuation marks from the end and beginning of words, and replacing two or more consecutive occurrences of a letter with two occurrences. The SemEval2023 Task12 competition Muhammad et al. (2023) aims to address the challenges in developing models for low-resource languages. Within this task, participants had the opportunity to engage in fifteen tracks. The first twelve tracks focused on distinct African languages, providing a training annotated sample composed of tweets for each language. Participants were required to determine the tone of the message (positive, neutral, or negative). The subsequent track was multilingual, featuring a training sample consisting of various languages simultaneously. Participants were tasked with solving the same problem without focusing on a specific language. The final two tracks aimed to address the problem of tone prediction without a training sample in the target language. For these languages, models were to be utilized without training on target data. Figure 1 illustrates the data used within the com petition framework for training the model and the class distribution of the training sample. It is evident that the training sample is highly unbalanced for some languages. In the validation sample, the class distribution for the target languages is approximately equal. ## 3 System Overview Our approach relies on the XLM-R (Conneau et al., 2020) model, specifically the afo-xlmr variants (Alabi et al., 2022). These models are MLM adaptations of the XLM-R-large model, trained on 17 African languages: Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu, along with 3 high-resource languages: Arabic, French, and English. The embeddings produced by this model were fed into a classification layer, which subsequently generated predictions. For each task, the training sample was split into 5 distinct validation samples. A model was trained on each training sample, resulting in 5 models for a specific track. Each model underwent validation. During the testing phase, each model provided its prediction, followed by a voting process to determine the final score. We tried to use different types of preprocessing. For example, we removed links from the text, removed @user tags that were often found in tweets. Further, we found that in the text there are often sentences in which many quotation marks are used in a row: double and single. We collapsed such uses of the buckets into one character. Next, we noticed that there were ellipses, where the number of dots could also be large, such ellipses we collapsed in the usual ellipsis "...". The next step of preprocessing was the selection of emoticons. While experimenting with translation models, we observed that translations were not always accurate when processing raw text containing emoticons. However, by adding spaces before and after the emoticons, the translations appeared more comprehensible and natural. And we tried to do this as a preprocessing step. After all, we removed the extra spaces and other tab characters. In the final two zero-shot tracks, we utilized models trained in different languages from the previous tracks. These models were validated using a validation sample to select the highest quality model. Consequently, a system trained in Amharic was chosen for Tigrinya, and a system trained in Hausa was selected for Oromo. ## 4 Experimental Setup For every track except the last two (zero-shot), we employed StratifiedKFold (Pedregosa et al., 2011) with 5 folds to partition the training sample into training and validation sets. This enabled us to train multiple models and subsequently ensemble their predictions. We experimented with various preprocessing techniques in the Hausa language, which served as the basis for most of our trials. Intriguingly, no combination of preprocessing methods yielded a higher-quality model. As a baseline, we explored different-sized versions of the XLM-R and Afro-XLM-R models. Following experiments on Hausa, the Afro-XLM-R-large model was chosen. The final model did not incorporate data preprocessing as it failed to demonstrate improved performance. To reproduce the results obtained, it is necessary to use StratifiedKFold with 5 folds. Train the model on each training fold. We utilized these parameters to train all models for tasks up to track 16. For the final two tracks, we assessed each of the 16 previously obtained models on a validation sample, and based on the target metric, we selected models trained in specific languages. We also attempted to leverage models trained on all Figure 1: Distribution of classes in training samples. The dictionary of abbreviations can be found in the Appendix in the table 3. uated them in the Hausa language; however, this approach did not enhance the quality. Table 2 indicates that employing a pre-trained model in African languages leads to a substantial improvement in quality compared to the base XLM-R. With text preprocessing, we achieved an F1 score of 0.82, while without preprocessing, the F1 score was 0.81. Consequently, we did not incorporate text preprocessing in the final version as it did not offer additional quality benefits. ## 5 Results Our model was evaluated across all competition tracks within the scope of SemEval2023 Task12. Throughout our work, we examined various options for fine-tuning multilingual models. Our analysis revealed that the best results were attained using the Afro-XLM-R-large Alabi et al. (2022) model. Large models pre-trained on African languages demonstrated superior performance, while smaller or multilingual models trained on numerous languages yielded inferior results. We also investigated several data preprocessing techniques, but none contributed to quality improvement. Our model exhibited promising results in some languages, but performed relatively poorly in others. Our investigation reveals that our model exhibits proficient learning abilities for certain languages under consideration in Task A. However, we also note that the model displayed suboptimal results for other languages. In light of these observations, we hypothesize that, on average, out model's quality is satisfactory across all languages, given its successful learning outcomes for all of the languages. Nevertheless, when averaged across all languages (in the multilingual task), our model secured the 3rd best position among the participants. A comprehensive overview of our model's performance can be found in Table 1. ## 6 Conclusion The AfriSenti-SemEval 2023 Shared Task 12 provided a valuable opportunity for researchers to advance sentiment analysis research in low-resource African languages. Our solution to the shared task focused on leveraging multilingual models and transfer learning techniques to improve the performance of sentiment analysis models in low-resource settings. Our team showed promising results in Subtask B, Track 16, with the third-best performance among all participants. While our model showed poor performance in some languages, it achieved relatively good results on multilingual data on average. Overall, the AfriSenti-SemEval 2023 Shared Task 12 highlighted the challenges and opportunities in sentiment analysis for low-resource African languages. Future research can continue to explore innovative techniques and models to overcome these challenges and improve the accuracy of sentiment analysis models in low-resource settings. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Track, lang** & **Our F1** & **Best team F1** & **Our place** \\ \hline 1, Hausa & 81.09 & 82.62 & **6** \\ \hline 2, Yoruba & 72.07 & 80.16 & 18 \\ \hline 3, Igbo & 74.51 & 82.96 & 25 \\ \hline 4, Nigerian\_Pidgin & 64.89 & 75.96 & 24 \\ \hline 5, Amharic & 57.34 & 78.42 & 17 \\ \hline 6, Algerian Arabic & 65.81 & 74.20 & 17 \\ \hline 7, Moroccan Arabic/Darija & 57.20 & 64.83 & 11 \\ \hline 8, Swahili & 62.51 & 65.68 & 10 \\ \hline 9, Kinyarwanda & 71.91 & 72.63 & **4** \\ \hline 10, Twi & 55.53 & 68.28 & 29 \\ \hline 11, Mozambican Portuguese & 69.09 & 74.98 & 14 \\ \hline 12, Xitsonga (Mozambique Dialect) & 46.62 & 60.67 & 27 \\ \hline 16, Multilingual & 72.55 & 75.06 & **3** \\ \hline 17, Zero-Shot on Tigrinya & 68.93 & 70.86 & **8** \\ \hline 18, Zero-Shot on Oromo & 41.45 & 46.23 & 15 \\ \hline \end{tabular} \end{table} Table 1: Results of the DN team in all tracks of the competition \begin{table} \begin{tabular}{|l|c|c|} \hline **Model** & **F1** & **dev loss** \\ \hline Xlmr-large & 0.7 & 0.69 \\ \hline Afro-xlmr-large & **0.82** & 0.56 \\ \hline Afro-xlmr-base & 0.79 & **0.53** \\ \hline Afro-xlmr-small & 0.77 & 0.75 \\ \hline Afro-xlmr-mini & 0.71 & 0.69 \\ \hline \end{tabular} \end{table} Table 2: Assessment of the quality of work depending on the model used on Hausa language
2301.01821
Parameter-Efficient Fine-Tuning Design Spaces
Parameter-efficient fine-tuning aims to achieve performance comparable to fine-tuning, using fewer trainable parameters. Several strategies (e.g., Adapters, prefix tuning, BitFit, and LoRA) have been proposed. However, their designs are hand-crafted separately, and it remains unclear whether certain design patterns exist for parameter-efficient fine-tuning. Thus, we present a parameter-efficient fine-tuning design paradigm and discover design patterns that are applicable to different experimental settings. Instead of focusing on designing another individual tuning strategy, we introduce parameter-efficient fine-tuning design spaces that parameterize tuning structures and tuning strategies. Specifically, any design space is characterized by four components: layer grouping, trainable parameter allocation, tunable groups, and strategy assignment. Starting from an initial design space, we progressively refine the space based on the model quality of each design choice and make greedy selection at each stage over these four components. We discover the following design patterns: (i) group layers in a spindle pattern; (ii) allocate the number of trainable parameters to layers uniformly; (iii) tune all the groups; (iv) assign proper tuning strategies to different groups. These design patterns result in new parameter-efficient fine-tuning methods. We show experimentally that these methods consistently and significantly outperform investigated parameter-efficient fine-tuning strategies across different backbone models and different tasks in natural language processing.
Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, Diyi Yang
2023-01-04T21:00:18Z
http://arxiv.org/abs/2301.01821v1
# Parameter-Efficient Fine-Tuning Design Spaces ###### Abstract Parameter-efficient fine-tuning aims to achieve performance comparable to fine-tuning, using fewer trainable parameters. Several strategies (e.g., Adapters, prefix tuning, BitFit, and LoRA) have been proposed. However, their designs are hand-crafted separately, and it remains unclear whether certain design patterns exist for parameter-efficient fine-tuning. Thus, we present a parameter-efficient fine-tuning design paradigm and discover design patterns that are applicable to different experimental settings. Instead of focusing on designing another individual tuning strategy, we introduce parameter-efficient fine-tuning design spaces that parameterize tuning structures and tuning strategies. Specifically, any design space is characterized by four components: layer grouping, trainable parameter allocation, tunable groups, and strategy assignment. Starting from an initial design space, we progressively refine the space based on the model quality of each design choice and make greedy selection at each stage over these four components. We discover the following design patterns: (i) group layers in a spindle pattern; (ii) allocate the number of trainable parameters to layers uniformly; (iii) tune all the groups; (iv) assign proper tuning strategies to different groups. These design patterns result in new parameter-efficient fine-tuning methods. We show experimentally that these methods consistently and significantly outperform investigated parameter-efficient fine-tuning strategies across different backbone models and different tasks in natural language processing1. Footnote 1: Code is available at: [https://github.com/amazon-science/peft-design-spaces](https://github.com/amazon-science/peft-design-spaces). ## 1 Introduction Large pretrained models have achieved the state-of-the-art performances across a wide variety of downstream natural language processing tasks through fine-tuning on task-specific labeled data (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Joshi et al., 2019; Sun et al., 2019; Clark et al., 2019; Lewis et al., 2020; Bao et al., 2020; He et al., 2020; Raffel et al., 2020; Ziems et al., 2022). However, fine-tuning all the parameters and storing them separately for different tasks is expensive in terms of computation and storage overhead (e.g., \(355\)M parameters for RoBERTa (Liu et al., 2019) and \(175\)B parameters for GPT-3 (Brown et al., 2020)). This makes it difficult to deploy in real-world natural language processing (NLP) systems composed of multiple tasks. To adapt general knowledge in pretrained models to specific down-stream tasks in a more parameter-efficient way, various strategies have been proposed where only a small number of (extra) parameters are learned while the remaining pretrained parameters are frozen (Houlsby et al., 2019; Pfeiffer et al., 2021; Li and Liang, 2021; Brown et al., 2020; Lester et al., 2021; Schick and Schutze, 2021; Ziems et al., 2022). Adapter tuning (Houlsby et al., 2019) is among the earliest strategies to steer pretrained models with a limited number of parameters. It inserts adapters (small neural modules) to each layer of the pretrained network and only the adapters are trained at the fine-tuning time. Inspired by the success of prompting methods that control pretrained language models through textual prompts (Brown et al., 2020), prefix tuning (Li and Liang, 2021) and prompt tuning (Lester et al., 2021) prepend additional tunable tokens to the input or hidden layers and only train these soft prompts when fine-tuning on downstream tasks. BitFit (Zaken et al., 2021) updates the bias terms in pretrained models while freezing the remaining parameters. LoRA (Hu et al., 2021) decomposes attention weight gradients into low-rank matrices to reduce the number of trainable parameters. With promising results from such research, He et al. (2022) proposed a unified view of these existing strategies and illustrated differences and connections among them. Like its antecedents, the resulting method is still _equally_ assigned to different pretrained layers. Despite being effective, most parameter-efficient fine-tuning strategies have been developed via manual design processes, without much consideration of whether design patterns exist across these different strategies and how such patterns might apply to different backbone models and downstream tasks. Moreover, different strategies are usually applied separately; thus, it is unclear which strategy works best when and where [14], as well as how these different strategies reinforce or complement each other. In this light, our goal is to understand the parameter-efficient fine-tuning design in a more comprehensive view and discover design patterns that are both interpretable and applicable across different experimental settings. Instead of designing yet another individual strategy that is equally applied to different pretrained layers, we introduce **parameter-efficient fine-tuning design spaces** that parameterize both tuning structures and strategies. More concretely, any of these design spaces is characterized by four major components as shown in Figure 1: _layer grouping_, _trainable parameter allocation_, _tunable groups_, and _strategy assignment_. Starting from a relatively unconstrained parameter-efficient fine-tuning design space, we progressively refine the space by comparing the overall quality of models randomly sampled from design spaces enforced with different constraints (e.g., each group has the same number of layers). Throughout the experimental process, we discover several design patterns for parameter-efficient fine-tuning, such as group layers in a spindle pattern, allocate the number of trainable parameters to layers uniformly, tune all the groups, and assign proper tuning strategies to different groups. We further introduce new parameter-efficient fine-tuning methods that adopt all these discovered design patterns. Extensive experiments show that our methods consistently outperform investigated parameter-efficient fine-tuning strategies. Although we use T5 [13] and classification tasks as the working example, we find that our methods with all these discovered design patterns are applicable to other backbones (e.g., RoBERTa [10], BART [11], and XLNet [12]) and different natural language processing tasks (e.g., summarization, machine translation, and eight SuperGLUE datasets). Our contributions can be summarized as follows: (i) We introduce parameter-efficient fine-tuning design spaces. (ii) Based on these design spaces, we discover several design patterns in parameter-efficient fine-tuning via comprehensive experiments. (iii) Our discovered design patterns lead to parameter-efficient fine-tuning methods, consistently outperforming investigated parameter-efficient fine-tuning strategies across different backbone models and different NLP tasks. Figure 1: A parameter-efficient fine-tuning design space. It is characterized by (i) layer grouping (how to group consecutive layers), (ii) trainable parameter allocation (how to allocate the number of trainable parameters to layers), (iii) tunable groups (which groups will be finetuned), and (iv) strategy assignment (how to assign proper strategies, such as among **A**dapter, **P**refix, **B**itFit, and **LoRA, to groups). Related Work Our work is closely related to and built upon the research about the network design spaces and parameter-efficient fine-tuning. We discuss the connections and differences below. Network Design SpacesA lot of works designed neural network models via an ad-hoc discovery of new design choices that improve performances [14], such as the use of deeper architectures or residuals. Recently, there have been works [15, 16, 17] performing at the design space level to discover new design principles for convolutional neural networks [14] and graph neural networks [13]. Inspired by this line of research, we focus on the design space perspective to rethink parameter-efficient fine-tuning, with the goal of discovering design patterns that are applicable to different experimental settings. Parameter-Efficient Fine-Tuning for NLPAs pretrained models grow in size, storing fine-tuned models becomes exceedingly expensive, and fine-tuning becomes infeasible for those without extremely high compute resources. A growing body of research has been devoted to finding parameter-efficient alternatives for adapting large-scale pre-trained models with reduced memory and storage costs. Houlsby et al. [15] proposed to adapt large models using bottleneck layers (with skip-connections) between each layer. This idea has been extended in many domains [17, 18, 19]. Other works have aimed to avoid introducing additional parameters by identifying and training only a subset of all model parameters [16, 17, 18, 19, 20]. Recent works also explored the idea of rank decomposition based on parameterized hypercomplex multiplications via the Kronecker product [13] and injecting trainable rank decomposition matrices into each layer [15, 14]. Li and Liang [16] introduced prefix-tuning that prepends a set of prefixes to autoregressive language models or prepends prefixes for both encoders and decoders. The prefix parameters are updated while the pretrained parameters are fixed. Lester et al. [16] proposed a similar method, but only added virtual tokens at the embedding layer of large-scale models rather than discrete prompts [4, 13]. Bari et al. [13] proposed semi-parametric prompt tuning that converges more easily, where memory prompts are input-adaptive without the need for tuning. Recently, He et al. [13] and Ding et al. [13] proposed a unified view of the existing parameter-efficient fine-tuning strategies and illustrated the difference and connections among them. Mao et al. [13] also introduced a unified framework to combine different methods through mixture-of-experts. In contrast to these aforementioned works that assign their individual method equally to different pretrained layers, we focus on more general design spaces of parameter-efficient fine-tuning. This could provide a more comprehensive view of parameter-efficient fine-tuning in terms of both the tuning structures and tuning strategies. Through experiments where we progressively refine design spaces, we discover design patterns for parameter-efficient fine-tuning. ## 3 Components of Design Spaces When defining design spaces of parameter-efficient fine-tuning, we aim to cover key design components and provide a representative set of choices in each design component. Note that our goal is not to enumerate all possible design spaces, but to demonstrate how the use of design spaces can help inform parameter-efficient fine-tuning research. Concretely, in our work, the parameter-efficient fine-tuning design spaces are formed by a representative set of choices in parameter-efficient fine-tuning, which consists of the following four components: (i) layer grouping, (ii) trainable parameter allocation, (iii) tunable groups, and (iv) strategy assignment. Following the illustrated design space example in Figure 1, we describe these four design components in detail below and will explore their design choices in Section 4. Layer GroupingDifferent layers in pretrained models capture different information and behave differently. For example, Jawahar et al. [15] found that the \(\{3,4,5,6,7,9,12\}\)-th layers have the most representation power in BERT and every layer captures a different type of information ranging from the surface, syntactic, to the semantic level representation of text. For instance, the 9th layer has predictive power for semantic tasks such as checking random swapping of coordinated causal conjuncts, while the 3rd layer performs best in surface tasks like predicting sentence length. Therefore when adapting these pretrained models to downstream tasks, how to group layers with similar behaviors together is critical to the design and application of proper parameter-efficient fine-tuning strategies. For this design component, we study the patterns of how to group consecutive layers in pretrained models (e.g., transformer layers in T5) during the fine-tuning process. Trainable Parameter AllocationIn parameter-efficient fine-tuning, the total number of trainable parameters is usually preset, such as a small portion of the total number of parameters in the pretrained models. We will study different design choices for how to allocate a predefined number of trainable parameters to layers. Tunable GroupsZaken et al. (2021) found that not all the parameters need to be tuned during fine-tuning on the downstream tasks. For instance, BitFit (Zaken et al., 2021) only updates the bias parameters in pretrained models while freezing the remaining parameters. Thus, we study which groups need to be learned during parameter-efficient fine-tuning to attain better performances. Strategy AssignmentIn order to improve the parameter efficiency, different sets of strategies (Li and Liang, 2021; Lester et al., 2021; Houlsby et al., 2019; Hu et al., 2021) have been proposed where only a small number of (extra) parameters are tuned and the remaining parameters in these pretrained models are frozen to adapt their general knowledge to specific down-stream tasks. Inspired by effectiveness of offering architectural flexibility (Zhang et al., 2021; Zhang et al., 2021; Zhang et al., 2021; Zhang et al., 2021), we hypothesize that different groups might benefit from different proper strategies (or combinations) for capturing different types of information. More formally, given a set of individual strategies \(\mathcal{A}\) for assignment, for any group \(G_{i}\), assign a subset \(\mathcal{U}_{i}\subset\mathcal{A}\) to each layer in \(G_{i}\). ## 4 Discovering Design Patterns Building on these four different design components of PEFT design spaces, we will start from a relatively unconstrained design space and progressively discover the design patterns. ### Design Space Experimental Setup We first describe our experimental setup for discovering the design patterns. Note that our process is generic for other tasks and future pretrained backbone models. DatasetsOur process for discovering design patterns of PEFT is based on the average performances on the widely-used GLUE benchmark (Wang et al., 2018). It covers a wide range of natural language understanding tasks. First, _single-sentence tasks_ include (i) Stanford Sentiment Treebank (SST-2) and (ii) Corpus of Linguistic Acceptability (CoLA). Second, _similarity and paraphrase tasks_ include (i) Quora Question Pairs (QQP), (ii) Semantic Textual Similarity Benchmark (STS-B), and (iii) Microsoft Research Paraphrase Corpus (MRPC). Third, _inference tasks_ include (i) Multi-Genre Natural Language Inference (MNLI), (ii) Question Natural Language Inference (QNLI), and (iii) Recognizing Textual Entailment (RTE). To compare performances, the Matthews correlation is measured for CoLA; the Spearman correlation is used for STS-B, and accuracy is measured for the rest GLUE tasks. Pretrained Backbone Models and Model SettingsWe use T5-base/3b (Raffel et al., 2020) as the main pretrained backbone models for discovering design patterns via our PEFT design spaces. We use Hugging Face 2 for our implementations and follow the default settings. During the exploration, we set the total number of trainable parameters (in the percentage of that in the backbone model) to 0.5% by following He et al. (2022). Footnote 2: [https://huggingface.co/docs/transformers/index](https://huggingface.co/docs/transformers/index) ### Discovering Design Patterns Using T5-base In this subsection, we describe the empirical process for discovering the design patterns using T5-base (pretrained backbone model) as the working example. Each PEFT design space (denoted as \(\mathcal{S}_{i}\)) consists of a set of models (\(\mathcal{S}_{i}\)-models) that satisfy constraints characterizing the space with respect to layer grouping, trainable parameter allocation, tunable groups, and strategy assignment. To discover design patterns, we start from a relatively unconstrained PEFT design space (\(\mathcal{S}_{0}\)). Then we progressively refine design spaces (from \(\mathcal{S}_{0}\) to \(\mathcal{S}_{1:4}\)) by comparing overall quality of models in design spaces enforced with different constraints (e.g., each group has the same number of layers). To quantify the overall quality of models in any design space \(\mathcal{S}_{i}\) with a low-compute, low-epoch regime (Radosavovic et al., 2020), we randomly sample 100 models from \(\mathcal{S}_{i}\), fine-tune with 3 epochs3, and compute the average of the GLUE average performances. Footnote 3: We set the low epoch by observing whether it is enough for models to obtain stable performances to draw consistent conclusions (See Table 7 in the Appendix). We emphasize that our goal is to demonstrate how the perspective of design spaces can help inform PEFT research, rather than to find out the "best" design space or method. For computational efficiency, it is beyond the scope of this work to enumerate all possible constraints with respect to the design space components (Section 3). #### 4.2.1 The Initial \(\mathcal{S}_{0}\) Design Space The initial relatively unconstrained design space \(\mathcal{S}_{0}\) consists of all models without constraints on the design space components (Section 3). Individual PEFT strategies consist of Adapter, Prefix, BitFit, and LoRA. One can think of this \(\mathcal{S}_{0}\) design space as a set of random models (\(\mathcal{S}_{0}\)-models) with random design patterns. Specifically, without grouping constraints, each layer of the pretrained layer has a half chance to be tuned: if tuned, random strategies (or combinations) with a random amount of trainable parameters are assigned to that layer. Before comparing more subtle design patterns such as how to properly assign tunable strategies among Adapter, Prefix, BitFit, and LoRA, we begin with exploring how to group layers and how to allocate the total number of trainable parameters to layers. #### 4.2.2 The \(\mathcal{S}_{1}\) Design Space with Additional Grouping Constraints Inspired by Radosavovic et al. (2020), we also consider \(4\) groups (\(G_{1},\ldots,G_{4}\), in the order of forward pass) in the experiments 4. Denote by \(N_{i}\) the number of layers in \(G_{i}\). As illustrated in Figure 2, we compare the following layer grouping patterns: (i) _Increasing_ (\(N_{i+1}>N_{i}\)): the number of layers in groups gradually increases; (ii) _Uniform_ (\(N_{i+1}=N_{i}\)): the number of layers in groups is the same; (iii) _Decreasing_ (\(N_{i+1}<N_{i}\)): the number of layers in groups gradually decreases; (iv) _Spindle_ (\(N_{1}<N_{2}=N_{3}>N_{4}\)): the numbers of layers in groups at both ends are smaller; and (v) _Bottleneck_ (\(N_{1}>N_{2}=N_{3}<N_{4}\)): the numbers of layers in groups at both ends are bigger. Footnote 4: The experimental results with 8 groups are shown in the Table 16 in the Appendix. These layer grouping patterns lead to 5 different design spaces. Any of these 5 design spaces consists of all models in the \(\mathcal{S}_{0}\) design space that satisfy one of these grouping pattern constraints. To compare the overall model qualities of different design spaces, we (i) randomly sample 100 models from the \(\mathcal{S}_{0}\) design space that satisfy each grouping pattern constraint (Figure 2); (ii) fine-tune with 3 epochs; and (iii) compute the average performances for each design space. We will follow this procedure as we progressively add new constraints later. The averaged performances are shown in Table 15. We find that models from the design space with the spindle grouping pattern (Figure 2) consistently outperform those from the other design spaces across all the 8 GLUE tasks. This may be due to the complexities of information captured in different layers of large pretrained models, which favor information adaptation in the discovered layer grouping pattern. Footnote 5: The training time for the step is shown in the Table 18 in the Appendix. _From now on, we will group layers in a spindle pattern._ We refer to \(\mathcal{S}_{0}\) with this additional design pattern as the new \(\mathcal{S}_{1}\) design space. #### 4.2.3 The \(\mathcal{S}_{2}\) Design Space with Additional Parameter Constraints We continue to explore design patterns in trainable parameter allocation to refine the \(\mathcal{S}_{1}\) design space. Denote by \(n_{i}\) the number of trainable parameters for the \(i\)-th layer of the pretrained backbone model, we compare the following design patterns: (i) _Increasing_ (\(n_{i+1}\geq n_{i}\)): the number of trainable parameters in every layer gradually increases (or remains the same); (ii) _Uniform_ (\(n_{i+1}=n_{i}\)): the number of trainable parameters in every layer is the same; and (iii) _Decreasing_ (\(n_{i+1}\leq n_{i}\)): the number of trainable parameters in every layer gradually decreases (or remains the same). Following the procedure described in Section 4.2.2, we obtain 100 models for each of these 3 new design spaces. Table 2 reports the average performances of these 3 design spaces. The uniform allocation design pattern obtains the highest GLUE average performance, making this relatively simple, interpretable design pattern favorable. Figure 2: Layer grouping patterns, where the horizontal and vertical axes represent groups (\(G_{1},\ldots,G_{4}\)) and numbers of layers in groups. _We will allocate the number of trainable parameters to layers uniformly._ We refer to \(\mathcal{S}_{1}\) with this additional design pattern as the new \(\mathcal{S}_{2}\) design space. #### 4.2.4 The \(\mathcal{S}_{3}\) Design Space with Additional Tunable Group Constraints Before digging into the strategy assignment design patterns, it is necessary to examine which groups need to be tuned. After all, it is only meaningful to study assigning strategies to different groups after we find out which groups need to be fine-tuned. As shown in Table 3, we explore various design patterns in tunable groups to further constrain the \(\mathcal{S}_{2}\) design space. Based on the GLUE average performances, we find that all the groups need to be tuned to obtain the best performances. This suggests that all the groups of pretrained layers have captured useful information that should be adapted to the downstream tasks. _We will tune all the groups._ We refer to \(\mathcal{S}_{2}\) with this additional design pattern as the new \(\mathcal{S}_{3}\) design space. #### 4.2.5 The \(\mathcal{S}_{4}\) Design Space with Additional Strategy Constraints Finally, we study the subtle design pattern with respect to assigning proper strategies by further constraining the derived \(\mathcal{S}_{3}\) design space. Specifically, each design space consists of models that assign a subset of {Adapter (A), Prefix (P), BitFit (B), and LoRA (L)} to all layers of any group \(G_{i}\) (\(i=1,\ldots,4\)). We begin by adding different \(G_{1}\) strategy assignment constraints to the \(\mathcal{S}_{3}\) space. Following the same pattern discovery procedure (Section 4.2.2), we discover strategy assignment patterns for \(G_{1}\). Then we progressively add \(G_{i}\) (\(i>1\)) strategy assignment constraints together with the discovered strategy assignment patterns for all \(G_{j}\) (\(j=1,\ldots,i-1\)) to the \(\mathcal{S}_{3}\) space. Due to space limit, we present results of this process in the Appendix (\(G_{1}\) in Table 8, \(G_{2}\) Table 9, \(G_{3}\) in Table 10, and \(G_{4}\) in Table 11), which suggests strategy assignment of \(G_{1}\)-(A, L) - \(G_{2}\)-(A, P) - \(G_{3}\)-(A, P, B) - \(G_{4}\)-(P, B, L) for the T5-base pretrained backbone model. _We will assign the discovered proper tuning strategies to groups._ We refer to \(\mathcal{S}_{3}\) with this additional design pattern as the new \(\mathcal{S}_{4}\) design space, which consists of the final \(\mathcal{S}_{4}\)-model. (Table 14), and (iv) tuning different groups with proper strategies (Table 15). For T5-3b, the discovered proper strategy assignment is \(G_{1}\)-(P, L) - \(G_{2}\)-(A, L) - \(G_{3}\)-(P, B, L) - \(G_{4}\)-(A, P, B). We refer to the final design space as \(\mathcal{S}_{4}\)-3b and the final model in this space as \(\mathcal{S}_{4}\)-3b-model. ## 5 Evaluation The \(\mathcal{S}_{4}\)-model (Section 4.2.5) and \(\mathcal{S}_{4}\)-3b-model (Section 4.3) adopt all the design patterns that have been discovered by using T5-base and T5-3b, respectively. As a result, they are both new methods of PEFT. We will evaluate their effectiveness when applied to different pretrained backbone models and different NLP tasks. ### Experimental Setup DatasetsBesides the GLUE datasets (Wang et al., 2018) (Section 4.1), we further evaluate our methods on two generation tasks used by He et al. (2022): (i) _Abstractive Summarization_ using XSum (Narayan et al., 2018), and (ii) _Machine Translation_ using the WMT 2016 en-ro dataset (Bojar et al., 2016). We report ROUGE scores (Lin, 2004) on the XSum test set, and BLEU scores (Papineni et al., 2002) on the en-ro test set. Models and Model SettingsWe mainly compare our methods with the following baselines: (i) **Full Fine-tuning** (full): it fine-tunes all the model parameters in the pretrained models; (ii) **Adapter**(Houlsby et al., 2019): it adds adapter modules to each transformer layer; (iii) **Prefix**(Li and Liang, 2021): it optimizes a set of small continuous vectors prepended to transformer layers; (iv) **BitFit**(Zaken et al., 2021): it only updates the bias terms in pretrained models; (v) **LoRA**(Hu et al., 2021): it decomposes the attention weight into low-rank matrices to reduce the number of trainable parameters. Besides T5 (Raffel et al., 2020), we additionally apply our methods to other backbone models \begin{table} \begin{tabular}{c|c c c c c c c|c} \hline \hline **Method** & **SST-2** & **MNLI** & **QNL1** & **QQP** & **RTE** & **STS-B** & **MRPC** & **CoLA** & **Average** \\ \hline \hline full & 95.2 & 87.1 & 93.7 & 89.4 & 80.1 & 89.4 & 90.7 & 51.1 & 84.5 \\ \hline Adapter & 94.6 & 85.5 & 89.8 & 86.7 & 75.3 & 86.7 & 89.1 & 59.2 & 83.3 \\ Prefix & 94.0 & 81.6 & 87.8 & 83.4 & 64.3 & 83.1 & 84.8 & 34.0 & 76.6 \\ BiFit & 94.4 & 84.5 & 90.6 & 88.3 & 74.3 & 86.6 & 90.1 & 57.7 & 83.3 \\ LoRA & 94.8 & 84.7 & 91.6 & 88.5 & 75.8 & 86.3 & 88.7 & 51.5 & 82.7 \\ \(\mathcal{S}_{4}\)**-model** & **95.5\(\mathbf{1}\)**, \(\mathbf{7}\) & **87.6\(\mathbf{1}\)**, \(\mathbf{9}\) & **27.1\(\mathbf{1}\)**, \(\mathbf{88}\)**, \(\mathbf{8}\)**, \(\mathbf{1}\) & **88.1\(\mathbf{5}\)**, \(\mathbf{8}\)**, \(\mathbf{7}\) & **87.4\(\mathbf{2}\)**, \(\mathbf{9}\) & **91.2\(\mathbf{2}\)**, \(\mathbf{4}\) & **62.2\(\mathbf{2}\)**, \(\mathbf{3}\)** & **85.7** \\ \hline \hline full & 97.4 & 91.4 & 96.3 & 89.7 & 91.1 & 90.6 & 92.5 & 67.1 & 89.5 \\ \hline Adapter & 96.3 & 89.9 & 94.7 & 87.8 & 83.4 & 90 & 89.7 & 65.2 & 87.1 \\ Prefix & 96.3 & 82.8 & 88.9 & 85.5 & 78.3 & 83.5 & 85.4 & 42.7 & 80.4 \\ BiFit & 95.8 & 89.5 & 93.5 & 88.5 & 86.2 & 90.7 & 88.6 & 64.2 & 87.1 \\ LoRA & 96.2 & 90.6 & 94.9 & 89.1 & 91.2 & 91.1 & 91.1 & 67.4 & 88.9 \\ \(\mathcal{S}_{4}\)**-3b-model** & **97.2\(\mathbf{1}\)**, \(\mathbf{9}\) & **61.6\(\mathbf{1}\)**, \(\mathbf{9}\) & **66.7\(\mathbf{1}\)**, \(\mathbf{8}\)** & **91.5\(\mathbf{1}\)**, \(\mathbf{5}\) & **91.5\(\mathbf{2}\)**, \(\mathbf{8}\)** & **91.5\(\mathbf{2}\)**, \(\mathbf{5}\) & **91.9\(\mathbf{2}\)**, \(\mathbf{0}\) & **69.7\(\mathbf{3}\)**, \(\mathbf{4}\) & **89.9** \\ \hline \hline \end{tabular} \end{table} Table 4: Performances of different tuning methods on the GLUE datasets using the T5-base (upper part) and T5-3b (lower part) pretrained backbone models, respectively. The results are averaged over 20 random runs (with standard deviations as subscripts). The \(\mathcal{S}_{4}\)-model and the \(\mathcal{S}_{4}\)-3b-model perform significantly better than the second-best PEFT methods in all the eight datasets at the significance level \(p<0.05(*)\) or even \(p<0.01(**)\). \begin{table} \begin{tabular}{c|c c c c c c c c|c} \hline \hline **Tunable Groups** & **SST-2** & **MNLI** & **QNL1** & **QQP** & **RTE** & **STS-B** & **MRPC** & **CoLA** & **Avg** \\ \hline \hline \(G_{1}\) & 82.6 & 72.1 & 77.6 & 70.6 & 65.3 & 71.9 & 77.6 & 27.6 & 68.2 \\ \(G_{2}\) & 83.3 & 72.8 & 77.5 & 72.8 & 63.6 & 72.8 & 77.5 & 27.5 & 68.4 \\ \(G_{3}\) & 83.6 & 73.3 & 78.2 & 73.3 & 66.4 & 71.3 & 77.9 & 22.9 & 68.4 \\ \(G_{4}\) & 83.2 & 73.0 & 77.9 & 73.7 & 63.9 & 72.0 & 77.9 & 27.9 & 68.7 \\ \(G_{1},G_{2}\) & 83.5 & 73.2 & 78.0 & 75.4 & 67.7 & 73.2 & 78.0 & 28.0 & 69.6 \\ \(G_{3},G_{4}\) & 87.8 & 74.6 & 78.3 & 76.9 & 68.6 & 74.3 & 78.3 & 28.3 & 70.7 \\ \(G_{1},G_{2},G_{3}\) & 86.0 & 75.8 & 79.0 & 77.8 & 71.8 & 78.8 & 79.0 & 33.0 & 72.6 \\ \(G_{2},G_{3},G_{4}\) & 85.2 & 76.6 & 79.1 & 78.6 & 70.1 & 77.6 & 79.1 & 31.9 & 72.2 \\ \(\mathbf{G_{1},G_{2},G_{3},G_{4}}\) & **88.3** & **77.4** & **82.1** & **81.5** & **74.9** & **79.4** & **81.4** & **34.3** & **74.9** \\ \hline \hline \end{tabular} \end{table} Table 3: Average performances (low-compute, low-epoch regime: 100 random models, 3 tuning epochs) on the GLUE datasets using the T5-base pretrained backbone model. We compare adding different tunable group constraints to the \(\mathcal{S}_{2}\) design space. including RoBERTa-base/large [12] and BART-base/large [13]. We use the default settings. We set the total number of trainable parameters (in the percentage of that in the backbone model) by following [12]. Specifically, this value is set to 0.5% for Adapter, Prefix, LoRA, and our methods, and 0.1% for BitFit. For all the experiments, we followed [12] to set the linear decay scheduler with a warmup ratio of 0.06 for training. The batch size was 128 for base models and 64 for large models. The maximum learning rate was \(5e-5\) and the maximum number of training epochs was set to be either \(5\) or \(10\). All the experiments were performed using 8 A100 GPUs. ### Effectiveness on GLUE with T5 Backbones With our discovered design patterns, we fine-tune T5-base (\(\mathcal{S}_{4}\)-model) and T5-3b (\(\mathcal{S}_{4}\)-3b-model) on GLUE and compare them with all the baseline methods. The results are shown in Table 4, where the key measure is the GLUE average performance (last column). We find that our \(\mathcal{S}_{4}\)-model and \(\mathcal{S}_{4}\)-3b-model consistently outperform the investigated methods in the key measure. By tuning only \(0.5\%\) parameters, our methods even outperform the full fine-tuning baseline where all the parameters are tuned, indicating the effectiveness of our discovered PEFT design patterns. ### General Effectiveness on GLUE with RoBERTa Backbones We directly apply the \(\mathcal{S}_{4}\)-model and \(\mathcal{S}_{4}\)-3b-model (adopting design patterns discovered using T5-base and T5-3b) to fine-tune the RoBERTa-base and RoBERTa-large pretrained backbone models (with no extra discovery process), respectively. We keep all the other settings the same and evaluate them on GLUE datasets. We also compare with variant methods randomly sampled from two de \begin{table} \begin{tabular}{c|c c} \hline \hline **Method** & **XSUM(R-1/2/L)** & **en-ro (BLEU)** \\ \hline \hline full & 40.5/19.2/34.8 & 34.5 \\ \hline Adapter & 37.7/17.9/33.1 & 33.3 \\ Prefix & 38.2/18.4/32.4 & 33.8 \\ BitFit & 37.2/17.5/31.4 & 33.2 \\ LoRA & 38.9/18.6/33.5 & 33.6 \\ PA & 39.3/18.7/33.8 & 33.8 \\ \(\mathcal{S}_{4}\)**-model** & **40.2/19.3/34.2** & **34.1** \\ \hline \hline full & 45.1/22.3/37.2 & 37.9 \\ \hline Adapter & 43.8/20.8/35.7 & 35.3 \\ Prefix & 43.4/20.4/35.5 & 35.6 \\ BitFit & 42.8/18.7/33.2 & 35.2 \\ LoRA & 42.9/19.4/34.8 & 35.8 \\ PA & 43.9/20.6/35.6 & 36.4 \\ \(\mathcal{S}_{4}\)**-3b-model** & **44.3/21.7/36.8** & **37.2** \\ \hline \hline \end{tabular} \end{table} Table 6: Performances of different tuning methods on generation tasks (XSUM and en-ro) using the BART-base (upper part) and BART-large (lower part) pretrained backbone models. \begin{table} \begin{tabular}{c|c c c c c c c c|c} \hline \hline **Method** & **SST-2** & **MNLI** & **QNLI** & **QQP** & **RTE** & **STS-B** & **MRPC** & **CoLA** & **Average** \\ \hline \hline full & 94.8 & 87.6 & 92.8 & 91.9 & 80.8 & 90.3 & 90.2 & 63.6 & 86.5 \\ \hline Adapter & 94.2 & 87.1 & 93.1 & 90.2 & 71.5 & 89.7 & 88.5 & 60.8 & 84.4 \\ Prefix & 94.0 & 86.8 & 91.3 & 90.5 & 74.5 & 90.3 & 88.2 & 61.5 & 84.6 \\ BitFit & 93.7 & 84.8 & 91.3 & 84.5 & 77.8 & **90.8** & 90.0 & 61.8 & 84.3 \\ LoRA & **94.9** & 87.5 & 93.1 & 90.8 & 83.1 & 90.0 & 89.6 & 62.6 & 86.4 \\ \(\mathcal{S}_{0}\)-model & 94.2 & 95.3 & 90.4 & 90.6 & 75.6 & 89.6 & 88.0 & 60.9 & 85.6 \\ \(\mathcal{S}_{0}\)-model & 94.3 & 87.2 & 92.8 & 91.0 & 81.8 & 90.3 & 89.2 & 63.2 & 86.2 \\ \(\mathcal{S}_{4}\)**-model** & 94.3 & 87.6 & **93.4**\({}^{*}_{1,2}\) & **91.6**\({}^{*}_{1,2}\) & **85.8**\({}^{*}_{1,*}\) & 90.4**\({}^{*}_{2,0}\) & **90.0**\({}^{*}_{1,*}\) & **63.2**\({}^{*}_{3,5}\) & **87.1** \\ \hline \hline full & 96.4 & 90.2 & 94.7 & 92.2 & 86.6 & 92.4 & 90.9 & 68.0 & 88.9 \\ \hline Adapter & 96.6 & 90.5 & 94.8 & 91.7 & 80.1 & 92.1 & 90.9 & 67.8 & 88.1 \\ Prefix & 95.7 & 87.6 & 92.1 & 88.7 & 82.3 & 89.6 & 87.4 & 62.8 & 85.7 \\ BitFit & 96.1 & 88.0 & 93.4 & 90.2 & 86.2 & 90.9 & **92.7** & 64.2 & 87.7 \\ LoRA & 96.2 & 90.6 & 94.7 & 91.6 & **87.4** & 92.0 & 89.7 & 68.2 & 88.8 \\ \(\mathcal{S}_{0}\)-model & 95.5 & 86.5 & 92.3 & 89.8 & 84.6 & 89.2 & 86.3 & 61.2 & 85.6 \\ \(\mathcal{S}_{3}\)-model & 96.3 & 89.4 & 93.8 & 90.2 & 85.9 & 90.8 & 90.9 & 63.4 & 87.6 \\ \(\mathcal{S}_{4}\)**-3b-model** & **96.6**\({}^{*}_{1,3}\) & **90.8**\({}^{*}_{1,1}\) & **95.-1\({}^{**}_{0.8}\)** & **92.0**\({}^{*}_{1,2}\) & **87.2**\({}_{2.8}\) & **92.3**\({}^{*}_{2,2}\) & 91.8**\({}^{*}_{1,8}\) & **68.4**\({}^{*}_{3,2}\) & **89.3** \\ \hline \hline \end{tabular} \end{table} Table 5: Performances of different tuning methods on GLUE datasets using the RoBERTa-base (upper part) and RoBERTa-large (lower part) pretrained backbone models. The results are averaged over 20 random runs (with standard deviations as subscripts). Here we also include two baselines: (i) \(\mathcal{S}_{0}\)-_model, where all the designs are randomly selected for RoBERTa as in the \(\mathcal{S}_{0}\) design space; (ii) \(\mathcal{S}_{3}\)-model, where strategies are randomly assigned to different RoBERTa layer groups as in the \(\mathcal{S}_{3}\) design space. The \(\mathcal{S}_{4}\)-model and \(\mathcal{S}_{4}\)-3b-model perform significantly better than the second-best PEFT methods in all the eight datasets at the significance level \(p<0.05(*)\) or even \(p<0.01(**)\)._ sign spaces: (i) \(\mathcal{S}_{0}\)_-model_, where all the designs are randomly selected for RoBERTa as in \(\mathcal{S}_{0}\); (ii) \(\mathcal{S}_{3}\)_-model_, where strategies are randomly assigned to different RoBERTa layer groups as in \(\mathcal{S}_{3}\). Table 5 shows that (i) the design patterns (adopted by \(\mathcal{S}_{4}\)-model and \(\mathcal{S}_{4}\)-3b-model) discovered using T5 models are applicable to the RoBERTa backbone models and outperform the investigated methods in GLUE average performances with no extra discovery process;(ii) improved performances from \(\mathcal{S}_{0}\)-models, \(\mathcal{S}_{3}\)-models, to \(\mathcal{S}_{4}\)-(3b)-models support adding more constraints in the pattern discovery process (Section 4). ### General Effectiveness on Generation Tasks with BART Backbones Like in Section 5.3, we further directly apply the \(\mathcal{S}_{4}\)-model and \(\mathcal{S}_{4}\)-3b-model (adopting design patterns discovered using T5-base and T5-3b) to fine-tune the BART-base and BART-large pretrained backbone models (without additional discovery process.), respectively. We evaluate the models on two generation tasks: summarization (XSUM) and machine translation (en-ro) following He et al. (2022). We also compare with PA (parallel adapter) using the same number of trainable parameters (He et al., 2022). Table 6 shows that our methods, although adopting design patterns discovered from classification tasks using T5, still outperform investigated PEFT strategies on generation tasks with different BART backbones. ## 6 Conclusion PEFT adapts knowledge in pretrained models to down-stream tasks in a more parameter-efficient fashion. Instead of focusing on designing another strategy in the first place, we introduced PEFT design spaces. We empirically discovered several design patterns in PEFT. These design patterns led to new PEFT methods. Experiments showed that these methods consistently outperform investigated PEFT strategies across different backbone models and different tasks in natural language processing.
2305.05168
Quantum flow algorithms for simulating many-body systems on quantum computers
We conducted quantum simulations of strongly correlated systems using the quantum flow (QFlow) approach, which enables sampling large sub-spaces of the Hilbert space through coupled eigenvalue problems in reduced dimensionality active spaces. Our QFlow algorithms significantly reduce circuit complexity and pave the way for scalable and constant-circuit-depth quantum computing. Our simulations show that QFlow can optimize the collective number of wave function parameters without increasing the required qubits using active spaces having an order of magnitude fewer number of parameters.
Karol Kowalski, Nicholas P. Bauman
2023-05-09T04:30:36Z
http://arxiv.org/abs/2305.05168v2
# Quantum flow algorithms for simulating many-body systems on quantum computers ###### Abstract We conducted quantum simulations of strongly correlated systems using the quantum flow (QFlow) approach, which enables sampling large sub-spaces of the Hilbert space through coupled eigenvalue problems in reduced dimensionality active spaces. Our QFlow algorithms significantly reduce circuit complexity and pave the way for scalable and constant-circuit-depth quantum computing. Our simulations show that QFlow can optimize the collective number of wave function parameters without increasing the required qubits using active spaces having an order of magnitude fewer number of parameters. _Introduction.--_ The development of quantum computing has grabbed the attention of the many-body chemistry and physics communities with the promise to provide exponential speed-ups over traditional computing for problems such as solving the electronic Schrodinger equation for ground and excited states or the time-dependent equation for studying dynamics. For the electronic problem, the two salient quantum algorithms for determining energetics with the electronic Hamiltonian are quantum phase estimation (QPE)[1; 2; 3; 4; 5; 6] and variational quantum eigensolver (VQE).[7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] Both algorithms are seized by complexities that prevent routine calculations of meaningful problems that plague traditional computing. These complexities result from the inherently large dimensionality needed to provide accurate and reliable results. For QPE, this manifests in circuit depths far beyond what is achievable in the noisy intermediate-scale quantum (NISQ) device era of quantum computing. For VQE, a measure of complexity is the number of parameters from a given ansatz currently optimized using traditional computing algorithms. The progress in enabling quantum computing technologies is contingent not only on the advances in the design of quantum materials but also on the ability to adapt to new methodological advances in the theory of correlated many-body systems. Dimensionality-reducing techniques previously developed can be largely categorized into two groups. The first is truncation methods which discard parameters or subspaces of the Hilbert space that seem irrelevant. This approach is rudimentary, can lead to significant errors, and is trustworthy for only a small set of problems. The second category is preprocessing procedures that utilize low-cost traditional approximations to incorporate correlation and many-body effects into low-dimensionality operators.[19] The authors have success in this category with the development of the coupled cluster (CC)-based downfolding formalism which allows one to incorporate dynamical correlation effects from large Hilbert spaces into tractable effective Hamiltonians for a small subspace of the original problem. This letter describes and provides numerical evidences for a new dimensionality-reducing technique called the quantum flow (QFlow) approach. The QFlow algorithm is a hybrid computational workflow that integrates the most appealing quantum and classical computing features by partitioning large sub-spaces of the Hilbert space into coupled eigenvalue problems in reduced dimensionality active spaces.[20] With the coupled eigenvalue subproblems, the dimensionality of a given problem is reduced to the dimensionality of the largest active space. Using modest active spaces, we demonstrate that QFlow can efficiently recover the corresponding energetics of the full problem. It is a flexible workflow that we expected to play a pivotal role in applying many-body chemistry and physics on quantum computers during the transition from NISQ devices to fully-fledged error-corrected quantum computing. CC Theory and Quantum Flows.--_ The CC theory [21; 22; 23; 24; 25; 26] has evolved into a one of the most prominent formalisms to describe correlated systems. In the single-reference variant (SR-CC), the ground-state wave function \(|\Psi\rangle\) is defined by the exponential Ansatz \[|\Psi\rangle=e^{T}|\Phi\rangle\, \tag{1}\] where \(T\) and \(|\Phi\rangle\) represent the cluster operator and reference function. Standard CC equations are given by the equations \[Qe^{-T}He^{T}|\Phi\rangle=0\, \tag{2}\] \[\langle\Phi|e^{-T}He^{T}|\Phi\rangle=E\, \tag{3}\] where \(Q\) is a projection operator onto excited Slater determinants generated by acting with \(T\) on \(|\Phi\rangle\) (the projection onto the reference function is denoted as \(P\)). Recently, it has been demonstrated that CC energies can be calculated by diagonalizing effective Hamiltonians in a class of complete active spaces (CASs) that are specific to the approximation of the \(T\) operator.[27; 20; 28] If, in the particle-hole formalism, CAS is generated by the excitation sub-algebra (b), the cluster operator \(T\) can be partitioned into internal (\(T_{\text{int}}(\mathfrak{h})\)); producing excitation within CAS) and external (\(T_{\text{ext}}(\mathfrak{h})\); producing excitation outside of CAS) and \(e^{T_{\text{int}}(\mathfrak{h})}|\Phi\rangle\) represents an exact-type expansion in the CAS. The CC energy can then be obtained as: \[H^{\text{eff}}(\mathfrak{h})e^{T_{\text{int}}(\mathfrak{h})}|\Phi\rangle=Ee^{T _{\text{int}}(\mathfrak{h})}|\Phi\rangle\, \tag{4}\] \[H^{\text{eff}}(\mathfrak{h})=(P+Q_{\text{int}}(\mathfrak{h}))e^{-T_{\text{ext }}(\mathfrak{h})}He^{T_{\text{ext}}(\mathfrak{h})}(P+Q_{\text{int}}(\mathfrak{ h}))\, \tag{5}\] where \(Q_{\text{int}}(\mathfrak{h})\) is a projection onto excited (with respect to \(|\Phi\rangle\)) excited configurations in the CAS. The above property of the CC formalism (referred to as the CC downfolding) is valid for _any type of sub-algebra_\(\mathfrak{h}\) (henceforth referred to as the sub-system embedding sub-algebras (SES)) described above. The invariance of the CC energy with respect to the choice of SES led to the concept of quantum flow and Equivalence Theorem,[20; 29] which states that when several SES problems represented by (5) are coupled into the flow, i.e., \[H^{\rm eff}(\mathfrak{h}_{\rm i})e^{T_{\rm int}(\mathfrak{h}_{\rm i})}|\Phi \rangle=Ee^{T_{\rm int}(\mathfrak{h}_{\rm i})}|\Phi\rangle\left(i=1,\ldots,M\right) \tag{6}\] (\(M\) stands for the number of CASs included in the flow), the corresponding solution is equivalent to the standard representation of the CC theory given by Eqs. (2) and (3) with the \(T\) operator defined as a combination of all unique excitations included in \(T_{\rm int}(\mathfrak{h}_{\rm i})\) (\(i=1,...,M\)) operators, i.e., \[T=\bigcup_{i=1}^{M}T_{\rm int}(\mathfrak{h}_{\rm i}). \tag{7}\] An important consequence of the Equivalence Theorem is the fact that for some choices of cluster operator, Eq. (7), high-dimensionality problem, Eqs. (2) and (3) can be replaced by a flow composed of reduced-dimensionality non-Hermitian eigenvalue problems. For each sub-algebra \(\mathfrak{h}_{\rm i}\) in the eigenvalue problem of Eq. 6, the effective Hamiltonian \(H^{\rm eff}(\mathfrak{h}_{\rm i})\) follows Eq. 5, where the external cluster operators \(T_{\rm ext}(\mathfrak{h}_{\rm i})\) are the collection of operators excluding \(T_{\rm int}(\mathfrak{h}_{\rm i})\), \[T_{\rm ext}(\mathfrak{h}_{\rm i})=T-T_{\rm int}(\mathfrak{h}_{\rm i}). \tag{8}\] To extend the SR-CC downfolding to the Hermitian case, we introduced the DUCC ground-state wave function Ansatz: \[|\Psi\rangle=e^{\sigma_{\rm ext}(\mathfrak{h})}e^{\sigma_{\rm int}(\mathfrak{ h})}|\Phi\rangle\, \tag{9}\] where \(\sigma_{\rm ext}(\mathfrak{h})\) and \(\sigma_{\rm int}(\mathfrak{h})\) are general-type anti-Hermitian operators. While \(\sigma_{\rm int}(\mathfrak{h})\) operator is defined by amplitudes carrying only active spin-orbital labels, the \(\sigma_{\rm ext}(\mathfrak{h})\) operators are defined by amplitudes with at least one in-active spin-orbital label. In particular, a similar result to Eq. (4) holds: \[H^{\rm eff}(\mathfrak{h})e^{\sigma_{\rm int}(\mathfrak{h})}|\Phi\rangle=Ee^{ \sigma_{\rm int}(\mathfrak{h})}|\Phi\rangle\, \tag{10}\] where \(\tau_{k}\) is a corresponding combination of the string of a creation/annihilation operators associated with the \(\theta^{X}(\mathfrak{h}_{\rm i})_{k}\) amplitude in the \(\sigma_{\rm int}(\mathfrak{h}_{\rm i})\) operator. Instead of performing full optimization for each active space included in the QFlow, we perform only one optimization step based on the gradient (14). We also employ UCC-type representation for each \(\sigma_{\rm ext}(\mathfrak{h}_{\rm i})\) needed to construct \(H^{\rm eff}(\mathfrak{h}_{\rm i})\) operator in Eq. (10). Results.--As a test system to demonstrate the performance of the QFlows techniques, we chose the H\({}_{n}\) linear chains of the hydrogen atoms: H6 and H8 models in small STO-3G basis set,[33] where one can vary the complexity of the ground-state wave function by changing the the H-H distances (\(R_{\mathrm{H-H}}\)) between adjacent atoms. For example, while for \(R_{\mathrm{H-H}}\)= 2.0 a.u. one deals with the weakly correlated case, for \(R_{\mathrm{H-H}}\)= 3.0 a.u., the system is strongly correlated and all Hartree-Fock orbitals used in simulations are non-negligible. This means that one cannot define a single small-dimensionality active space to capture all needed correlation effects for the \(R_{\mathrm{H-H}}\)= 3.0 a.u. case. Recently, the H\({}_{n}\) models have been used for validation of cutting-edge many-body numerical methodologies for treating correlated quantum systems.[34, 35, 36] We summarized QFlow results in Table 1 and in Figs. (2) and (3). For both systems, the QFlow included all active spaces defined by arbitrary two occupied active and two virtual active orbitals and four active electrons (the QFlow(4e,4o) model). For H6 and H8 systems QFlow integrated 9 and 36 active spaces, respectively. In Table 1, the QFlow(4e,4o) results are compared against exact diagonalization (ED) in the full space, in the primary active space (CAS-ED) consisting of the two highest energy occupied orbitals and two lowest energy unoccupied orbitals, and typical CC approximations including excitations from singles to quadruples (CCSD, CCSDT, and CCSDTQ).[26] It is evident that the QFlow algorithm significantly reduces errors of the CAS-ED method - a prevailing model for performing quantum simulations on NISQ-type devices. In the extreme case, the error of CAS-ED amounting to 279 mHartree for the H8 3.0 a.u. system is reduced by QFlow to 12.4 mHartree. Additionally, it should be noticed that for H8 3.0 a.u., the CCSD and CCSDT formulations experience variational collapse placing the ground-state energies significantly below the ED ones. For weak correlated H6 and H8 models (\(R_{\mathrm{H-H}}\)=2.0 a.u.), the QFlow results are within chemical accuracy error bars (less than 1.59 mHartree). In Fig. 2, we show energies (\(E(\mathrm{b}_{i})\)) calculated in the first four QFlow cycles for two geometries of H8. In both cases, we can observe that energies obtained in the first non-trivial cycle (second cycle) are considerably better than the CAS-ED energy for the primary active spaces (targeted in typical VQE simulations). The QFlow procedures with four electrons/four active orbital spaces limit the total pool of parameters, accounting for any difference between the converged QFlow energies and those obtained from the full ED. For example, higher-order excitations involving three or more different occupied or unoccupied orbitals cannot be accounted for because the active space does not allow it, having only two occupied and two unoccupied orbitals. The QFlow accuracies can be improved upon in two ways. The first is with larger active spaces, which will include a number of parameters. The other option is to use the spin-orbital definition of the active spaces (see Ref. [28]) that enable the inclusion of broader classes of three- and four-body correlation effects without increasing computational cost per active space. In Fig. 3, we discuss the discrepancies between the minimum and maximum values of \(E(\mathrm{b}_{i})\) for each cycle for H8 3.0 a.u. model. Despite the fact that in cycles two and three these discrepancies are substantial, in the following iterations, these discrepancies significantly decrease. For 20-the cycle, the discrepancy is less than 2.0 mHartree, which indicates that despite approximations associated with the non-commutative characters of cluster operators in QFlow, the energy invariance of the SR-CC flow (6) at the solution is approximately satisfied. In the QFlow simulations for the H8 system, we optimized 684 parameters using coupled computational blocks corresponding to active space eigenvalue problems that optimize at most 35 parameters. Figure 1: Schematic representation of the QFlows algorithms. All cluster amplitudes (Global Pool of CC amplitudes; (GPA)) are residing on classical computers. The effective Hamiltonians are formed on classical computers using GPA and encoded on quantum computers (light blue arrows). Quantum computers use these Hamiltonians to optimize internal excitations for a given active space and are used to update GPAs (dark blue lines). \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & H6 & H6 & H8 & H8 \\ & (2.0 a.u) & (3.0 a.u) & (2.0 a.u) & (3.0 a.u) \\ \hline HF & -3.1059 & -2.6754 & -4.1382 & -3.5723 \\ CAS-ED & -3.1669 & -2.8021 & -4.1906 & -3.6656 \\ CCSD & -3.2173 & -2.9673 & 4.2848 & 3.9727 \\ CCSDT & -3.2180 & -2.9692 & -4.2867 & -3.9784 \\ CCSDTQ & -3.2177 & -2.9574 & -4.2860 & -3.9439 \\ QFlow(4e,4o)1 & -3.2173 & -2.9521 & -4.2847 & -3.9322 \\ ED & -3.2177 & -2.9576 & -4.2860 & -3.9447 \\ \hline \hline \end{tabular} \end{table} Table 1: Converged QFlow energies for H6 and H8 benchmark systems at \(R_{\mathrm{H-H}}\)=2.0 a.u. and \(R_{\mathrm{H-H}}\)=3.0 a.u. corresponding to weakly and strongly correlated regimes, respectively. _Summary.--_ We provided numerical evidence that the quantum flow (QFlow) algorithm can efficiently sample large sub-spaces of the Hilbert space through coupled eigenvalue problems in reduced dimensionality active spaces. Using very modest active space sizes, we illustrated the utility of the QFlow procedure with the H6 and H8 hydrogen chains in weakly and strongly correlated regimes with errors within chemical accuracy for weakly correlated systems and relatively small errors for the strongly correlated systems. For the strongly correlated H8 model, we recover nearly 97% of the correlation using active spaces containing at most 5% of the number of optimized parameters compared to the full exact diagonalization. The examples in this paper are very conservative estimates of the dimensionality reduction that can be achieved with the QFlow algorithm. As quantum technology evolves and we transition from the noisy intermediate-scale quantum devices era to fully-fledged error-corrected quantum computing, the ability to adapt to new methodological advances and efficiently utilize hybrid computational resources is ever-important. The authors expect that the QFlow algorithm demonstrated in this letter will play a crucial role in expanding the envelope of many-body applications as quantum computing continues to evolve. This material is based upon work supported by the "Embedding QC into Many-body Frameworks for Strongly Correlated Molecular and Materials Systems" project, which is funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, the Division of Chemical Sciences, Geosciences, and Biosciences (under FWP 72689) and by Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy (under FWP )76213. This work used resources from the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for the U.S. Department of Energy under Contract DE-AC05-76RL01830.
2306.04726
Variational Formulation of Higher-order Guiding-center Vlasov-Maxwell Theory
Extended guiding-center Vlasov-Maxwell equations are derived under the assumption of time-dependent and inhomogeneous electric and magnetic fields that obey the standard guiding-center space-time-scale orderings. The guiding-center Vlasov-Maxwell equations are derived to second order, which contain dipole and quadrupole contributions to the guiding-center polarization and magnetization that include finite-Larmor-radius corrections. Exact energy-momentum conservation laws are derived from the variational formulation of these higher-order guiding-center Vlasov-Maxwell equations.
Alain J. Brizard
2023-06-07T18:44:09Z
http://arxiv.org/abs/2306.04726v4
# Variational Formulation of Higher-order Guiding-center Vlasov-Maxwell Theory ###### Abstract Extended guiding-center Vlasov-Maxwell equations are derived under the assumption of time-dependent and inhomogeneous electric and magnetic fields that obey the standard guiding-center space-time-scale orderings. The guiding-center Vlasov-Maxwell equations are derived to second order, which contain dipole and quadrupole contributions to the guiding-center polarization and magnetization that include finite-Larmor-radius corrections. Exact energy-momentum conservation laws are derived from the variational formulation of these higher-order guiding-center Vlasov-Maxwell equations. ## I Introduction The adiabatic invariance of the magnetic moment along a charged-particle orbit (with mass \(m\) and charge \(e\)) in a nonuniform magnetic field \(\mathbf{B}=B\,\widehat{\mathbf{b}}\) plays a crucial role in our understanding of the physical basis of the spatial confinement of a magnetized plasma over long time scales [1; 2]. In guiding-center theory [3; 4], the mathematical construction of the magnetic moment relies on the space-time scales \((L_{B},\omega^{-1})\) of the confining magnetic field to be long compared to the characteristic gyroradius \(\rho\) and the gyroperiod \(\Omega^{-1}=mc/eB\), respectively, leading to the small dimensionless ordering parameter [3] \[\epsilon_{B}\;\equiv\;\rho/L_{B}\;\sim\;\omega/\Omega\;\ll\;1. \tag{1}\] The traditional derivation of guiding-center theory relies on the existence of an ordering parameter defined by the dimensional mass-to-charge ratio \(m/e\)[4; 5], so that the gyroperiod \(\Omega^{-1}\propto m/e\) is assumed to be the shortest time scale. For practical applications in perturbation theory [6], however, it is convenient to replace the ratio \(m/e\) with \(\epsilon\,m/e=(\epsilon\,m)/e=m/(\epsilon^{-1}e)\), so that the dimensionless ordering parameter \(\epsilon\ll 1\) can either be viewed as a _renormalization_ of the particle's mass \(m\to\epsilon\,m\) or the particle's charge \(e\to e/\epsilon\) and it is quite common to assume \(\epsilon\sim\epsilon_{B}\). It is noteworthy, however, that the guiding-center approximation is still valid when the spatial ordering \(\rho/L_{B}\lesssim 1\) is not extremely small (as recently demonstrated in Ref. [7]). In earlier derivations of the guiding-center Vlasov equation (sometimes referred to as the drift-kinetic equation), a recursive solution of the Vlasov kinetic equation [8; 9; 10; 11] led to the drift-kinetic equation through an expansion of the Vlasov distribution function \(f=f_{0}+\epsilon\,f_{1}+\epsilon^{2}\,f_{2}+\cdots\), which yielded functional solutions \(f_{n}[f_{0}]\) (\(n\geq 1\)) in terms of the lowest-order gyroangle-independent solution \(f_{0}\). It was later shown that this recursive derivation is completely analogous to the Lie-transform derivation of the guiding-center Vlasov equation [12], which is adopted here because of its simplicity. We also note that, when Lie-transform perturbation methods are combined with variational formulations for the reduced Vlasov-Maxwell equations [13], the reduced Maxwell equations naturally incorporate reduced polarization and magnetization effects while exact energy-momentum conservation laws are derived by Noether method [14]. ### Motivation The inclusion of nonuniform, time-dependent electric fields in guiding-center theory has a long and rich history in plasma physics [3; 4]. In the present work, we use the standard ordering [11] for the electric field \(\mathbf{E}=\mathbf{E}_{\perp}+\epsilon\,E_{\parallel}\,\widehat{\mathbf{b}}\), where the parallel component \(E_{\parallel}\to\epsilon\,E_{\parallel}\) of the electric field is considered small compared to the perpendicular components \(\mathbf{E}_{\perp}\). In addition, we assume that the \(E\times B\) velocity \(c|\mathbf{E}_{\perp}|/B\) is comparable to particle's thermal velocity. The ability of electric fields to fundamentally modify the magnetic geometry that confines a laboratory plasma (e.g., by creating transport barriers [15; 16; 17; 18; 19] or in setting up rotating mirror magnetic geometries [20]) motivates the need to construct a guiding-center Vlasov-Maxwell theory that includes self-consistent time-dependent electric and magnetic fields, in which the transfer of energy and momentum between the electromagnetic fields and the confined plasma play a crucial role. In addition, for many situations of practical interest, the presence of a strong electric field is associated with strong plasma flows with steep sheared rotation profiles for which second-order effects (including finite-Larmor-radius effects), which must be included in a self-consistent guiding-center theory [18; 19; 21; 22; 23; 24]. Guiding-center equations of motion with second-order corrections in the presence of time-independent electric and magnetic fields were derived using Lie-transform perturbation method by Brizard [25] and Hahm [19], following the earlier work of Littlejohn [6]. More recently, these perturbation methods were also used by Miyato _et al._[26] and Madsen [27], who derived self-consistent guiding-center Vlasov-Maxwell equations that included guiding-center polarization and magnetization effects. Not all second-order effects were included in these models, however, and it is the purpose of the present work to derive a more complete higher-order guiding-center Vlasov-Maxwell theory, based on the Lie-transform perturbation derivation of higher-order guiding-center Lagrangian dynamics [28] for the case of time-dependent, nonuniform electric and magnetic fields that satisfy the guiding-center ordering (1). We note that a key difference between guiding-center Vlasov-Maxwell models considered here and gyrokinetic Vlasov-Maxwell models considered elsewhere (see Ref. [29] for a review), is that the electromagnetic fields \((\mathbf{E},\mathbf{B})\) considered here are not separated into time-independent weakly-nonuniform background fields and time-dependent fluctuating (i.e., turbulent) fields that may possibly have short spatial scales (satisfying the gyrokinetic ordering [29]). Hence, the guiding-center Vlasov-Maxwell energy and momentum are exactly conserved despite the fact that guiding-center Vlasov-Maxwell fields are time-dependent and nonuniform, with space-time scales (1) that satisfy the guiding-center orderings [3]. ### The need for higher-order guiding-center theory The previous variational derivations of a self-consistent guiding-center Vlasov-Maxwell model have been carried out up to first order in the dimensionless ordering parameter \(\epsilon\)[30; 31; 32; 33; 34; 35]. In Ref. [34], for example, the Hamiltonian structure of the first-order guiding-center Vlasov-Maxwell equations was given in terms of a guiding-center Hamiltonian functional and a functional bracket that satisfies the Jacobi property. The variational and Hamiltonian structures of the guiding-center Vlasov-Maxwell equations may prove useful in the implementation of structure-preserving numerical algorithms [36; 37; 38; 39; 40]. Recently, second-order terms in guiding-center Hamiltonian theory (in the absence of an electric field) were shown to be crucial [41] in assessing the validity of the guiding-center representation in determining whether guiding-center orbits were numerically faithful to the particle orbits in axisymmetric magnetic geometries, which partially confirmed earlier numerical studies in axisymmetric tokamak plasmas [42]. In particular, it was shown that a second-order correction associated with guiding-center polarization [43; 44; 45] was needed in order to obtain faithful guiding-center orbits. Indeed, without the inclusion of second-order effects, it was shown that, within a few bounce periods after leaving the same physical point in particle phase space, a first-order guiding-center orbit deviated noticeably from its associated particle orbit, while a second-order guiding-center orbit followed the particle orbit to a high degree of precision [41]. In addition, as initially reported by Belova _et al._[42], the guiding-center Hamiltonian formulation [41; 43] is a faithful representation of the particle toroidal angular momentum, which is an exact particle constant of motion in an axisymmetric magnetic field, only if second-order effects are included. See additional comments included in Sec. II.3 regarding the faithfulness of the guiding-center representation. From a purely theoretical point of view, it is therefore interesting to derive higher-order guiding-center Vlasov-Maxwell equations with accurate expressions for the guiding-center polarization and magnetization, which include finite-Larmor-radius (FLR) corrections. Through the application of the Noether method [14], we will also be able to explore how these higher-order effects modify the guiding-center energy-momentum conservation laws, e.g., how the Chew-Goldberger-Low (CGL) pressure tensor [8; 9; 10; 11] is modifed [see Eq. (97)]. ### Organization The remainder of the present work is organized as follows. In Sec. II, the extended guiding-center Hamilton equations of motion are derived in terms of the extended guiding-center Poisson bracket (derived by Lie-transform perturbation method in Ref. [28]), while the higher-order guiding-center Vlasov-Maxwell equations are derived from an Eulerian variational principle [14] in Sec. III, from which guiding-center polarization and magnetization are derived with FLR corrections. In Sec. IV, the exact energy-momentum conservation laws are derived from the guiding-center Noether equation obtained from the guiding-center Eulerian variational principle. The symmetry properties of the guiding-center stress tensor are also briefly discussed, and the guiding-center angular-momentum conservation law is derived at the lowest order, while a more extensive discussion of the asymmetry of the guiding-center stress tensor at higher order is left for future work. ## II Guiding-center Hamiltonian dynamics In this Section, we make use of the results of the Lie-transform perturbation analysis presented in Ref. [28], that yield the guiding-center phase-space extended one-form (expanded in terms of the mass-renormalization ordering \(m\to\epsilon\,m\)) \[\Gamma_{\rm gc} = \left(\frac{e}{c}\mathbf{A}\;+\;\epsilon\,\mathbf{\Pi}_{\rm gc} \right)\,\boldsymbol{\cdot}\mathbf{d}\mathbf{X}\;-\;W\,\mathsf{d}t \tag{2}\] \[+\;\epsilon^{2}\,J\,(\mathsf{d}\theta-\mathbf{R}\boldsymbol{ \cdot}\mathbf{d}\mathbf{X}-\mathcal{S}\,\mathsf{d}t)\] \[\equiv \frac{e}{c}\,\mathbf{A}^{*}\,\boldsymbol{\cdot}\mathbf{d} \mathbf{X}\;+\;\epsilon^{2}\,J\,\mathsf{d}\theta\;-\;W^{*}\,\mathsf{d}t,\] where we introduced the definitions \[\frac{e}{c}\,\mathbf{A}^{*} \equiv \frac{e}{c}\,\mathbf{A}+\epsilon\,\mathbf{\Pi}_{\rm gc}-\epsilon ^{2}\,J\,\mathbf{R}, \tag{3}\] \[W^{*} \equiv W\;+\;\epsilon^{2}\,J\,\mathcal{S}. \tag{4}\] Here, the guiding-center symplectic momentum \[\epsilon\,\mathbf{\Pi}_{\rm gc} = \epsilon\;P_{\parallel}\,\widehat{\mathsf{b}}\;+\;\epsilon\, \mathbf{\Pi}_{\rm E}\;-\;\frac{\epsilon^{2}}{2}\;J\,\nabla\,\boldsymbol{ \times}\,\widehat{\mathsf{b}} \tag{5}\] includes the first-order \(E\times B\) momentum \[\mathbf{\Pi}_{\rm E}\;\equiv\;m\,{\bf u}_{\rm E}\;=\;{\bf E}\,\mathbf{\times}\,\frac{e \widehat{\sf b}}{\Omega}, \tag{6}\] and the second-order polarization correction \(\mathbf{\Pi}_{\rm pol}\equiv-\,\frac{1}{2}\,J\,\nabla\,\mathbf{\times}\,\widehat{\sf b}\) introduced by Tronko and Brizard [43] in order to obtain an exact Lie-transform derivation of the guiding-center polarization derived by Kaufman [45]. In addition, the presence of the gyrogauge fields \(({\cal S},{\bf R})\) in Eqs. (2)-(4) guarantee gyrogauge invariance [46; 6]. The extended guiding-center Hamiltonian, on the other hand, is expressed as \[{\cal H}_{\rm gc}\;=\;e\,\Phi+\epsilon\,K_{\rm gc}+\epsilon^{2}\;J\,{\cal S}-W^ {*}\;\equiv\;e\,\Phi^{*}-W^{*}, \tag{7}\] where the guiding-center kinetic energy in the drifting frame is \[\epsilon\;K_{\rm gc} = \epsilon\left(\mu\,B\;+\;\frac{P_{\parallel}^{2}}{2m}\;+\;\frac{ m}{2}\,|{\bf u}_{\rm E}|^{2}\right) \tag{8}\] \[-\;\nabla\,\mathbf{\cdot}\,\left(\frac{\epsilon^{2}}{2}\,J\,\widehat {\sf b}\,\mathbf{\times}\,{\bf u}_{\rm E}\right),\] which includes the second-order FLR correction to the electrostatic potential energy \(e\,\Phi\). This FLR correction may be decomposed as \[-\;\nabla\,\mathbf{\cdot}\,\left(\frac{1}{2}J\,\widehat{\sf b}\,\mathbf{\times}\,{\bf u }_{\rm E}\right)\;=\;-\;\frac{J}{2}\left(\nabla\,\mathbf{\times}\,\widehat{\sf b} \,\mathbf{\cdot}\,{\bf u}_{\rm E}\;-\;\widehat{\sf b}\,\mathbf{\cdot}\,\nabla\,\mathbf{ \times}\,{\bf u}_{\rm E}\right),\] which includes the standard second-order guiding-center Hamiltonian \(\frac{1}{2}\,J\,\widehat{\sf b}\,\mathbf{\cdot}\,\mathbf{\times}\,{\bf u}_{\rm E}\)[6; 19; 27; 47], and the new guiding-center polarization correction \(\mathbf{\Pi}_{\rm pol}\,\mathbf{\cdot}\,{\bf u}_{\rm E}\), which is ignored by these previous works. In the remainder of the paper, we will remove the ordering parameter \(\epsilon\) and return to the physical mass \(\epsilon\,m\to m\), while we may occasionally refer to this mass-renormalization ordering in what follows. ### Extended guiding-center Poisson bracket The extended guiding-center Poisson bracket \(\{\;,\;\}_{\rm gc}\) is obtained by, first, constructing an 8\(\times\)8 matrix out of the components of the extended guiding-center Lagrange two-form \(\mathbf{\omega}_{\rm gc}={\rm d}\Gamma_{\rm gc}\) and, then, invert this matrix to obtain the extended guiding-center Poisson matrix, whose components are the fundamental brackets \(\left\{Z^{\alpha},Z^{\beta}\right\}_{\rm gc}\). From these components, we obtain the extended guiding-center Poisson bracket [28] \[\{{\cal F},{\cal G}\}_{\rm gc} = \frac{{\bf B}^{*}}{B_{\parallel}^{*}}\,\mathbf{\cdot}\,\left(\nabla^ {*}{\cal F}\,\frac{\partial{\cal G}}{\partial P_{\parallel}}-\frac{\partial{ \cal F}}{\partial P_{\parallel}}\,\nabla^{*}{\cal G}\right) \tag{9}\] \[- \frac{c\widehat{\sf b}}{eB_{\parallel}^{*}}\,\mathbf{\cdot}\,\nabla^ {*}{\cal F}\,\mathbf{\times}\,\nabla^{*}{\cal G}\] \[+ \left(\frac{\partial{\cal F}}{\partial\theta}\,\frac{\partial{ \cal G}}{\partial J}-\frac{\partial{\cal F}}{\partial J}\,\frac{\partial{ \cal G}}{\partial\theta}\right)\] \[+ \left(\frac{\partial{\cal F}}{\partial W}\,\frac{\partial^{*}{ \cal G}}{\partial t}-\frac{\partial^{*}{\cal G}}{\partial t}\,\frac{\partial{ \cal G}}{\partial W}\right),\] where \[\frac{e}{c}\,{\bf B}^{*}\;=\;\frac{e}{c}\,{\bf B}\;+\;\nabla\,\mathbf{\times}\,{ \bf\Pi}_{\rm gc}\;-\;J\,\nabla\,\mathbf{\times}\,{\bf R}, \tag{10}\] and the guiding-center Jacobian is \({\cal J}_{\rm gc}=(e/c)\,B_{\parallel}^{*}\), where \(B_{\parallel}^{*}\equiv\widehat{\sf b}\,\mathbf{\cdot}\,{\bf B}^{*}\). In addition, we introduced the definitions \[\frac{\partial^{*}}{\partial t} \equiv \frac{\partial}{\partial t}\;+\;{\cal S}\;\frac{\partial}{ \partial\theta}, \tag{11}\] \[\nabla^{*} \equiv \nabla\;+\;{\bf R}^{*}\;\frac{\partial}{\partial\theta}\;-\; \left(\frac{e}{c}\frac{\partial{\bf A}^{*}}{\partial t}+J\,\nabla{\cal S} \right)\frac{\partial}{\partial W}, \tag{12}\] where \({\bf R}^{*}\equiv{\bf R}+\frac{1}{2}\,\nabla\,\mathbf{\times}\,\widehat{\sf b}\). We note that the Poisson bracket (9) can be expressed in divergence form as \[\{{\cal F},{\cal G}\}_{\rm gc}\;=\;\frac{1}{B_{\parallel}^{*}}\,\frac{\partial} {\partial Z^{\alpha}}\left(B_{\parallel}^{*}\;{\cal F}\;\left\{Z^{\alpha},\,{ \cal G}\right\}_{\rm gc}\right), \tag{13}\] and that it automatically satisfies the Jacobi identity \[\Big{\{}{\cal F},\;\{{\cal G},{\cal K}\}\Big{\}}+\Big{\{}{\cal G},\;\{{\cal K},{\cal F}\}\Big{\}}+\Big{\{}{\cal K},\;\{{\cal F},{\cal G}\}\Big{\}}=0. \tag{14}\] Next, we note that the operators (11) and (12) contain the gyrogauge-invariant combinations \(\partial/\partial t+{\cal S}\,\partial/\partial\theta\) and \(\nabla+{\bf R}\;\partial/\partial\theta\), while Eqs. (10) and (12) include the gyrogauge-independent vector fields (see App. A) \[\nabla\,\mathbf{\times}\,{\bf R} = -\,\frac{1}{2}\,\epsilon_{ijk}\,b^{i}\;\nabla b^{j}\times\nabla b^{ k}, \tag{15}\] \[\nabla{\cal S}\;-\;\frac{\partial{\bf R}}{\partial t} = -\,\nabla\widehat{\sf b}\,\mathbf{\times}\,\widehat{\sf b}\,\cdot\, \frac{\partial\widehat{\sf b}}{\partial t}, \tag{16}\] where \(\epsilon_{ijk}\) denotes the completely-antisymmetric Levi-Civita tensor. ### Guiding-center Hamilton equations The guiding-center Hamilton equations include the guiding-center velocity \[\dot{\bf X}\;\equiv\;\{{\bf X},\;{\cal H}_{\rm gc}\}_{\rm gc}\;=\;\frac{P_{ \parallel}}{m}\,\frac{{\bf B}^{*}}{B_{\parallel}^{*}}\,+\;{\bf E}^{*}\,\mathbf{ \times}\,\frac{c\widehat{\sf b}}{B_{\parallel}^{*}}, \tag{17}\] where \[\widehat{\sf b}\,\mathbf{\cdot}\,\dot{\bf X}\;=\;\partial{\cal H}_{\rm gc}/\partial P _{\parallel}\;=\;P_{\parallel}/m \tag{18}\] defines the parallel guiding-center velocity, the guiding-center parallel force \[\dot{P}_{\parallel}\;\equiv\;\{P_{\parallel},\;{\cal H}_{\rm gc}\}_{\rm gc}\;= \;e\,{\bf E}^{*}\,\mathbf{\cdot}\,\frac{{\bf B}^{*}}{B_{\parallel}^{*}}, \tag{19}\] where the modified electric field is represented as \[e\,{\bf E}^{*} = -\;e\,\nabla\Phi^{*}\;-\;\frac{e}{c}\,\frac{\partial{\bf A}^{*}} {\partial t} \tag{20}\] \[= e\,{\bf E}-\frac{\partial{\bf\Pi}_{\rm gc}}{\partial t}-\nabla K _{\rm gc}+J\left(\frac{\partial{\bf R}}{\partial t}-\nabla{\cal S}\right),\] and the gyroangle angular velocity \[\dot{\theta} \equiv \{\theta,\ {\cal H}_{\rm gc}\}_{\rm gc}\ =\ \frac{\partial K_{\rm gc}}{ \partial J}\ +\ {\cal S}\ +\ \dot{\bf X}\!\cdot\!{\bf R}^{*} \tag{21}\] \[= \Omega\ -\ \frac{1}{2}\ \nabla\!\cdot\!\left(\widehat{\bf b} \!\times\!{\bf u}_{\rm E}\right)\ +\ {\cal S}\ +\ \dot{\bf X}\!\cdot\!{\bf R}^{*}.\] We note that the guiding-center Hamilton equations (17) and (19) can also be derived from the guiding-center Lagrangian [28] \[L_{\rm gc}\ =\ \frac{e}{c}\,{\bf A}^{*}\!\cdot\!\dot{\bf X}\ +\ J\,\dot{\theta}\ - \ e\,\Phi^{*}. \tag{22}\] as the guiding-center Euler-Lagrange equation \[\dot{P}_{\parallel}\,\widehat{\bf b}\ =\ e\,{\bf E}^{*}\ +\ \frac{e}{c}\,\dot{\bf X}\! \times\!{\bf B}^{*} \tag{23}\] together with Eq. (18). Finally, we note that the guiding-center Jacobian \({\cal J}_{\rm gc}=(e/c)\,B_{\parallel}^{*}\) satisfies the guiding-center Liouville equation \[\frac{\partial B_{\parallel}^{*}}{\partial t}\ =\ -\ \nabla\!\cdot\!\left(B_{ \parallel}^{*}\ \dot{\bf X}\right)\ -\ \frac{\partial}{\partial P_{\parallel}}\left(B_{ \parallel}^{*}\ \dot{P}_{\parallel}\right), \tag{24}\] where \[\nabla\!\cdot\!\left(B_{\parallel}^{*}\ \dot{\bf X}\right) = \nabla\times{\bf E}^{*}\!\cdot\!c\widehat{\bf b}-e\,{\bf E}^{*} \!\cdot\!\frac{c}{c}\nabla\times\widehat{\bf b}\] \[= -\ \widehat{\bf b}\!\cdot\!\frac{\partial{\bf B}^{*}}{\partial t }-e\,{\bf E}^{*}\!\cdot\!\frac{\partial{\bf B}^{*}}{\partial P_{\parallel}}\] and \[\frac{\partial}{\partial P_{\parallel}}\left(B_{\parallel}^{*} \ \dot{P}_{\parallel}\right) = e\,{\bf E}^{*}\!\cdot\!\frac{\partial{\bf B}^{*}}{\partial P_{ \parallel}}+{\bf B}^{*}\!\cdot\!e\frac{\partial{\bf E}^{*}}{\partial P_{ \parallel}}\] \[= e\,{\bf E}^{*}\!\cdot\!\frac{\partial{\bf B}^{*}}{\partial P_{ \parallel}}-{\bf B}^{*}\!\cdot\!\frac{\partial\widehat{\bf b}}{\partial t},\] where we made use of the modified Faraday's law \(\partial{\bf B}^{*}/\partial t=-\,c\,\nabla\!\times\!{\bf E}^{*}\). ### Guiding-center canonical toroidal angular momentum By applying Noether's method [48] on the guiding-center Lagrangian (22), we immediately find that, in the case of an axisymmetric magnetized plasma (i.e., \(\partial L_{\rm gc}/\partial\phi\equiv 0\)), the guiding-center canonical toroidal angular momentum \[P_{\phi} \equiv \frac{\partial L_{\rm gc}}{\partial\dot{\phi}}\ =\ \frac{e}{c}\,{\bf A}^{*}\!\cdot\!\frac{ \partial{\bf X}}{\partial\phi} \tag{25}\] \[= \frac{e}{c}\ (A_{\phi}\ -\ \mathbf{\rho}_{\rm E}\! \cdot\!\nabla A_{\phi})\ +\ P_{\parallel}\,b_{\phi}\] \[-\ J\left(b_{z}\ +\ \frac{1}{2}\,\nabla\!\times\!\widehat{\bf b}\! \cdot\!\frac{\partial{\bf X}}{\partial\phi}\right),\] is a constant of the guiding-center motion, where \((A_{\phi},b_{\phi})\) denote covariant toroidal components and we used the identity \({\bf R}\!\cdot\!\partial{\bf X}/\partial\phi\equiv b_{z}\)[47], which makes \(P_{\phi}\) gyrogauge invariant. Here, we used the identity \({\bf B}\!\times\!\partial{\bf X}/\partial\phi=-\,\nabla A_{\phi}\) and we introduced the electric polarization displacement \(\mathbf{\rho}_{\rm E}\equiv(c/B\Omega)\,{\bf E}_{\perp}\) in writing \({\bf\Pi}_{\rm E}\!\cdot\!\partial{\bf X}/\partial\phi=-\,(e/c)\,\mathbf{ \rho}_{\rm E}\!\cdot\!\nabla A_{\phi}\). We note that the issue of the faithfulness of the guiding-center representation, which was initiated by Belova _et al._[42] in their numerical studies of particle and guiding-center orbits of energetic ions in axisymmetric tokamak geometry (in the absence of an electric field), focussed on whether the guiding-center pullback of the guiding-center canonical toroidal angular momentum \({\sf T}_{\rm gc}P_{\phi}\) accurately describes the particle canonical toroidal angular momentum \(p_{\phi}\equiv\partial L/\partial\dot{\phi}\) defined in terms of the particle Lagrangian \(L\). Belova _et al._[42] noted that the numerical plot of \({\sf T}_{\rm gc}P_{\phi}\) for energetic ions (with \(\epsilon_{B}\lesssim 0.2\)) shows excellent invariance properties equal to the true particle invariant \(p_{\phi}\) (within numerical accuracy) only when second-order (\(\epsilon^{2}\)) corrections are included in the guiding-center canonical toroidal angular momentum (25) and the guiding-center pull-back operator \({\sf T}_{\rm gc}\) (defined in terms of generators of the guiding-center phase-space transformation). These results were recently confirmed analytically by Brizard and Hodgeman [41] for general axisymmetric magnetic geometry. ## III Variational formulation of the guiding-center Vlasov-Maxwell equations In this Section, we present the variational formulation of the guiding-center Vlasov-Maxwell equations based on the Eulerian guiding-center action functional [14] \[{\cal A}_{\rm gc}=-\,\int{\cal F}_{\rm gc}{\cal H}_{\rm gc}\,d^{8}Z+\int\left(| {\bf E}|^{2}-|{\bf B}|^{2}\right)\frac{d^{4}x}{8\pi}, \tag{26}\] where the extended guiding-center Vlasov phase-space density \[{\cal F}_{\rm gc}\ \equiv\ {\cal J}_{\rm gc}\,F\ \delta(W-H_{\rm gc}) \tag{27}\] includes the guiding-center Vlasov function \(F\) and the guiding-center Jacobian \({\cal J}_{\rm gc}\), while the delta-function ensures that the extended guiding-center Hamiltonian motion takes place on the energy surface \({\cal H}_{\rm gc}=H_{\rm gc}-W\equiv 0\). This expression leads to the integral constraint \[\int{\cal F}_{\rm gc}\ {\cal H}_{\rm gc}\ {\cal G}\ d^{8}Z\ \equiv\ 0, \tag{28}\] where \({\cal G}\) is an arbitrary extended phase-space function. The variation of the guiding-center action functional (26) is expressed as \[\delta{\cal A}_{\rm gc} = -\,\int\left(\delta{\cal F}_{\rm gc}\;{\cal H}_{\rm gc}\;+\;{\cal F}_ {\rm gc}\;\delta{\cal H}_{\rm gc}\right)d^{8}Z \tag{29}\] \[+\,\int\left({\bf E}\,\mathbf{\cdot}\,\delta{\bf E}\;-\;{ \bf B}\,\mathbf{\cdot}\,\delta{\bf B}\right)\frac{d^{4}x}{4\pi},\] where the Eulerian variation of the extended guiding-center Vlasov phase-space density is expressed as \[\delta{\cal F}_{\rm gc}\;\equiv\;-\;\frac{\partial}{\partial Z^{\alpha}}\left( {\cal F}_{\rm gc}\;\delta Z^{\alpha}\right), \tag{30}\] where the virtual phase-space displacement \[\delta Z^{\alpha}\;\equiv\;\left\{Z^{\alpha},\;\delta{\bf S}\right\}_{\rm gc} \;-\;\frac{e}{c}\,\delta{\bf A}^{\mathbf{\cdot}}\,\mathbf{ \cdot}\,\left\{{\bf X},\;Z^{\alpha}\right\}_{\rm gc} \tag{31}\] is generated by the virtual canonical generating function \(\delta{\bf S}\), while the Eulerian variation of the extended guiding-center Hamiltonian is expressed as \[\delta{\cal H}_{\rm gc}\;=\;e\,\delta\Phi\;+\;\delta K_{\rm gc}\;+\;J\,\delta{ \cal S}, \tag{32}\] while \[\frac{e}{c}\delta{\bf A}^{\mathbf{\cdot}}\;=\;\frac{e}{c}\delta{\bf A }\;+\;\delta{\bf\Pi}_{\rm gc}\;-\;J\,\delta{\bf R}. \tag{33}\] Here, the variations \(\delta K_{\rm gc}\) and \(\delta{\bf\Pi}_{\rm gc}\) are expressed as \[\delta K_{\rm gc} = \mu\,\widehat{\bf b}\,\mathbf{\cdot}\,\delta{\bf B}\;+ \;{\bf u}_{\rm E}\,\mathbf{\cdot}\,\delta{\bf\Pi}_{\rm E} \tag{34}\] \[-\;\nabla\,\mathbf{\cdot}\,\left[\frac{J}{2}\left( \delta\widehat{\bf b}\,\mathbf{\times}\,{\bf u}_{\rm E}+\widehat{ \bf b}\,\mathbf{\times}\,\delta{\bf u}_{\rm E}\right)\right],\] \[\delta{\bf\Pi}_{\rm gc} = P_{\parallel}\,\delta\widehat{\bf b}\;+\;\delta{\bf\Pi}_{\rm E} \;-\;\nabla\,\mathbf{\times}\,\left(\frac{J}{2}\,\delta\widehat{\bf b }\right), \tag{35}\] which contain the Eulerian variations \[\delta{\bf\Pi}_{\rm E} = \delta{\bf E}\,\mathbf{\times}\,(e\widehat{\bf b}/\Omega )\;+\;mc\,{\bf E}\,\mathbf{\times}\,\delta\left(\widehat{\bf b}/B\right) \tag{36}\] \[= \delta{\bf E}\,\mathbf{\times}\,(e\widehat{\bf b}/\Omega )-\left(\widehat{\bf b}\,{\bf\Pi}_{\rm E}+{\bf\Pi}_{\rm E}\,\widehat{\bf b }\right)\,\mathbf{\cdot}\,\delta{\bf B}/B,\] and \[\delta\widehat{\bf b}\;=\;\delta({\bf B}/B)\;\equiv\;\left(\mathbb{I}- \widehat{\bf b}\widehat{\bf b}\right)\,\mathbf{\cdot}\,\delta{\bf B }/B, \tag{37}\] while the gyrogauge-field variations \[(\delta{\cal S},\delta{\bf R})\;=\;\left(-\,\frac{\partial\widehat{\bf b}}{ \partial t}\,\mathbf{\times}\,\frac{\widehat{\bf b}}{B}\,\mathbf{\cdot}\,\delta{\bf B},\;-\nabla\widehat{\bf b}\,\mathbf{ \times}\,\frac{\widehat{\bf b}}{B}\,\mathbf{\cdot}\,\delta{\bf B}\right) \tag{38}\] are included in Eqs. (32)-(33). When Eqs. (30) and (32) are inserted in Eq. (29), we obtain \[\delta\left({\cal F}_{\rm gc}\;{\cal H}_{\rm gc}\right) = -\;{\cal F}_{\rm gc}\,\delta L_{\rm gc}+B_{\parallel}^{*}\, \delta{\cal S}\left\{{\cal F}_{\rm gc}/B_{\parallel}^{*},\;{\cal H}_{\rm gc} \right\}_{\rm gc} \tag{39}\] \[-\,\frac{\partial}{\partial Z^{\alpha}}\left({\cal F}_{\rm gc}\, \delta{\bf S}\;\left\{Z^{\alpha},\;{\cal H}_{\rm gc}\right\}_{\rm gc}\right),\] where the variation of the guiding-center Lagrangian (22) is expressed as \[\delta L_{\rm gc} \equiv \left(\frac{e}{c}\delta{\bf A}\;+\;\delta{\bf\Pi}_{\rm gc} \right)\,\mathbf{\cdot}\,\dot{\bf X}\;-\;\left(e\,\delta\Phi\;+\; \delta K_{\rm gc}\right) \tag{40}\] \[-\,J\left(\delta{\cal S}\;+\;\dot{\bf X}\,\mathbf{\cdot} \,\delta{\bf R}\right),\] with the effective gyrogauge variation \[\delta{\cal S}+\dot{\bf X}\,\mathbf{\cdot}\,\delta{\bf R} = -\;\left(\frac{\partial\widehat{\bf b}}{\partial t}+\dot{\bf X} \,\mathbf{\cdot}\,\nabla\widehat{\bf b}\right)\,\mathbf{\times} \,\frac{\widehat{\bf b}}{B}\,\mathbf{\cdot}\,\delta{\bf B} \tag{41}\] \[= -\;\frac{d\widehat{\bf b}}{dt}\,\mathbf{\times}\,\frac{ \widehat{\bf b}}{B}\,\mathbf{\cdot}\,\delta{\bf B}. \tag{42}\] Finally, when Eq. (39) is inserted in Eq. (29), the variation of the guiding-center action functional may be expressed as \(\delta{\cal A}_{\rm gc}\equiv\int\delta{\cal L}_{\rm gc}\,d^{3}Xdt\), where the variation of the guiding-center Lagrangian density is expressed as \[\delta{\cal L}_{\rm gc} = \int\left({\cal F}_{\rm gc}\,\delta L_{\rm gc}-B_{\parallel}^{*} \,\delta{\cal S}\left\{{\cal F}_{\rm gc}/B_{\parallel}^{*},\;{\cal H}_{\rm gc }\right\}_{\rm gc}\right)d^{4}P \tag{43}\] \[+\;\frac{1}{4\pi}\left({\bf E}\,\mathbf{\cdot}\,\delta{ \bf E}\;-\;{\bf B}\,\mathbf{\cdot}\,\delta{\bf B}\right)\;+\;\delta \Lambda_{\rm gcV},\] where \(d^{4}P\equiv 2\pi\,dP_{\parallel}dJ\,dW\) (which excludes the guiding-center Jacobian \({\cal J}_{\rm gc}\)), the guiding-center position \({\bf X}\) is now the location where the electromagnetic fields \({\bf E}\) and \({\bf B}\) are evaluated, and the last term in Eq. (43) represents the space-time derivative of the guiding-center Vlasov action-density variation \[\delta\Lambda_{\rm gcV}\equiv\frac{\partial}{\partial t}\left(\int{\cal F}_{ \rm gc}\,\delta{\cal S}\,d^{4}P\right)+\nabla\mathbf{\cdot}\,\left( \int{\cal F}_{\rm gc}\,\delta{\cal S}\;\dot{\bf X}\;d^{4}P\right) \tag{44}\] generated by \(\delta{\cal S}\). ### Guiding-center Vlasov equation Variation of the guiding-center action functional \(\delta{\cal A}_{\rm gc}\) with respect to \(\delta{\cal S}\) yields the extended guiding-center Vlasov equation \[0 = B_{\parallel}^{*}\left\{{\cal F}_{\rm gc}/B_{\parallel}^{*},\;{ \cal H}_{\rm gc}\right\}_{\rm gc} \tag{45}\] \[= \frac{\partial}{\partial Z^{\alpha}}\left({\cal F}_{\rm gc}\; \left\{Z^{\alpha},\;{\cal H}_{\rm gc}\right\}_{\rm gc}\right).\] When this extended Vlasov equation is integrated over the guiding-center energy coordinate \(W\), using Eq. (27), we obtain the phase-space divergence form of the guiding-center Vlasov equation \[\frac{\partial F_{\rm gc}}{\partial t}\;+\;\nabla\,\mathbf{\cdot}\, \left(F_{\rm gc}\;\dot{\bf X}\right)\;+\;\frac{\partial}{\partial P_{ \parallel}}\left(F_{\rm gc}\;\dot{P}_{\parallel}\right)\;=\;0, \tag{46}\] which yields the guiding-center Vlasov equation \(\partial F/\partial t=-\,\dot{Z}^{\alpha}\,\partial F/\partial Z^{\alpha}\), when we substitute the definition of the guiding-center Vlasov density \(F_{\rm gc}\equiv{\cal J}_{\rm gc}F\) and make use of the guiding-center Liouville equation (24). ### Guiding-center Maxwell equations Variations of the guiding-center action functional \(\delta\mathcal{A}_{\rm gc}\) with respect to \(\delta\Phi\) and \(\delta\mathbf{A}\) in Eq. (40), respectively, yield the guiding-center charge and current densities \[(\varrho_{\rm gc},\mathbf{J}_{\rm gc})\;=\;\int_{P}F_{\rm gc}\left(e,\,e\, \dot{\mathbf{X}}\right), \tag{47}\] where we use the notation \(\int_{P}(\cdots)\equiv\int(\cdots)\;2\pi\,dP_{\parallel}dJ\). #### iv.2.1 Guiding-center polarization When a virtual electric variation \(\delta_{\mathbf{E}}\) associated with \(\delta\mathbf{E}\) is considered in Eq. (40), we find \[\delta_{\mathbf{E}}L_{\rm gc}\;=\;\mathbf{\pi}_{\rm gc}\,\mathbf{\cdot}\,\delta \mathbf{E}\;+\;\nabla\mathbf{\cdot}\,(\mathbb{Q}_{\rm gc}\,\mathbf{\cdot}\,\delta \mathbf{E})\,, \tag{48}\] where the guiding-center electric dipole moment is [30; 31] \[\mathbf{\pi}_{\rm gc}\;\equiv\;\frac{e\widehat{\mathbf{b}}}{\Omega}\,\mathbf{\times} \,\left(\dot{\mathbf{X}}\;-\;\mathbf{u}_{\rm E}\right), \tag{49}\] and the guiding-center electric quadrupole moment is defined by the symmetric dyadic tensor \[\mathbb{Q}_{\rm gc}\;\equiv\;\frac{e}{2}\,\langle\mathbf{\rho}_{0}\mathbf{\rho}_{0} \rangle\;=\;\frac{J\,c}{2B}\;\left(\mathbb{I}\;-\;\widehat{\mathbf{b}}\, \widehat{\mathbf{b}}\right). \tag{50}\] We can thus define the guiding-center polarization as \[\mathbf{\mathcal{P}}_{\rm gc}\;\equiv\;\int_{P}\left(F_{\rm gc}\,\mathbf{\pi}_{\rm gc }\;-\;\mathbb{Q}_{\rm gc}\,\mathbf{\cdot}\,\nabla F_{\rm gc}\right), \tag{51}\] which includes a finite-Larmor-radius (FLR) correction on the guiding-center Vlasov phase-space density \(F_{\rm gc}\). Hence, in Eq. (43), we find \[\int_{P}F_{\rm gc}\;\delta_{\mathbf{E}}L_{\rm gc}\;=\;\mathbf{\mathcal{P}}_{\rm gc }\,\mathbf{\cdot}\,\delta\mathbf{E}\;+\;\delta\Lambda_{\rm gcP}, \tag{52}\] with the polarization spatial divergence defined as \[\delta\Lambda_{\rm gcP}\;\equiv\;\nabla\mathbf{\cdot}\left(\int_{P}F_{\rm gc}\; \mathbb{Q}_{\rm gc}\,\mathbf{\cdot}\,\delta\mathbf{E}\right). \tag{53}\] #### iv.2.2 Guiding-center magnetization When a virtual magnetic variation \(\delta_{\mathbf{B}}\) associated with \(\delta\mathbf{B}\) is considered in Eq. (40), we find \[\delta_{\mathbf{B}}L_{\rm gc} = \left(\mathbf{\mu}_{\rm gc}\;+\;\mathbf{\pi}_{\rm gc}\,\mathbf{\times}\, \frac{\mathbf{P}_{0}}{mc}\right)\mathbf{\cdot}\,\delta\mathbf{B} \tag{54}\] \[+\;\nabla\mathbf{\cdot}\left[\mathbb{Q}_{\rm gc}\,\mathbf{\cdot}\,\frac{ \mathbf{u}_{\rm E}}{c}\mathbf{\times}\,\delta\mathbf{B}\;+\;(\mathbb{Q}_{\rm gc} \,\mathbf{\cdot}\,\delta\mathbf{B})\mathbf{\times}\,\frac{\mathbf{u}_{\rm E}}{c}\right]\] \[-\;\frac{1}{c}\dot{\mathbf{X}}\mathbf{\cdot}\,\mathbf{\nabla}\mathbf{\times} \,(\mathbb{Q}_{\rm gc}\,\mathbf{\cdot}\,\delta\mathbf{B})\,,\] where the intrinsic magnetic dipole moment \[\mathbf{\mu}_{\rm gc}\;\equiv\;\mu\left(-\,\widehat{\mathbf{b}}\;+\;\frac{1}{ \Omega}\;\frac{d\widehat{\mathbf{b}}}{dt}\mathbf{\times}\,\widehat{\mathbf{b}}\right) \tag{55}\] includes the effective gyrogauge contribution (42), and the moving electric-dipole contribution [49] \[\mathbf{\pi}_{\rm gc}\mathbf{\times}\,\frac{\mathbf{P}_{0}}{mc}\;=\;\left[\frac{ \widehat{\mathbf{b}}}{B}\mathbf{\times}\,\left(\dot{\mathbf{X}}-\mathbf{u}_{\rm E }\right)\right]\mathbf{\times}\,\mathbf{P}_{0} \tag{56}\] is defined in terms of the guiding-center electric dipole moment (49). We can thus define the guiding-center magnetization as \[\mathbf{\mathcal{M}}_{\rm gc} \equiv \int_{P}F_{\rm gc}\left(\mathbf{\mu}_{\rm gc}\;+\;\mathbf{\pi}_{\rm gc} \mathbf{\times}\,\mathbf{P}_{0}/mc\right) \tag{57}\] \[-\;\int_{P}\nabla F_{\rm gc}\mathbf{\cdot}\,\left(\mathbb{Q}_{\rm gc }\mathbf{\times}\,\frac{\mathbf{u}_{\rm E}}{c}\;-\;\frac{\mathbf{u}_{\rm E}}{c} \mathbf{\times}\,\mathbb{Q}_{\rm gc}\right)\] \[-\;\int_{P}\mathbb{Q}_{\rm gc}\mathbf{\cdot}\,\mathbf{\nabla}\mathbf{\times} \,\left(F_{\rm gc}\;\dot{\mathbf{X}}/c\right),\] which also includes FLR corrections. Hence, in Eq. (43), we find \[\int_{P}F_{\rm gc}\;\delta_{\mathbf{B}}L_{\rm gc}\;=\;\mathbf{\mathcal{M}}_{\rm gc }\mathbf{\cdot}\,\delta\mathbf{B}\;+\;\delta\Lambda_{\rm gcM}, \tag{58}\] where the magnetization spatial divergence \(\delta\Lambda_{\rm gcM}\) is defined as \[\delta\Lambda_{\rm gcM} \equiv \nabla\mathbf{\cdot}\,\left[\int_{P}F_{\rm gc}\;\left(\mathbb{Q}_{ \rm gc}\mathbf{\times}\,\frac{\mathbf{u}_{\rm E}}{c}\right)\mathbf{\cdot}\,\delta \mathbf{B}\right. \tag{59}\] \[\left.-\int_{P}F_{\rm gc}\,(\mathbb{Q}_{\rm gc}\mathbf{\cdot}\,\delta \mathbf{B})\mathbf{\times}\frac{1}{c}(\dot{\mathbf{X}}-\mathbf{u}_{\rm E})\right].\] #### iv.2.3 Guiding-center Maxwell equations By substituting the variations (52) and (58) into Eq. (43), we find \[\delta\mathcal{L}_{\rm gc} = \delta\Lambda_{\rm gcV}\;+\;\delta\Lambda_{\rm gcPM}\;-\;\left( \varrho_{\rm gc}\;\delta\Phi\;-\;\frac{\mathbf{J}_{\rm gc}}{c}\mathbf{\cdot}\, \delta\mathbf{A}\right) \tag{60}\] \[+\;\frac{1}{4\pi}\left(\mathbf{D}_{\rm gc}\mathbf{\cdot}\,\delta \mathbf{E}\;-\;\mathbf{H}_{\rm gc}\mathbf{\cdot}\,\delta\mathbf{B}\right),\] where we have defined the guiding-center electromagnetic fields \[\mathbf{D}_{\rm gc} \equiv \mathbf{E}\;+\;4\pi\,\mathbf{\mathcal{P}}_{\rm gc}, \tag{61}\] \[\mathbf{H}_{\rm gc} \equiv \mathbf{B}\;-\;4\pi\,\mathbf{\mathcal{M}}_{\rm gc}, \tag{62}\] and the quadrupole contributions yield an additional polarization-magnetization spatial divergence \[\delta\Lambda_{\rm gcPM} = \nabla\mathbf{\cdot}\left[\int_{P}F_{\rm gc}\;\mathbb{Q}_{\rm gc}\mathbf{ \cdot}\,\left(\delta\mathbf{E}+\frac{\mathbf{u}_{\rm E}}{c}\mathbf{\times}\, \delta\mathbf{B}\right)\right. \tag{63}\] \[-\left.\int_{P}F_{\rm gc}\,(\mathbb{Q}_{\rm gc}\mathbf{\cdot}\,\delta \mathbf{B})\mathbf{\times}\frac{1}{c}\left(\dot{\mathbf{X}}-\mathbf{u}_{\rm E} \right)\right].\] If we now introduce the constrained variations \(\delta{\bf E}=-\,\nabla\delta\Phi-c^{-1}\partial\delta{\bf A}/\partial t\) and \(\delta{\bf B}=\nabla\crosscross\delta{\bf A}\) into Eq. (60), we obtain the guiding-center Lagrangian variation \[\delta{\cal L}_{\rm gc} = \delta\Lambda_{\rm gc}\;+\;\frac{\delta\Phi}{4\pi}\left(\nabla \!\cdot\!{\bf D}_{\rm gc}\;-\;4\pi\,\varrho_{\rm gc}\right) \tag{64}\] \[+\;\frac{\delta{\bf A}}{4\pi c}\!\cdot\left({\bf J}_{\rm gc}+ \frac{\partial{\bf D}_{\rm gc}}{\partial t}-c\;\nabla\cross{\bf H}_{\rm gc} \right),\] where the total guiding-center space-time divergence is \[\delta\Lambda_{\rm gc} \equiv \frac{\partial}{\partial t}\left(\int{\cal F}_{\rm gc}\,\delta{ \sf S}\;d^{4}P\;-\;\frac{\delta{\bf A}}{4\pi c}\!\cdot\!{\bf D}_{\rm gc}\right)\] \[+\nabla\!\cdot\left(\int{\cal F}_{\rm gc}\delta{\sf S}\,\dot{\bf X }\;d^{4}P-\frac{\delta\Phi}{4\pi}\,{\bf D}_{\rm gc}-\frac{\delta{\bf A}}{4 \pi}\!\times\!{\bf H}_{\rm gc}\right)\] \[+\nabla\!\cdot\left[\int_{P}F_{\rm gc}\;{\mathbb{Q}}_{\rm gc}\! \cdot\left(\delta{\bf E}+\frac{{\bf u}_{\rm E}}{c}\cross\delta{\bf B}\right)\right.\] \[\qquad\qquad-\;\left.\int_{P}F_{\rm gc}\left({\mathbb{Q}}_{\rm gc }\!\cdot\!\delta{\bf B}\right)\right.\cross\frac{1}{c}\left(\dot{\bf X}-{\bf u }_{\rm E}\right)\right].\] Variations of the guiding-center action functional \(\delta{\cal A}_{\rm gc}=\int\delta{\cal L}_{\rm gc}\,d^{3}X\,dt\) with respect to \(\delta\Phi\) and \(\delta{\bf A}\) yield, respectively, the guiding-center Maxwell equations \[\nabla\!\cdot\!{\bf D}_{\rm gc} = 4\pi\,\varrho_{\rm gc}, \tag{66}\] \[\nabla\cross{\bf H}_{\rm gc}\;-\;\frac{1}{c}\,\frac{\partial{\bf D }_{\rm gc}}{\partial t} = \frac{4\pi}{c}\;{\bf J}_{\rm gc}, \tag{67}\] while the electromagnetic fields \(({\bf E},{\bf B})\) satisfy Faraday's law \[\partial{\bf B}/\partial t\;=\;-\,c\,\nabla\cross{\bf E}, \tag{68}\] and Gauss's law \(\nabla\!\cdot\!{\bf B}=0\). Finally, the guiding-center charge-current densities (47) satisfy the charge conservation law \[\frac{\partial\varrho_{\rm gc}}{\partial t}\;+\;\nabla\!\cdot\!{\bf J}_{\rm gc }=0, \tag{69}\] which can be obtained directly from Eqs. (66)-(67). ## IV Guiding-center Noether equation and exact conservation laws Once the guiding-center Vlasov-Maxwell equations (46) and (66)-(67) are derived from the action functional (26), we are left with the guiding-center Noether equation \[\delta{\cal L}_{\rm gc} = \frac{\partial}{\partial t}\left(\int{\cal F}_{\rm gc}\,\delta{ \sf S}\;d^{4}P\;-\;\frac{\delta{\bf A}}{4\pi c}\!\cdot\!{\bf D}_{\rm gc}\right)\] \[+\nabla\!\cdot\left(\int{\cal F}_{\rm gc}\delta{\sf S}\,\dot{\bf X }\;d^{4}P-\frac{\delta\Phi}{4\pi}\!\cdot\!{\bf D}_{\rm gc}-\frac{\delta{\bf A }}{4\pi}\!\times\!{\bf H}_{\rm gc}\right)\] \[+\nabla\!\cdot\left[\int_{P}F_{\rm gc}\;{\mathbb{Q}}_{\rm gc}\! \cdot\left(\delta{\bf E}+\frac{{\bf u}_{\rm E}}{c}\cross\delta{\bf B}\right)\right.\] \[\left.\;-\;\int_{P}F_{\rm gc}\left({\mathbb{Q}}_{\rm gc}\!\cdot \!\delta{\bf B}\right)\right.\cross\frac{1}{c}\left(\dot{\bf X}-{\bf u}_{\rm E }\right)\right],\] from which exact energy-momentum conservation laws are derived. For this purpose, we use the expressions \[\delta{\sf S} = \left(e/c\right){\bf A}^{*}\!\cdot\!\delta{\bf X}\;+\;J\,\delta \Theta\;-\;W^{*}\;\delta t, \tag{71}\] \[\delta\Phi = {\bf E}\!\cdot\!\delta{\bf X}\;-\;c^{-1}\partial\delta\varphi/ \partial t,\] (72) \[\delta{\bf A} = {\bf E}\,c\,\delta t\;+\;\delta{\bf X}\!\times\!{\bf B}\;+\; \nabla\delta\varphi, \tag{73}\] where \(\delta{\sf S}\) is gyrogauge-independent, with the virtual gauge variations \(\delta\Theta\equiv{\bf R}\!\cdot\!\delta{\bf X}+{\cal S}\,\delta t\) and \(\delta\varphi\equiv\Phi\;c\,\delta t-{\bf A}\!\cdot\!\delta{\bf X}\). The gauge terms associated with \(\delta\varphi\) can be used to obtain the identity \[-\;\frac{\partial}{\partial t}\left(\nabla\delta\varphi,\frac{{\bf D }_{\rm gc}}{4\pi c}\right)+\nabla\!\cdot\!\left(\frac{\partial\delta\varphi}{ \partial t}\,\frac{{\bf D}_{\rm gc}}{4\pi c}-\nabla\delta\varphi\!\times\! \frac{{\bf H}_{\rm gc}}{4\pi}\right)\] \[\equiv\;\frac{\delta\varphi}{c}\left(\frac{\partial\varrho_{\rm gc }}{\partial t}\;+\;\nabla\!\cdot\!{\bf J}_{\rm gc}\right), \tag{74}\] where we have used the guiding-center Maxwell equations (66)-(67) in order to obtain the last equality. Hence, we note that the guiding-center charge conservation law (69) can be viewed as a result of the variational constraint \(\delta{\cal L}_{\rm gc}/\delta\varphi=0\). ### Guiding-center Noether equation By substituting the gauge identity (74) into Eq. (70), the guiding-center Noether equation becomes \[\delta{\cal L}_{\rm gc}\;\equiv\;\partial\delta{\cal N}_{\rm gc}/\partial t \;+\;\nabla\!\cdot\!\delta{\bf F}_{\rm gc}, \tag{75}\] where the guiding-center action-density variation is defined as \[\delta{\cal N}_{\rm gc} \equiv \int{\cal F}_{\rm gc}\left(\delta{\sf S}\;+\;\frac{e}{c}\,\delta \varphi\right)\,d^{4}P \tag{76}\] \[-\;\left({\bf E}\;c\,\delta t\;+\;\delta{\bf X}\!\times\!{\bf B} \right)\;\cross\frac{{\bf D}_{\rm gc}}{4\pi c}\] and the guiding-center action-density-flux variation is defined as \[\delta{\bf F}_{\rm gc} \equiv \int{\cal F}_{\rm gc}\left(\delta{\sf S}\;+\;\frac{e}{c}\,\delta \varphi\right)\,\dot{\bf X}\;d^{4}P\;-\;\delta{\bf X}\!\cdot\left(\frac{{\bf E }}{4\pi}{\bf D}_{\rm gc}\right) \tag{77}\] \[-\;\left({\bf E}\;c\,\delta t\;+\;\delta{\bf X}\!\times\!{\bf B} \right)\;\cross\frac{{\bf H}_{\rm gc}}{4\pi}\] \[+\;\int_{P}F_{\rm gc}\;{\mathbb{Q}}_{\rm gc}\!\cdot\left(\delta{ \bf E}+\frac{{\bf u}_{\rm E}}{c}\cross\delta{\bf B}\right)\] \[-\;\int_{P}F_{\rm gc}\left({\mathbb{Q}}_{\rm gc}\!\cdot\!\delta{ \bf B}\right)\;\cross\frac{1}{c}\left(\dot{\bf X}-{\bf u}_{\rm E}\right).\] Here, the electric and magnetic variations are \[\left(\begin{array}{c}\delta{\bf E}\\ \delta{\bf B}\end{array}\right)\;=\;\left(\begin{array}{c}-\,\delta t\; \partial{\bf E}/\partial t\;-\;\delta{\bf X}\!\cdot\!\nabla{\bf E}\\ -\,\delta t\;\partial{\bf B}/\partial t\;-\;\delta{\bf X}\!\cdot\!\nabla{\bf B }\end{array}\right), \tag{78}\] while the combination \[\delta{\sf S}\;+\;\frac{e}{c}\,\delta\varphi\;\equiv\;{\bf\Pi}_{\rm gc}\!\cdot \!\delta{\bf X}\;-\;K_{\rm gc}\;\delta t \tag{79}\] is gauge invariant, where the guiding-center symplectic momentum \(\mathbf{\Pi}_{\rm gc}\) and the guiding-center kinetic energy \(K_{\rm gc}\) are defined in Eqs. (5) and (8), respectively. Finally, the constrained Lagrangian variation is expressed in terms of the Maxwell Lagrangian density \(\mathcal{L}_{\rm M}=\left(|\mathbf{E}|^{2}-|\mathbf{B}|^{2}\right)/8\pi\): \[\delta\mathcal{L}_{\rm gc}\;\equiv\;-\;\delta t\,\frac{\partial \mathcal{L}_{\rm M}}{\partial t}\;-\;\delta\mathbf{X}\!\cdot\!\nabla\mathcal{ L}_{\rm M}, \tag{80}\] while the Vlasov contribution in Eq. (26) vanishes because of the constraint (28). ### Exact guiding-center energy-momentum conservation laws By considering a virtual time displacement \(\delta t\) in the guiding-center Noether equation (75), we obtain the guiding-center Vlasov-Maxwell energy conservation law \[\partial\mathcal{E}_{\rm gc}/\partial t\;+\;\nabla\!\cdot\!\mathbf{S}_{\rm gc }\;=\;0, \tag{81}\] which is expressed in terms of the guiding-center Vlasov-Maxwell energy density \(\mathcal{E}_{\rm gc}\equiv-\;\delta\mathcal{N}_{\rm gc}/\delta t-\mathcal{L}_ {\rm M}\) and the guiding-center Vlasov-Maxwell energy density-flux \(\mathbf{S}_{\rm gc}\equiv-\;\delta\mathbf{F}_{\rm gc}/\delta t\), where \[\mathcal{E}_{\rm gc} = \int_{P}F_{\rm gc}\,K_{\rm gc}+\frac{\mathbf{E}}{4\pi}\!\cdot\! \mathbf{D}_{\rm gc}-\frac{1}{8\pi}\left(|\mathbf{E}|^{2}-|\mathbf{B}|^{2}\right) \tag{82}\] \[= \int_{P}F_{\rm gc}\,K_{\rm gc}+\mathbf{E}\!\cdot\!\mathcal{P}_{ \rm gc}+\frac{1}{8\pi}\left(|\mathbf{E}|^{2}+|\mathbf{B}|^{2}\right),\] and \[\mathbf{S}_{\rm gc} = \int_{P}F_{\rm gc}\,K_{\rm gc}\,\dot{\mathbf{X}}\;+\;\frac{c\, \mathbf{E}}{4\pi}\!\times\!\mathbf{H}_{\rm gc}\] \[+\;\int_{P}F_{\rm gc}\,\mathbb{Q}_{\rm gc}\!\cdot\!\left(\frac{ \partial\mathbf{E}}{\partial t}\;+\;\frac{\mathbf{u}_{\rm E}}{c}\times\frac{ \partial\mathbf{B}}{\partial t}\right)\] \[-\;\int_{P}F_{\rm gc}\left(\mathbb{Q}_{\rm gc}\!\cdot\!\frac{ \partial\mathbf{B}}{\partial t}\right)\;\times\frac{1}{c}\left(\dot{\mathbf{ X}}-\mathbf{u}_{\rm E}\right),\] which includes FLR corrections involving \(\mathbb{Q}_{\rm gc}\). We note that, in the case of static electric and magnetic fields, only the terms on the first line on the right side of Eq. (83) remain. Next, by considering a virtual spatial displacement \(\delta X^{k}\) in the guiding-center Noether equation (75), we obtain the guiding-center Vlasov-Maxwell momentum conservation law \[\partial P_{\rm gc}/\partial t\;+\;\nabla\!\cdot\!\mathbf{T}_{\rm gc}\;=\;0 \tag{84}\] in the \(X^{k}\)-direction is expressed in terms of the guiding-center Vlasov-Maxwell momentum density \[\mathbf{P}_{\rm gc}\;=\;\int_{P}F_{\rm gc}\,\mathbf{\Pi}_{\rm gc}\;+\;\mathbf{D}_ {\rm gc}\!\times\!\frac{\mathbf{B}}{4\pi c}, \tag{85}\] where \(P_{\rm gc}\equiv\delta\mathcal{N}_{\rm gc}/\delta X^{k}=\mathbf{P}_{\rm gc} \!\cdot\!\partial_{k}\mathbf{X}\), and the guiding-center Vlasov-Maxwell momentum-density flux \(T^{i}_{\rm gc}\equiv\delta F^{i}_{\rm gc}/\delta X^{k}+\delta^{i}_{k}\, \mathcal{L}_{\rm M}\), where \[\mathbf{T}_{\rm gck}\;\equiv\;\mathbf{T}_{\rm gc}\!\cdot\!\frac{ \partial\mathbf{X}}{\partial X^{k}} = \int_{P}F_{\rm gc}\,\dot{\mathbf{X}}\,\Pi_{\rm gck}\;-\;\frac{1}{4 \pi}\left(\mathbf{D}_{\rm gc}\,E_{k}+\mathbf{B}\,\mathsf{H}_{\rm gc}\right)\; +\;\frac{1}{4\pi}\frac{\partial\mathbf{X}}{\partial X^{k}}\left[\frac{1}{2} \left(|\mathbf{E}|^{2}-|\mathbf{B}|^{2}\right)\;+\;\mathbf{B}\!\cdot\!\mathbf{H }_{\rm gc}\right] \tag{86}\] \[-\;\int_{P}F_{\rm gc}\left[\mathbb{Q}_{\rm gc}\!\cdot\!\left( \partial_{k}\mathbf{E}+\frac{\mathbf{u}_{\rm E}}{c}\times\partial_{k} \mathbf{B}\right)\;-\;\left(\mathbb{Q}_{\rm gc}\!\cdot\!\partial_{k}\mathbf{B} \right)\;\times\!\frac{1}{c}\left(\dot{\mathbf{X}}-\mathbf{u}_{\rm E}\right) \right],\] with the notation \(\partial_{k}\equiv(\partial\mathbf{X}/\partial X^{k})\!\cdot\!\nabla\). We note that guiding-center Vlasov-Maxwell stress tensor \(\mathbf{T}_{\rm gc}\) defined in Eq. (86) is manifestly not symmetric, which is quite common in reduced Vlasov-Maxwell systems [30; 31]. The symmetry properties of the lowest-order guiding-center Vlasov-Maxwell stress tensor are briefly discussed in Sec. IV.3. The proofs of the energy-momentum conservation laws (81) and (84) proceed by taking the partial time derivative of the guiding-center Vlasov-Maxwell energy density (82) and the guiding-center Vlasov-Maxwell momentum density (85) which, after substituting the guiding-center Vlasov-Maxwell equations (46) and (66)-(67), yield \[\frac{\partial\mathcal{E}_{\rm gc}}{\partial t} = -\nabla\boldsymbol{\cdot}\left(\int_{P}F_{\rm gc}\,K_{\rm gc}\,\dot{ \mathbf{X}}+\frac{c\,\mathbf{E}}{4\pi}\boldsymbol{\times}\mathbf{H}_{\rm gc} \right)\;+\;\int_{P}F_{\rm gc}\left(\frac{dK_{\rm gc}}{dt}\;-\;e\,\mathbf{E} \boldsymbol{\cdot}\dot{\mathbf{X}}\right)\;+\;\frac{\partial\mathbf{E}}{ \partial t}\boldsymbol{\cdot}\boldsymbol{\mathcal{P}}_{\rm gc}\;+\;\frac{ \partial\mathbf{B}}{\partial t}\boldsymbol{\cdot}\boldsymbol{\mathcal{M}}_{\rm gc}, \tag{87}\] \[\frac{\partial P_{\rm gc}}{\partial t} = -\nabla\boldsymbol{\cdot}\left[\int_{P}F_{\rm gc}\dot{\mathbf{X}} \,\Pi_{\rm gck}-\frac{1}{4\pi}\left(\mathbf{D}_{\rm gc}\,E_{k}\;+\;\mathbf{B} \,\mathbf{H}_{\rm gck}\right)\right]-\;\partial_{k}\left[\frac{1}{8\pi} \left(|\mathbf{E}|^{2}-|\mathbf{B}|^{2}\right)\;+\;\frac{\mathbf{B}}{4\pi} \boldsymbol{\cdot}\mathbf{H}_{\rm gc}\right]\] (88) \[+\;\int_{P}F_{\rm gc}\left[\frac{d\Pi_{\rm gck}}{dt}\;-\; \partial_{k}\mathbf{X}\boldsymbol{\cdot}\left(e\,\mathbf{E}+\frac{e}{c}\, \dot{\mathbf{X}}\boldsymbol{\times}\mathbf{B}\right)\right]\;-\;\left(\partial _{k}\mathbf{E}\boldsymbol{\cdot}\boldsymbol{\mathcal{P}}_{\rm gc}\;+\; \partial_{k}\mathbf{B}\boldsymbol{\cdot}\boldsymbol{\mathcal{M}}_{\rm gc} \right).\] The last steps in deriving the energy-momentum conservation laws (81) and (84) involve using the guiding-center Euler-Lagrange equation (23) and substituting the expressions (51) and (57) for the guiding-center polarization and magnetization, respectively. Finally, we note that the definitions (82)-(83) and (85)-(86) associated with the guiding-center energy and momentum conservation laws (81) and (84), respectively, are not uniquely defined. Indeed, under the following transformations \(\mathcal{E}^{\prime}_{\rm gc}=\mathcal{E}_{\rm gc}+\nabla\boldsymbol{\cdot} \boldsymbol{\cdot}\boldsymbol{\mathrm{C}}\) and \(\mathbf{S}^{\prime}_{\rm gc}=\mathbf{S}_{\rm gc}-\partial\mathbf{C}/\partial t +\nabla\boldsymbol{\times}\mathbf{K}\), the guiding-center energy conservation law (81) remains invariant, where the fields \(\mathbf{(C,K)}\) are arbitrary. Likewise, under the transformations \(P^{\prime}_{\rm gck}=P_{\rm gck}+\nabla\boldsymbol{\cdot}\mathbf{G}_{k}\) and \(\mathbf{T}^{\prime}_{\rm gck}=\mathbf{T}_{\rm gck}-\partial\mathbf{G}_{k}/ \partial t+\nabla\boldsymbol{\cdot}\mathbf{\mathbb{K}}_{k}\), the guiding-center momentum conservation law (84) remains invariant, where the fields \(\mathbf{G}_{k}\equiv\mathbb{G}\boldsymbol{\cdot}\partial_{k}\mathbf{X}\) is defined in terms of an arbitrary second-rank tensor \(\mathbb{G}\), while the third-rank tensor \(\mathbb{K}\) has the following antisymmetry property: \(\mathsf{K}^{ji}_{k}=-\,\mathsf{K}^{ij}_{k}\) (so that \(\partial^{2}_{ij}\mathsf{K}^{ij}_{k}\equiv 0\)). Here, we note that the vector fields \(\mathbf{C}\equiv\int_{P}F_{\rm gc}\,\mathbb{Q}_{\rm gc}\,\cdot\,\mathbf{E}\) and \(\mathbf{G}_{k}\equiv\left(\frac{1}{2}\int_{P}F_{\rm gc}\,J\widehat{\mathbf{b} }\right)\boldsymbol{\times}\partial_{k}\mathbf{X}\equiv\mathbb{G}\boldsymbol {\cdot}\partial_{k}\mathbf{X}\) completely remove the FLR corrections in the definitions (82) and (85). ### Symmetry properties of the guiding-center stress tensor We now make a few remarks about the symmetry properties of the guiding-center stress tensor (86). These symmetry properties are most relevant when considering the conservation law of guiding-center toroidal angular momentum, where the guiding-center toroidal angular momentum density \[P_{\rm gc\phi} \equiv \int_{P}F_{\rm gc}\;\boldsymbol{\Pi}_{\rm gc}\boldsymbol{\cdot }\frac{\partial\mathbf{X}}{\partial\phi}+\frac{\mathbf{D}_{\rm gc}}{4\pi c} \boldsymbol{\cdot}\mathbf{B}\boldsymbol{\times}\frac{\partial\mathbf{X}}{ \partial\phi} \tag{89}\] is defined as the covariant component of the guiding-center momentum density \(\boldsymbol{\mathsf{P}}_{\rm gc}\) associated with the toroidal angle \(\phi\). First, we derive the guiding-center angular momentum transport equation from the guiding-center momentum conservation law \[\frac{\partial P_{\rm gc\phi}}{\partial t}\;+\;\nabla\boldsymbol{ \cdot}\left(\mathbb{T}_{\rm gc}\boldsymbol{\cdot}\frac{\partial\mathbf{X}}{ \partial\phi}\right) = \mathbb{T}^{\top}_{\rm gc}:\nabla\left(\frac{\partial\mathbf{X}}{ \partial\phi}\right) \tag{90}\] \[\equiv \widehat{\boldsymbol{\cdot}}\boldsymbol{\mathcal{T}}_{\rm gc},\] where \(\mathbb{T}^{\top}_{\rm gc}\) denotes the transpose of \(\mathbb{T}_{\rm gc}\). Since the dyadic tensor \(\nabla(\partial\mathbf{X}/\partial\phi)\) is anti-symmetric, with the rotation axis directed along the z-axis, the anti-symmetric part of the guiding-center stress tensor (86) generates the guiding-center Vlasov-Maxwell torque \(\boldsymbol{\mathcal{T}}_{\rm gc}\). We note that this equation can also be derived by substituting the virtual displacement \(\delta\mathbf{X}=\delta\phi\;\partial\mathbf{X}/\partial\phi\) in Eq. (75). At the lowest order (i.e., keeping terms only up to gyrogauge corrections), the guiding-center Vlasov-Maxwell torque is defined as \(\boldsymbol{\mathcal{T}}_{\rm gc}=\int_{P}F_{\rm gc}\;\boldsymbol{\tau}_{\rm gc}\), where, using the dyadic identity (for two arbitrary vector fields \(\mathbf{F}\) and \(\mathbf{G}\)) \[\mathbf{F}\boldsymbol{\cdot}\nabla(\partial\mathbf{X}/\partial\phi)\boldsymbol{ \cdot}\mathbf{G}\;=\;\widehat{\boldsymbol{z}}\boldsymbol{\cdot}(\mathbf{F} \boldsymbol{\times}\mathbf{G}),\] we find \[\boldsymbol{\tau}_{\rm gc} = \dot{\mathbf{X}}\boldsymbol{\times}\boldsymbol{\Pi}_{\rm gc}\;+\; \mathbf{E}\boldsymbol{\times}\boldsymbol{\pi}_{\rm gc} \tag{91}\] \[+\;\mathbf{B}\boldsymbol{\times}\left(\boldsymbol{\mu}_{\rm gc}+ \boldsymbol{\pi}_{\rm gc}\boldsymbol{\times}\mathbf{P}_{0}/mc\right),\] which is calculated below in the dipole approximation. First, each term is expressed as \[\dot{\mathbf{X}}\boldsymbol{\times}\boldsymbol{\Pi}_{\rm gc} = \dot{\mathbf{X}}\boldsymbol{\times}\left(P_{\parallel}\,\widehat {\mathbf{b}}\,+\;\mathbf{E}\boldsymbol{\times}\frac{e\widehat{\mathbf{b}}}{ \Omega}\right)\] \[= P_{\parallel}\,\left(\dot{\mathbf{X}}\boldsymbol{\times}\widehat {\mathbf{b}}+\frac{c\mathbf{E}}{B}\right)-\left(\ddot{\mathbf{X}}\boldsymbol{ \cdot}\frac{e\mathbf{E}}{\Omega}\right)\widehat{\mathbf{b}},\] \[\mathbf{E}\boldsymbol{\times}\boldsymbol{\pi}_{\rm gc} = \mathbf{E}\boldsymbol{\times}\,\left[\frac{e\widehat{\mathbf{b}}}{ \Omega}\boldsymbol{\times}\,\left(\dot{\mathbf{X}}-\mathbf{u}_{\rm E}\right) \right]=\left(\dot{\mathbf{X}}\boldsymbol{\cdot}\frac{e\mathbf{E}}{\Omega}\right) \widehat{\mathbf{b}},\] and \[\mathbf{B}\boldsymbol{\times}\boldsymbol{\mu}_{\rm gc} = \mathbf{B}\boldsymbol{\times}\mu\left(-\,\widehat{\mathbf{b}}+\frac{ 1}{\Omega}\frac{d\widehat{\mathbf{b}}}{dt}\boldsymbol{\times}\widehat{\mathbf{b}} \right)=J\,\frac{d\widehat{\mathbf{b}}}{dt},\] \[\mathbf{B}\boldsymbol{\times}\,\left(\boldsymbol{\pi}_{\rm gc} \boldsymbol{\times}\frac{\mathbf{P}_{0}}{mc}\right) = P_{\parallel}\,\widehat{\mathbf{b}}\boldsymbol{\times}\,\left( \dot{\mathbf{X}}-\mathbf{u}_{\rm E}\right)\] \[= P_{\parallel}\,\left(\widehat{\mathbf{b}}\boldsymbol{\times} \dot{\mathbf{X}}\;-\;\frac{c\mathbf{E}}{B}\right),\] where we used \(\mathbf{B}\boldsymbol{\cdot}\boldsymbol{\pi}_{\rm gc}=0\). Next, by combining these terms in Eq. (91), several cancellations occur and the lowest-order torque density is \[\boldsymbol{\tau}_{\rm gc} = J\,\frac{d\widehat{\mathbf{b}}}{dt} = J\left(\frac{\partial\widehat{\mathbf{b}}}{\partial t}\;+\;\dot{\mathbf{X}} \boldsymbol{\cdot}\boldsymbol{\nabla}\widehat{\mathbf{b}}\right), \tag{92}\] which recovers a result derived by Ye and Kaufman [32]. Hence, substituting Eq. (92) into the right side of Eq. (90), we obtain \[\widehat{\mathtt{z}}\boldsymbol{\cdot}\boldsymbol{\mathcal{T}}_{\rm gc}=\frac{ \partial}{\partial t}\left(\int_{P}F_{\rm gc}\;J\,b_{z}\right)+\nabla\boldsymbol{ \cdot}\left(\int_{P}F_{\rm gc}\;\dot{\mathbf{X}}\;J\,b_{z}\right), \tag{93}\] which adds a gyrogauge-independent contribution \[-\,J\,b_{z}\;\equiv\;-\,J\mathbf{R}\boldsymbol{\cdot}\partial\mathbf{X}/ \partial\phi \tag{94}\] to the guiding-center toroidal angular momentum density (89). Hence, we note that, in the presence of an axisymmetric magnetic field, we can use the definition (5) and the identity \(\mathbf{B}\times\partial\mathbf{X}/\partial\phi\equiv-\,\nabla A_{\phi}\), the guiding-center toroidal angular momentum density (89) can then be written as \[P_{\rm gc\phi} = \int_{P}F_{\rm gc}\left(P_{\phi}-\frac{e}{c}\,A_{\phi}+J\,b_{z} \right)\;-\;\frac{\mathbf{D}_{\rm gc}}{4\pi c}\boldsymbol{\cdot}\nabla A_{\phi} \tag{95}\] \[= \int_{P}F_{\rm gc}\left(P_{\phi}\;+\;J\,b_{z}\right)-\nabla \boldsymbol{\cdot}\left(\frac{A_{\phi}}{4\pi c}\;\mathbf{D}_{\rm gc}\right),\] where we used the definition (25) for the guiding-center canonical toroidal angular momentum and the guiding-center Poisson equation (66). By using the guiding-center torque correction (93) and the transformation \(P_{\rm gc\phi}^{\prime}=P_{\rm gc\phi}+\nabla\boldsymbol{\cdot}\mathbf{G}_{\phi}\), where \(\mathbf{G}_{\phi}=(A_{\phi}/4\pi c)\,\mathbf{D}_{\rm gc}\), we obtain the exact guiding-center toroidal angular momentum conservation law \[\frac{\partial}{\partial t}\left(\int_{P}F_{\rm gc}\;P_{\phi}\right) = -\;\nabla\boldsymbol{\cdot}\left(\int_{P}F_{\rm gc}\;\dot{ \mathbf{X}}\,P_{\phi}\right)\;+\;\int_{P}F_{\rm gc}\;\dot{P}_{\phi} \tag{96}\] \[= -\;\nabla\boldsymbol{\cdot}\left(\int_{P}F_{\rm gc}\;\dot{ \mathbf{X}}\,P_{\phi}\right),\] which follows from applying the Noether theorem to the guiding-center Euler-Lagrange equation \(dP_{\phi}/dt=\partial L_{\rm gc}/\partial\phi\equiv 0\) associated with magnetic axisymmetry. Finally, returning to the case of a general magnetic field, the guiding-center stress tensor (86) can be expressed as \(\mathbb{T}_{\rm gc}\equiv\mathbb{T}_{\rm M}+\mathbb{T}_{\rm gc}\), where the Maxwell tensor \(\mathbb{T}_{\rm M}\equiv(|\mathbf{E}|^{2}+|\mathbf{B}|^{2})\,\mathbb{I}/8\pi -(\mathbf{E}\mathbf{E}+\mathbf{B}\mathbf{B})/4\pi\) is symmetric, while the guiding-center Vlasov stress tensor \[\mathbb{T}_{\rm gcV} = \mathbb{P}_{\rm CGL}\;-\;\frac{\chi_{\rm gc}}{4\pi}\;\mathbf{E} \,\mathbf{E}\;+\;\int_{P}F_{\rm gc}\left[P_{\parallel}\left(\widehat{ \mathtt{0}}\,\dot{\mathbf{X}}_{\perp}\;+\;\dot{\mathbf{X}}_{\perp}\;\widehat{ \mathtt{0}}\right)\;+\;\left(\dot{\mathbf{X}}_{\perp}\;\mathbf{E}\times\frac{ e\widehat{\mathtt{0}}}{\Omega}-\frac{e\widehat{\mathtt{0}}}{\Omega}\times\dot{ \mathbf{X}}_{\perp}\,\mathbf{E}\right)\right. \tag{97}\] \[-\;\boldsymbol{\Pi}_{\rm E}\boldsymbol{\cdot}(\dot{\mathbf{X}}- \mathbf{u}_{\rm E})\left(\mathbb{I}-\widehat{\mathtt{0}}\widehat{\mathtt{0} }\right)\;+\;\frac{J}{2}\;\widehat{\mathtt{0}}\left(\frac{d\widehat{\mathtt{0 }}}{dt}\times\widehat{\mathtt{0}}\right)\;+\;\frac{J}{2}\left(\frac{d\widehat{ \mathtt{0}}}{dt}\times\widehat{\mathtt{0}}\right)\widehat{\mathtt{0}}\right]\] \[+\;\int_{P}\left\{F_{\rm gc}\left[\frac{J}{2}\;\widehat{ \mathtt{0}}\left(\frac{d\widehat{\mathtt{0}}}{dt}\times\widehat{\mathtt{0}} \right)\;-\;\frac{J}{2}\;\left(\frac{d\widehat{\mathtt{0}}}{dt}\times\widehat{ \mathtt{0}}\right)\widehat{\mathtt{0}}\;-\;\frac{J}{2}\;\dot{\mathbf{X}}\; \nabla\boldsymbol{\times}\widehat{\mathtt{0}}\right]\;+\;(\cdots)\right\}\] is symmetric up to the gyrogauge terms, where FLR corrections have been omitted (\(\cdots\)) in the guiding-center polarization (51) and magnetization (57), as well as the explicit FLR corrections in Eq. (86). Here, the Chew-Goldberger-Low (CGL) pressure tensor [8] \[\mathbb{P}_{\rm CGL}\;=\;\int_{P}F_{\rm gc}\left[\frac{P_{\parallel}^{2}}{m} \;\widehat{\mathtt{0}}\widehat{\mathtt{0}}+\left(\mathbb{I}-\widehat{ \mathtt{0}}\widehat{\mathtt{0}}\right)\;\mu B\right],\] appears naturally in lowest-order guiding-center Vlasov-Maxwell theory [3] and several kinetic-magnetohydrodynamic models [50; 11]. The next term \(\chi_{\rm gc}\,\mathbf{E}\mathbf{E}/4\pi\) involves the guiding-center electric susceptibility [4]\(\chi_{\rm gc}\equiv 4\pi\int_{P}F_{\rm gc}(mc^{2}/B^{2})\), while the correction terms involving \(P_{\parallel}\left(\widehat{\mathtt{0}}\,\dot{\mathbf{X}}_{\perp}\;+\;\dot{ \mathbf{X}}_{\perp}\;\widehat{\mathtt{0}}\right)\), which are partially contributed by the moving electric-dipole contribution to the guiding-center magnetization (56), have been derived previously from simpler guiding-center Vlasov-Maxwell models [33; 51]. Future work will explore the symmetry properties of the remaining higher-order terms in the guiding-center stress tensor (97). ## V Summary The results of the Lie-transform analysis leading to higher-order guiding-center Hamiltonian dynamics yield the guiding-center Hamiltonian \[H_{\rm gc} = J\,(\Omega+\mathcal{S})\;+\;|\mathbf{P}_{0}|^{2}/2m \tag{98}\] \[+\;e\,\Phi\;-\;\nabla\boldsymbol{\cdot}\left(\mathbb{Q}_{\rm gc} \boldsymbol{\cdot}\mathbf{E}\right)\] and the guiding-center Lagrangian \[L_{\rm gc} = \left[\frac{e}{c}\mathbf{A}\;+\;\mathbf{P}_{0}\;-\;J\left( \mathbf{R}+\frac{1}{2}\,\nabla\boldsymbol{\times}\widehat{\mathtt{0}}\right) \right]\boldsymbol{\cdot}\dot{\mathbf{X}} \tag{99}\] \[+\;J\;\dot{\theta}\;-\;H_{\rm gc},\] where \(\mathbf{P}_{0}\equiv P_{\parallel}\,\widehat{\mathtt{b}}+\mathbf{E}\times e \widehat{\mathtt{0}}/\Omega\) and the symmetric dyadic tensor (50) generates a finite-Larmor-radius (FLR) correction to the electrostatic potential energy \(e\,\Phi\) in Eq. (98). The presence of the gyrogauge fields \((\mathcal{S},\mathbf{R})\) guarantees that the guiding-center Hamiltonian dynamics derived from the guiding-center Lagrangian (99) are gyrogauge invariant (i.e., these equations are not only gyroangle-independent, but they are also independent on how the gyroangle is measured). Next, the explicit dependence of the the guiding-center Lagrangian (99) on the electric and magnetic fields \((\mathbf{E},\mathbf{B})\) yields guiding-center polarization and magnetization effects, represented by the vector fields \((\boldsymbol{\mathcal{P}}_{\mathrm{gc}},\boldsymbol{\mathcal{M}}_{\mathrm{gc}})\), in the guiding-center Maxwell equations (66)-(67). The guiding-center Vlasov-Maxwell equations are, then, derived by an Eulerian variational principle from which exact energy-momentum conservation laws are derived. Future work will explore the the invariance properties of the energy-momentum conservation laws as well as the symmetry properties of the guiding-center stress tensor \(\mathbb{T}_{\mathrm{gc}}\) defined in Eq. (86). In addition, the Hamiltonian formulation of the guiding-center Vlasov-Maxwell equations will be constructed. ###### Acknowledgements. The present work was supported by the National Science Foundation grant PHY-2206302. **AUTHOR DECLARATIONS** **Conflict of Interest** The author has no conflicts to disclose. **Author Contributions** Alain J. Brizard: Conceptualization (lead); Formal analysis (lead); Writing - original draft (lead);Writing - review & editing (lead). **DATA AVAILABILITY** No data was generated in the course of this work. ## Appendix A Time-dependent Gyrogauge Invariance By introducing the fixed unit-vectors \((\widehat{1},\widehat{2},\widehat{\mathsf{b}}\equiv\widehat{1}\crossdot 2)\), we write the definitions for the rotating unit-vectors \[\widehat{\rho} \equiv \cos\theta\;\widehat{1}\;-\;\sin\theta\;\widehat{2}\] \[\widehat{\perp} \equiv -\;\sin\theta\;\widehat{1}\;-\;\cos\theta\;\widehat{2}\;\Biggr{\}}\,, \tag{100}\] where the gyroangle \(\theta\) is measured _clockwise_ from the \(\widehat{1}\)-axis (for a positively-charged particle), so that \(\widehat{\perp}\equiv\partial\widehat{\rho}/\partial\theta\). We note that, while the choice of the fixed unit-vectors \((\widehat{1},\widehat{2})\) can be made arbitrarily in the plane locally perpendicular to \(\widehat{\mathsf{b}}\), we must ensure that the resulting guiding-center equations of motion do not depend on a specific choice. Hence, our guiding-center Hamiltonian theory must be _gyrogauge_-invariant in the following sense. First, we allow the rotation of the unit-vectors \((\widehat{1},\widehat{2})\) about the magnetic unit-vector \(\widehat{\mathsf{b}}\) by an arbitrary angle \(\chi(\mathbf{x},t)\) that depends on the field position \(\mathbf{x}\) at time \(t\), so that \[\left(\begin{array}{c}\widehat{1}^{\prime}\\ \widehat{2}^{\prime}\end{array}\right)\;=\;\left(\begin{array}{cc}\cos\chi &\sin\chi\\ -\sin\chi&\cos\chi\end{array}\right)\cdot\left(\begin{array}{c}\widehat{1} \\ \widehat{2}\end{array}\right). \tag{101}\] Second we require that the rotating unit-vectors (100) be invariant under this rotation, i.e., \(\widehat{\rho}^{\prime}=\widehat{\rho}\) and \(\widehat{\perp}^{\prime}=\widehat{\perp}\), which implies that the gyroangle \(\theta\) must transform as \[\theta^{\prime}(\theta,\mathbf{x},t)\;=\;\theta+\chi(\mathbf{x},t) \tag{102}\] under the gyrogauge rotation (101). Third, we introduce the gyrogauge vector field \[\mathbf{R}\;\equiv\;\nabla\widehat{\perp}\boldsymbol{\cdot}\widehat{\rho}\;= \;\nabla\widehat{1}\boldsymbol{\cdot}\widehat{2}, \tag{103}\] and the gyrogauge scalar field \[\mathcal{S}\;\equiv\;\frac{\partial\widehat{\perp}}{\partial t}\boldsymbol{ \cdot}\widehat{\rho}\;=\;\frac{\partial\widehat{1}}{\partial t}\boldsymbol{ \cdot}\widehat{2}, \tag{104}\] which transform as \(\mathbf{R}^{\prime}=\mathbf{R}+\nabla\chi\) and \(\mathcal{S}^{\prime}=\mathcal{S}+\partial\chi/\partial t\) under the gyrogauge rotation (101). We, therefore, readily see that a gyrogauge-invariant guiding-center theory can only include the gyrogauge fields \((\mathbf{R},\mathcal{S})\) either as the one-form \(\mathsf{d}\theta-\mathbf{R}\cdot\mathsf{d}\mathbf{x}-\mathcal{S}\,\mathsf{d}t\), the gradient operator \(\nabla+\mathbf{R}\;\partial/\partial\theta\), or the partial time derivative \(\partial/\partial t+\mathcal{S}\,\partial/\partial\theta\), which are all gyrogauge invariant. Finally, we note that the vector fields [6] \[\nabla\crossdot\mathbf{R} = -\,\frac{1}{2}\,\epsilon_{ijk}\,b^{i}\;\nabla b^{j}\crossdot\nabla b ^{k}, \tag{105}\] \[\frac{\partial\mathbf{R}}{\partial t}-\nabla\mathcal{S} = -\,\nabla\widehat{\mathsf{b}}\boldsymbol{\cdot}\left(\widehat{ \mathsf{b}}\crossdot\frac{\partial\widehat{\mathsf{b}}}{\partial t}\right)\] are manifestly gyrogauge independent, since they are expressed entirely in terms of \(\widehat{\mathsf{b}}\), \(\nabla\widehat{\mathsf{b}}\), and \(\partial\widehat{\mathsf{b}}/\partial t\).
2304.05241
Nonergodic measurements of qubit frequency noise
Slow fluctuations of a qubit frequency are one of the major problems faced by quantum computers. To understand their origin it is necessary to go beyond the analysis of their spectra. We show that characteristic features of the fluctuations can be revealed using comparatively short sequences of periodically repeated Ramsey measurements, with the sequence duration smaller than needed for the noise to approach the ergodic limit. The outcomes distribution and its dependence on the sequence duration are sensitive to the nature of noise. The time needed for quantum measurements to display quasi-ergodic behavior can strongly depend on the measurement parameters.
Filip Wudarski, Yaxing Zhang, M. I. Dykman
2023-04-11T14:25:22Z
http://arxiv.org/abs/2304.05241v1
# Nonergodic measurements of qubit frequency noise ###### Abstract Slow fluctuations of a qubit frequency are one of the major problems faced by quantum computers. To understand their origin it is necessary to go beyond the analysis of their spectra. We show that characteristic features of the fluctuations can be revealed using comparatively short sequences of periodically repeated Ramsey measurements, with the sequence duration smaller than needed for the noise to approach the ergodic limit. The outcomes distribution and its dependence on the sequence duration are sensitive to the nature of noise. The time needed for quantum measurements to display quasi-ergodic behavior can strongly depend on the measurement parameters. ## I Introduction Due to the probabilistic nature of quantum measurements, many currently implemented quantum algorithms rely on repeatedly running a quantum computer. It is important that the qubit parameters remain essentially the same between the runs. This imposes a constraint on comparatively slow fluctuations of the qubit parameters, in particular qubit frequencies, and on developing means of revealing and characteriezing such fluctuations. Slow qubit frequency fluctuations have been a subject of intense studies [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Of primary interest has been their spectrum, although their statistics has also attracted interest [24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. This statistics is important as it may help to reveal the source of the underlying noise. In particular, fluctuations from the coupling to a few TLSs should be non-Gaussian [34; 35; 36; 37; 38; 39; 40; 25; 30]. The fluctuation statistics has been often described in terms of higher-order time correlators or their Fourier transforms, bi-spectra and high-order spectra. Most work thus far has been done on fluctuations induced by noise with the correlation time smaller than the qubit decay time. Here we show that important information about qubit frequency fluctuations can be extracted from what we call nonergodic measurements. The idea is to perform \(M\) successive qubit measurements (for example, Ramsey measurements) over time longer than the qubit decay time but shorter than the noise correlation time. The measurement outcomes are determined by the instantaneous state of the noise source, for example, by the instantaneous TLSs' states. They vary from one series of \(M\) measurements to another. Thus the outcome distribution reflects the distribution of the noise source over its states. It provides information that is washed out in the ensemble averaging inherent to ergodic measurements. Closely related is the question of how long should a quantum measurement sequence be to reach the ergodic limit in which the noise source explores all its states. Does the measurement duration depend on the type and parameters of the measurement, not only the noise source properties, and if so, on which parameters? A convenient and frequently used method of performing successive measurements is to repeat them periodically. In this case the duration of data acquisition of \(M\) measurements is \(\propto M\). For the measurements to be nonergodic it should suffice for this duration to be smaller than the noise correlation time. This imposes a limitation on \(M\) from above. The limitation on \(M\) from below is imposed by the uncertainty that comes from the quantum nature of the measurements and thus requires statistical averaging over the outcomes. We consider a periodic sequence of Ramsey measurements sketched in Fig. 1. At the beginning of a measurement the qubit, initially in the ground state \(|0\rangle\), is rotated about the \(y\)-axis of the Bloch sphere by \(\pi/2\), which brings it to the state \((|0\rangle+|1\rangle)/\sqrt{2}\). After time \(t_{R}\) the rotation is repeated and is followed by a projective measurement of finding the qubit in state \(|1\rangle\). The qubit is then reset to \(|0\rangle\), cf. [41]. In our scheme the measurements are repeated \(M\gg 1\) times, with period \(t_{\rm cyc}\). The outcome of a \(k\)th Ramsey measurement \(x_{k}\) is \(0\) or \(1\). The probability \(p\) to find \(x_{k}=1\) is determined by the qubit phase accumulated over time \(t_{R}\). This phase comes from the detuning of the qubit frequency from the frequency of the reference drive and from the noise-induced qubit frequency fluctuations \(\delta\omega_{\rm q}(t)\). The detuning is controllable, and we will use \(\phi_{R}\) to indicate the phase that comes from it. The Hamiltonian \(H_{\rm fl}\) that describes frequency fluctuations and the phase \(\theta_{k}\) accumulated in the \(k\)th measurement due to these fluctuations have the form \[H_{\rm fl}=-\frac{1}{2}\delta\omega_{\rm q}(t)\sigma_{z},\qquad\theta_{k}= \int_{kt_{\rm cyc}}^{kt_{\rm cyc}+t_{R}}\delta\omega_{\rm q}(t)dt. \tag{1}\] Figure 1: Schematics of \(M\) Ramsey measurements. In each measurement the qubit phase is accumulated over time \(t_{R}\). The measurements give the probability \(p\) to find the qubit in state \(|1\rangle\). After a measurement the qubit is reset to state \(|0\rangle\). The measurements are repeated with period \(t_{\rm cyc}\). Here we have set \(\hbar=1\); we associate the Pauli operators \(\sigma_{x,y,z}\) with the operators acting on the qubit states. We are interested in slow frequency fluctuations. The correlation time of \(\delta\omega_{\text{q}}(t)\) is \(\gtrsim t_{\text{cyc}}\) and may significantly exceed the reciprocal qubit decay rate. In terms of the phases \(\theta\) and \(\phi_{R}\), the probability to have \(x_{k}=1\) is [42] \[p(\theta)=\frac{1}{2}\left[1+e^{-t_{R}/T_{2}}\cos(\phi_{R}+\theta)\right], \tag{2}\] where \(T_{2}^{-1}\) is the qubit decoherence rate due to fast processes leading to decay and dephasing. In the absence of qubit frequency noise \(\theta=0\) for all \(M\) measurements and the distribution of the measurement outcomes is a binomial distribution [43]. Because of the frequency noise, the phase \(\theta\) in Eq. (2) becomes random, changing from one measurement to another, and thus the probability \(p(\theta)\) also becomes random. Then the outcomes distribution is determined not just by the quantum randomness, but also by the distribution of the values of \(\theta\). The randomness of the phase is captured by the probability \(\rho(m|M)\) to have \(x_{k}=1\) as a measurement outcome \(m\) times in \(M\) measurements, \(\rho(m|M)=\text{Prob}(\sum_{k=1}^{M}x_{k}=m)\). We consider \(\rho(m|M)\) for periodically repeated measurements, see Fig. 1. If the frequency noise has correlation time small compared to the period \(t_{\text{cyc}}\), the phases \(\theta_{k}\) in successive measurements are uncorrelated. Then \(\rho(m|M)\) is still given by a binomial distribution, \[\rho_{\text{binom}}(m|M)=\binom{M}{m}r_{1}^{m}(1-r_{1})^{M-m}, \tag{3}\] where \(r_{1}\equiv\langle x_{k}\rangle=\langle p(\theta)\rangle_{\theta}\); here \(\langle...\rangle_{\theta}\) indicates averaging over realizations of \(\theta\). For large \(M\) this distribution is close to a Gaussian peak centered at \(r_{1}\). We are interested in the opposite case of slow frequency noise. Here the distribution \(\rho(m|M)\) can strongly deviate from the binomial distribution. The deviation becomes pronounced and characteristic of the noise if \(Mt_{\text{cyc}}\) is comparable or smaller than the noise correlation time while \(M\) is still large. The effect is particularly clear in the _static limit_, where the noise does not change over time \(Mt_{\text{cyc}}\), i.e., the phase \(\theta\) remains constant during \(M\) measurements. Even though \(\theta\) is constant, its value \(\theta=\theta(\ell)\) is random, it varies from one series of \(M\) measurements to another; here \(\ell\) enumerates the series, and we assume that noise correlations decay between successive series. The probability \(P[\theta(\ell)]\) to have a given \(\theta(\ell)\) is determined by the noise statistics. The distribution of the outcomes \(\rho(m|M)\) is obtained by averaging the results of multiply repeated series of \(M\) measurements. Extending the familiar arguments that lead to Eq. (3), we find \[\rho(m|M)=\binom{M}{m}\sum_{\ell}P[\theta(\ell)]p^{m}[\theta(\ell)]\] \[\times\{1-p[\theta(\ell)]\}^{M-m} \tag{4}\] The distribution (4) directly reflects the distribution of the noise over its states. In particular, if the values of \(\theta(\ell)\) are discrete and well-separated (see an example below), \(\rho(m|M)\) has a characteristic fine structure with peaks located at \(m\approx Mp[\theta(\ell))]\) for \(M\gg 1\); the peak heights are determined by \(P[\theta(\ell)]\). An important example of slow frequency noise is the noise that results from dispersive coupling to a set of slowly switching TLSs, \[\delta\omega_{\text{q}}(t)=\sum_{n}V^{(n)}\hat{\tau}_{z}^{(n)}. \tag{5}\] Here \(n=1,...,N_{\text{TLS}}\) enumerates the TLSs, \(\hat{\tau}_{z}^{(n)}\) is the Pauli operator of the \(n\)th TLS, and \(V^{(n)}\) is the coupling parameter; the states of the \(n\)th TLS are \(\ket{0}^{(n)}\) and \(\ket{1}^{(n)}\), and \(\hat{\tau}_{z}^{(n)}\ket{i}^{(n)}=(-1)^{i}\ket{i}^{(n)}\) with \(i=0,1\). We assume that the TLSs do not interact with each other. Their dynamics is described by the balance equations for the state populations. The only parameters are the rates \(W_{ij}^{(n)}\) of \(\ket{i}^{(n)}\rightarrow\ket{j}^{(n)}\) transitions, where \(i,j=0,1\)[44]. The rates \(W_{ij}^{(n)}\) also give the stationary occupations of the TLSs states \(w_{0,1}^{(n)}\), \[w_{0}^{(n)}=W_{10}^{(n)}/W^{(n)},\quad w_{1}^{(n)}=W_{01}^{(n)}/W^{(n)}. \tag{6}\] Here \(W^{(n)}=W_{01}^{(n)}+W_{10}^{(n)}\) is the TLS relaxation rate. The value of \(\min W^{(n)}\) gives the reciprocal correlation time of the noise from the TLSs. In the static-limit approximation, the TLSs remain in the initially occupied states \(\ket{0}^{(n)}\) or \(\ket{1}^{(n)}\) during all \(M\) measurements. Then, from Eq. (5), the phase \(\delta\omega_{\text{q}}t_{R}\) that the qubit accumulates during a measurement is \[\theta(\{j_{n}\})=\sum_{n}V^{(n)}(-1)^{j_{n}}t_{R}, \tag{7}\] where \(j_{n}=0\) if the occupied TLS state is \(\ket{0}^{(n)}\) and \(j_{n}=1\) if the occupied state is \(\ket{1}^{(n)}\). The probability to have a given \(\theta(\{j_{n}\})\) is determined by the stationary state occupations, \(P[\theta(\{j_{n}\})]=\prod_{n}w_{j_{n}}^{(n)}\). For the TLSs' induced noise, \(\ell\) in Eq. (4) enumerates various combinations \(\{j_{n}\}\). With the increasing coupling \(V^{(n)}\), the separation of the values of \(\theta(\{j_{n}\})\) increases, helping to observe the fine structure of \(\rho(m|M)\). The expression for \(\rho(m|M)\) simplifies in the important case where the TLSs are symmetric, \(w_{0}^{(n)}=w_{1}^{(n)}=1/2\), and all coupling parameters are the same, \(V^{(n)}=V\), cf. [45; 9]. In this case \(\theta(\{j_{n}\})\) takes on values \(\theta_{\text{sym}}(\ell)=Vt_{R}(2\ell-N_{\text{TLS}})\) with \(0\leq\ell\leq N_{\text{TLS}}\), and \[\rho(m|M)=2^{-N_{\text{TLS}}}\binom{M}{m}\sum_{\ell}\binom{N_{ \text{TLS}}}{\ell}p^{m}[\theta_{\text{sym}}(\ell)]\] \[\times\{1-p[\theta_{\text{sym}}(\ell)]\}^{M-m}. \tag{8}\] The phases \(\theta_{\text{sym}}(\ell)\) are determined by the coupling constant \(V\) multiplied by the difference of the number of TLSs in the states \(|0\rangle\) and \(|1\rangle\), so that \(\theta_{\rm sym}(\ell)\) may be significantly larger than for a single TLS [46]. The probability \(\rho(m|M)\) of having "\(1\)" \(m\) times in \(M\) measurements has a characteristic form also in the case of Gaussian frequency fluctuations if the fluctuations are slow, so that \(\delta\omega_{\rm q}(t)\) does not change over time \(Mt_{\rm cyc}\). An important example of slow noise is \(1/f\) noise. In the static limit \(\rho(m|M)\) is described by an extension of Eq. (4), which takes into account that \(\theta\) takes on continuous values. Respectively, one has to change in Eq. (4) from the sum over \(\ell\) to the integral over \(\theta(\ell)\), with \(P[\theta(\ell)]\) becoming the probability density. For Gaussian noise \(P[\theta(\ell)]=(2\pi f_{0})^{-1/2}\exp[-\theta^{2}(\ell)/2f_{0}]\), where \(f_{0}=\langle\delta\omega_{\rm q}^{2}\rangle t_{R}^{2}\) (we assume that \(\langle\delta\omega_{\rm q}\rangle=0\)). The distribution \(\rho(m|M)\) does not have fine structure, it depends only on the noise intensity in the static limit. The opposite of the static limit is the ergodic limit, where \(Mt_{\rm cyc}\) is much larger than the noise correlation time and the noise has time to explore all states during the measurements. In this limit \(\rho(m|M)\) as a function of \(m/M\) has a narrow peak at \(r_{1}=\langle m/M\rangle\equiv\sum_{m}(m/M)\rho(m|M)\), with \(\langle[(m/M)-r_{1}]^{n}\rangle\propto M^{1-n}\). _Simulations._ We performed numerical simulations to explore the transition from the static to the ergodic limit and the features of \(\rho(m|M)\) for slow noise. We used \(t_{\rm cyc}=3t_{R}\). The measurements were simulated at least \(10^{5}\) times. In Figs. 2 and 3 we show \(\rho(m|M)\) for the noise from symmetric TLSs, \(W_{01}^{(n)}=W_{10}^{(n)}=W^{(n)}/2\). The results for asymmetric TLSs are similar [46]. Figure 2 shows evolution of \(\rho(m|M)\) with the varying measurements number \(M\). It is very different for different numbers of TLSs and the measurement parameter \(\phi_{R}\). The figure refers to a relatively weak qubit-TLS coupling. Panel (a) refers to a single TLS. Here, in the static limit \(\rho(m|M)\) is double-peaked, with the peaks at \(m/M\approx 0.92\) and \(0.78\), from Eq. (8). Such peaks are seen for \(M=100\) and \(M=10^{4}\), where \(MWt_{\rm cyc}=3.6\times 10^{-2}\) and \(3.6\), respectively, even though one might expect the system to be close to ergodic for \(M=10^{4}\). For \(M=30\) the fine structure is smeared, because \(M\) is not large enough to average out the uncertainty of quantum measurements, but \(\rho(m|M)\) displays a significant and characteristic asymmetry. For \(M=10^{5}\), where \(MWt_{\rm cyc}=36\), the distribution does approach the ergodic limit, with a single peak at \(m/M\approx 0.85\)[46]. Figure 2 (b) refers to 10 TLSs. Their scaled switching rates \(W^{(n)}t_{\rm cyc}\) form a geometric series, varying from \(\approx 0.32\) to \(\approx 3.7\times 10^{-4}\), so that the static limit does not apply and the fine structure is not resolved [46]. The asymmetry of \(\rho(m|M)\) is profound. It gradually decreases with the increasing \(M\). It is important that, for \(\phi_{R}=\pi/4\), the distribution approaches the ergodic limit for \(Mt_{\rm cyc}\times(\min W^{(n)})\gtrsim 30\), similar to the case of one TLS (the choice \(\phi_{R}=\pi/4\) is explained in [46]). The inset in Fig. 2 (b) shows the evolution of \(\rho(m|M)\) for \(1/f\)-type Gaussian frequency noise \(\delta\omega_{\rm q}(t)\) with the power spectrum \(S_{q}(\omega)=(2D/\pi)\int_{\omega_{\rm min}}^{\infty}dW/(W^{2}+\omega^{2})\). The cutoff frequency \(\omega_{\rm min}\) is set equal to the minimal switching rate of the 10 TLS in the main panel min(\(W^{(n)}\)), and the intensity \(D\) is chosen so that, in the ergodic limit, \(\rho(m|M)\) has a maximum for the same \(m/M\) as for the 10 TLSs [46]). Yet the evolution of \(\rho(m|M)\) with the increasing \(M\) is fairly different from that in the main panel. The result of Fig. 2 (c) is surprising. The data refers to the same 10 TLS as in panel (b), except that the phase of the Ramsey measurement is set to \(\phi_{R}=0\). The change of \(\phi_{R}\) does not affect the dynamics of the TLSs. However, for the same \(M\) values, the peak of \(\rho(m|M)\) is much narrower than where \(\phi_{R}=\pi/4\) and the ergodic limit is approached by the measurement outcomes for an order of magnitude smaller \(M\). A simple measure of closeness of \(\rho(m|M)\) to the ergodic limit is the variance \(\sigma_{M}^{2}=\sum_{m}(m/M)^{2}\rho(m|M)-r_{1}^{2}\), where \(r_{1}=\langle x_{k}\rangle\equiv\sum_{m}(m/M)\rho(m|M)_{M\to\infty}\). A straight Figure 2: Transition from nonergölc to ergodic behavior with the increasing number of measurements. Red diamonds, blue crosses, green dots, purple triangles, and orange squares in panels (a)-(c) show \(\rho(m|M)\) for \(M=30,10^{2},10^{3},10^{4}\) and \(10^{5}\), respectively. The parameter of the dispersive qubit-to-TLS coupling is \(Vt_{R}=0.2\). The control parameter \(\phi_{R}\) is \(\pi/4\) in panels (a) and (b). (a) Coupling to a single symmetric TLS with the scaled switching rate \(Wt_{R}=1.2\times 10^{-4}\). (b) Coupling to 10 symmetric TLSs with the switching rates \(W^{(n)}t_{R}=\exp(-3n/4)\), \(n=3,4,\ldots,12\). The inset shows the results for Gaussian \(1/f\)-type noise described in the text, \(f_{0}=\langle\delta\omega_{\rm q}^{2}\rangle t_{R}^{2}\). (c) Coupling to the same 10 TLSs as in panel (b), but for \(\phi_{R}=0\). (d) The variance of \(\rho(m|M)\) for the data in panels (a)-(c); the solid lines show the theory, Eq. (9), the data points show the results of the simulations. The dashed lines show the ergodic limit. forward calculation shows that \[\sigma_{M}^{2}=M^{-1}r_{1}(1-r_{1})+2M^{-1}\sum_{k=1}^{M-1}\tilde{r }_{2}(k)[1-(k/M)],\] \[\tilde{r}_{2}(k)=\langle x_{n}x_{n+k}\rangle-\langle x_{n}\rangle^ {2} \tag{9}\] (\(\tilde{r}_{2}(k)\) is the centered correlator of the measurement outcomes). For correlated noise \(\rho(m|M)\) differs from the binomial distribution (3) and \(\sigma_{M}^{2}\) is larger than its value \(r_{1}(1-r_{1})/M\) for uncorrelated noise. Still, in agreement with statistical physics, in the ergodic limit \(\sigma_{M}^{2}\propto M^{-1}\). In contrast, the static-limit value of \(\sigma_{M}^{2}\) is generally much larger and scales differently with \(M\). Figure 2 (d) shows how \(\sigma_{M}^{2}\) approaches the ergodic scaling. For \(\phi_{R}=\pi/4\) and the same correlation time of the noise from 1 or 10 TLSs and of Gaussian noise (\(\sim 1/\min W^{(n)}\) and \(\sim 1/\omega_{\min}\)), \(\sigma_{M}^{2}\) behaves similarly for large \(M\). Yet, for the same 10 TLSs, but for \(\phi_{R}=0\) the variance approaches the ergodic limit much faster. Figure 3 shows the effect of the strength of the coupling to the TLSs for an intermediate number of measurements \(M=100\). Panel (a) shows a profoundly double-peaked distribution for a single TLS, in excellent agreement with the static-limit (8). As expected, the distance between the peaks increases with the increasing coupling. For 10 TLSs, as seen in panel (b), the distribution is broad and strongly asymmetric. Both its shape and the position of the maximum sensitively depend on the coupling. It is important that the coupling parameter \(Vt_{R}\) can be changed in the experiment by varying \(t_{R}\), which helps pointing to the mechanism of the noise. We note the distinction from direct measurements of qubit frequency as a function of time [2; 5; 20], which is efficient for still much slower noise. _Discussion._ To reach ergodic limit, a system of 10 TLSs has to visit its \(2^{10}\) states. The needed time is a property of the TLSs themselves. However, the results of the measurements can approach quasi-ergodic limit, except for the far tail of the outcomes distribution, over a shorter time. This time depends on how the measurements are done. In our setup, the noise is measured by the qubit, and then the results are read through Ramsey measurements. An important parameter is the qubit-to-TLSs coupling, which we chose to be the same for all TLSs to avoid any bias. Unexpectedly, there is another important parameter, the phase \(\phi_{R}\). The effect of \(\phi_{R}\) on the convergence to the ergodic limit is not obvious in advance. It comes through the dependence on \(\phi_{R}\) of the centered correlator \(\tilde{r}_{2}(k)\) of the measurement outcomes. For weak coupling to slowly switching TLSs, \(V^{(n)}t_{R}\ll 1\) and \(W^{(n)}t_{R}\ll 1\), and for small \(\phi_{R}\) this correlator is small. Moreover, it falls off with the increasing \(k\) much faster than for \(\phi_{R}=\mathcal{O}(1)\)[46]. This indicates a reduced role of the noise correlations for small \(\phi_{R}\). Respectively, the ergodic limit is reached must faster with the increasing \(M\). _Conclusions_ We studied the distribution of the outcomes of periodically repeated Ramsey measurements with the sequence length \(Mt_{\rm cyc}\) shorter than needed to approach the ergodic limit. Such distribution proves to provide an alternative, and sensitive, means of characterizing qubit frequency noise with a long correlation time. The analytical results and simulations show that, for non-Gaussian noise, in particular the noise from TLSs, the distribution can display a characteristic fine structure. Even where there is no fine structure, the form of the distribution and its evolution with the sequence length are noise-specific. The results show that the way the system approaches the ergodic limit with the increasing number of quantum measurements depends not only on the noise source, but also on the character and parameters of the measurement. These parameters are not necessarily known in advance. Their effect can be strong and depends on the noise source. Measurement outcomes can practically approach the ergodic limit well before the noise source approaches this limit. FW and MID acknowledge partial support from NASA Academic Mission Services, Contract No. NNA16BD14C, and from Google under NASA-Google SAA2-403512.
2303.10833
Linear Codes Constructed From Two Weakly Regular Plateaued Functions with Index (p-1)/2
Linear codes are the most important family of codes in cryptography and coding theory. Some codes have only a few weights and are widely used in many areas, such as authentication codes, secret sharing schemes and strongly regular graphs. By setting $ p\equiv 1 \pmod 4 $, we construct an infinite family of linear codes using two distinct weakly regular unbalanced (and balanced) plateaued functions with index $ (p-1)/2 $. Their weight distributions are completely determined by applying exponential sums and Walsh transform. As a result, most of our constructed codes have a few nonzero weights and are minimal.
Shudi Yang, Tonghui Zhang, Zheng-An Yao
2023-03-20T02:37:39Z
http://arxiv.org/abs/2303.10833v2
# Linear Codes From Two Weakly Regular Plateaued Functions with index \((p-1)/2\)1 ###### Abstract Linear codes are the most important family of codes in coding theory. Some codes have only a few weights and are widely used in many areas, such as authentication codes, secret sharing schemes, association schemes and strongly regular graphs. By setting \(p\equiv 1\pmod{4}\), we construct an infinite family of linear codes using two weakly regular unbalanced (and balanced) plateaued functions with index \(\frac{p-1}{2}\). Most of our constructed codes have a few weights and are minimal. After analysing their punctured version, we find that they are projective codes containing some optimal ones. keywords: linear code, plateaued function, the weight distribution, Walsh transform. Msc: [2010] 94B15, 11T71 + Footnote †: journal: ## 1 Introduction Throughout the paper, we always set \(p\) to be an odd prime. We will use the symbol \(\mathbb{F}_{p}\) to denote the finite field with \(p\) elements. A linear code \(C\) over \(\mathbb{F}_{p}\) with length \(n\), dimension \(k\) and minimum distance \(d\) is said to have parameters \([n,k,d]\), which means that \(C\) is a \(k\)-dimensional linear subspace of \(\mathbb{F}_{p}^{n}\) with Hamming distance \(d\). The code \(C\) is called projective if its dual code has minimum distance larger than \(2\). The Hamming weight of a codeword \(\mathbf{c}\), denoted by \(\mathtt{wt}(\mathbf{c})\), is defined as the number of nonzero entries in \(\mathbf{c}\). Let \(A_{w}=\#\{\mathbf{c}\in C:\mathtt{wt}(\mathbf{c})=w\}\) for \(0\leqslant w\leqslant n\). Then the sequence \((A_{0},A_{1},A_{2},\ldots,A_{n})\) stands for the weight distribution of \(C\), where \(A_{0}=1\). The code \(C\) is called \(t\)-weight if the number of nonzero \(A_{w}\) for \(1\leqslant w\leqslant n\) equals \(t\). The weight distribution is of vital importance since it contains the information of computing the error probability of error detection and correction. In recent decades, a large number of linear codes have been investigated, most of which have a few weights and good parameters [3, 4, 7, 8, 10, 12, 16, 17, 22, 23, 25, 26]. The construction of linear codes is usually based on different functions, such as, Boolean functions [3], bent functions [19, 26], square functions [20] and weakly regular plateaued functions [4, 16, 17]. Let us introduce an efficient way to construct linear codes, which is proposed by Ding _et al._[5]. Let \(q=p^{m}\) for an integer \(m\), and \(D=\{d_{1},d_{2},\ldots,d_{n}\}\) be a subset of \(\mathbb{F}_{q}^{*}\). Define \[C_{D}=\left\{\mathbf{c}(a)=\left(\mathrm{Tr}(ax)\right)_{x\in D}:a\in\mathbb{ F}_{q}\right\},\] where \(\mathrm{Tr}\) is the absolute trace function. It can be checked that \(C_{D}\) is a linear code of length \(n\). The set \(D\) is called the defining set of \(C_{D}\). This approach is soon generalized by Li _et al._[11], who defined a class of codes by \[C_{D}=\left\{\mathbf{c}(a,b)=\left(\mathrm{Tr}(ax+by)\right)_{(x,y)\in D}:a,b \in\mathbb{F}_{q}\right\}, \tag{1.1}\] where the set \(D\) is a subset of \(\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\), and it is also called a defining set. Based on this method, Wu _et al._[21] offered a new approach to linear codes using the defining set \[D=\left\{(x,y)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}:f(x)+g(y)=0\right\}, \tag{1.2}\] where \(f\) and \(g\) are weakly regular bent functions from \(\mathbb{F}_{q}\) to \(\mathbb{F}_{p}\). Later, Cheng _et al._ in [4] introduced several linear codes \(C_{D}\) of (1.1) with a few weights by considering \(f\) and \(g\) to be weakly regular unbalanced \(s\)-plateaued functions in the defining set (1.2), where \(0\leqslant s\leqslant m\). In 2022, Smak [18] went deeper by choosing weakly regular unbalanced and balanced \(s_{f}\)-plateaued function \(f\) and \(s_{g}\)-plateaued function \(g\) in the defining set (1.2), where \(0\leqslant s_{f}\), \(s_{g}\leqslant m\). All of them studied the indexes of functions \(f\) and \(g\) among the set \(\{2,p-1\}\), that is, \(l_{f},l_{g}\in\{2,p-1\}\). Along the research line studied in [4; 18; 21], we further consider the index of \(\frac{p-1}{2}\), where \(p\equiv 1\pmod{4}\). Now the defining set is denoted by \[D_{f,g}=\left\{(x,y)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}:f(x)+g(y)=0 \right\}, \tag{1.3}\] where \(f\) and \(g\) are weakly regular unbalanced and balanced \(s\)-plateaued and \(t\)-plateaued functions, respectively, for \(0\leqslant s,\,t\leqslant m\). For clarity, we only concentrate on the case \(l_{g}=\frac{p-1}{2}\) and \(l_{f}\in\{2,\frac{p-1}{2},p-1\}\), where \(p\equiv 1\pmod{4}\). In this paper, we will investigate the constructed linear codes \(C_{D_{f,g}}\) of (1.1) and (1.3), and determine their parameters and the weight distributions using Walsh transform. Their punctured codes are also determined. As we will show later, they are projective, and some of them are optimal since they meet the Griesmer bound. The rest of this paper is arranged as follows. In Section 2, we present an introduction to the mathematical foundations, including cyclotomic fields and weakly regular plateaued functions. Section 3 gives necessary results for our computation. The main results are proposed in Section 4, where we study the weight distributions and the parameters of our constructed codes and their punctured ones. Section 5 shows the minimality and applications of these codes. Finally, we conclude the whole paper in Section 6. ## 2 Mathematical background In this section, we give a brief exposition of exponential sums, cyclotomic fields, cyclotomic classes and weakly regular plateaued functions. First of all, we set up notation and terminology. Let \(q=p^{m}\) for an integer \(m\geqslant 2\). The set of square (resp. non-square) elements in \(\mathbb{F}_{p}^{*}\) is denoted by \(S_{q}\) (resp. \(N_{sq}\)). Let \(\eta\) be the quadratic character of \(\mathbb{F}_{p}\), that is, \[\eta(x)=\begin{cases}0,&\text{if }x=0,\\ 1,&\text{if }x\in S_{q},\\ -1,&\text{if }x\in N_{sq}.\end{cases}\] ### Cyclotomic classes and cyclotomic fields Let \(\theta\) be a fixed primitive element of \(\mathbb{F}_{q}\) and \(N\geqslant 2\) be a divisor of \(q-1\). The cyclotomic classes of order \(N\) in \(\mathbb{F}_{q}\) are defined by \(C_{i}^{(N,q)}=\theta^{i}\langle\theta^{N}\rangle\) for \(i=0,1,\ldots,N-1\), where \(\langle\theta^{N}\rangle\) stands for the subgroup of \(\mathbb{F}_{q}^{*}\) generated by \(\theta^{N}\). Obviously, we have \(\mathbb{F}_{q}^{*}=\bigcup_{i=0}^{N-1}C_{i}^{(N,q)}\) and every cyclotomic class has the same number of elements, that is \(\#C_{i}^{(N,q)}=\frac{q-1}{N}\). The \(p\)-th cyclotomic field is denoted by \(K=\mathbb{Q}(\zeta_{p})\), where \(\zeta_{p}=\exp\left(\frac{2\pi\sqrt{-1}}{p}\right)\). The following lemma enunciates useful properties of this field. **Lemma 1**: **.** _([9]) The following assertions hold for \(K=\mathbb{Q}(\zeta_{p})\)._ \((1)\) _The ring of integers in \(K\) is \(\mathbb{Z}[\zeta_{p}]\), where \(\mathbb{Z}\) is the ring of integers, and \(\{\zeta_{p}^{i}:1\leqslant i\leqslant p-1\}\) is an integer basis of \(\mathbb{Z}[\zeta_{p}]\)._ \((2)\) _The field extension \(K/\mathbb{Q}\) is Galois of degree \(p-1\), and the Galois group \(\operatorname{Gal}(K/\mathbb{Q})=\{\sigma_{z}:z\in\mathbb{F}_{p}^{*}\}\), where the automorphism \(\sigma_{z}\) of \(K\) is defined by \(\sigma_{z}(\zeta_{p})=\zeta_{p}^{z}\)._ \((3)\) _The cyclotomic field \(K\) has a unique quadratic subfield \(\mathbb{Q}(\sqrt{p^{*}})\), where \(p^{*}=\eta(-1)p\). For \(z\in\mathbb{F}_{p}^{*}\), \(\sigma_{z}(\sqrt{p^{*}})=\eta(z)\sqrt{p^{*}}\)._ By Lemma 1, it is easily proved that \(\sigma_{z}(\zeta_{p}^{a})=\zeta_{p}^{za}\) and \(\sigma_{z}(\sqrt{p^{*}}^{m})=\eta^{m}(z)\sqrt{p^{*}}^{m}\) for all \(a\in\mathbb{F}_{p}\) and \(z\in\mathbb{F}_{p}^{*}\). ### Exponential sums In this subsection, we briefly sketch the concept of exponential sums. Let \(\eta_{m}\) denote the quadratic character of \(\mathbb{F}_{q}\), where \(q=p^{m}\). The quadratic Gauss sum for \(\mathbb{F}_{q}\) is defined by \[G(\eta_{m})=\sum_{x\in\mathbb{F}_{q}^{*}}\eta_{m}(x)\chi_{1}(x),\] where \(\chi_{1}(x)=\zeta_{p}^{\operatorname{Tr}(x)}\) is the canonical additive character of \(\mathbb{F}_{q}\), and \(\operatorname{Tr}\) is the absolute trace function. From Theorem 5.15 in [13], \(G(\eta_{m})=(-1)^{m-1}\sqrt{p^{*}}^{m}\) and \(G(\eta)=\sqrt{p^{*}}\). For \(n\in\mathbb{N}\) and \(a\in\mathbb{F}_{q}^{*}\), the Jacobsthal sum is defined by \[H_{n}(a)=\sum_{x\in\mathbb{F}_{q}}\eta_{m}(x^{n+1}+ax)=\sum_{x\in\mathbb{F}_{q }}\eta_{m}(x)\eta_{m}(x^{n}+a).\] Define \[I_{n}(a)=\sum_{x\in\mathbb{F}_{q}}\eta_{m}(x^{n}+a).\] It is a companion sum related to Jacobsthal sums because \(I_{2n}(a)=I_{n}(a)+H_{n}(a)\), which is due to Theorem 5.50 in [13]. We can evaluate easily that \(I_{1}(a)=0\) and \(I_{2}(a)=-1\) for all \(a\in\mathbb{F}_{q}^{*}\). In general, the sums \(I_{n}(a)\) can be described in terms of Jacobi sums. **Lemma 2** (Theorem 5.51, [13]).: _For all \(a\in\mathbb{F}_{q}^{*}\) and \(n\in\mathbb{N}\), we have_ \[I_{n}(a)=\eta_{m}(a)\sum_{j=1}^{d-1}\lambda^{j}(-a)J(\lambda^{j},\eta_{m}),\] _where \(\lambda\) is a multiplicative character of \(\mathbb{F}_{q}\) of order \(d=\gcd(n,q-1)\), and \(J(\lambda^{j},\eta_{m})\) is a Jacobi sum in \(\mathbb{F}_{q}\)._ **Lemma 3** (Theorem 5.33, [13]).: _Let \(q=p^{m}\) be odd and \(f(x)=a_{2}x^{2}+a_{1}x+a_{0}\in\mathbb{F}_{q}[x]\) with \(a_{2}\neq 0\). Then_ \[\sum_{x\in\mathbb{F}_{q}}\chi_{1}(f(x))=\chi_{1}(a_{0}-a_{1}^{2}(4a_{2})^{-1}) \eta_{m}(a_{2})G(\eta_{m}).\] ### Weakly regular plateaued functions We now introduce weakly regular plateaued functions and review some basic facts about them. Let \(f:\mathbb{F}_{q}\to\mathbb{F}_{p}\) be a \(p\)-ary function. For \(\beta\in\mathbb{F}_{q}\), the Walsh transform of \(f\) is defined as a complex-valued function \(\widehat{\chi}_{f}\) on \(\mathbb{F}_{q}\), \[\widehat{\chi}_{f}(\beta)=\sum_{x\in\mathbb{F}_{q}}\zeta_{p}^{f(x)- \operatorname{Tr}(\beta x)}.\] A \(p\)-ary function \(f\) is called to be balanced if it satisfies \(\widehat{\chi}_{f}(0)=0\); otherwise, it is called unbalanced over \(\mathbb{F}_{q}\). As a natural extension of bent functions, Zheng _et al._ firstly set up the concept of plateaued functions in characteristic \(2\) in [24], and later Mesnager [14] gave a general version of any characteristic \(p\). Several years ago, Mesnager _et al._ presented the notion of (non)-weakly regular plateaued functions in their work [15]. We follow the notation used in [15]. For each \(\beta\in\mathbb{F}_{q}\), a function \(f\) is called \(s\)-plateaued if \(|\widehat{\chi}_{f}(\beta)|^{2}\in\{0,p^{m+s}\}\), where \(0\leqslant s\leqslant m\). It is worth noting that every bent function is \(0\)-plateaued. The Walsh support of function \(f\), denoted by \(\mathcal{S}_{f}\), is defined by \[\mathcal{S}_{f}=\{\beta\in\mathbb{F}_{q}:|\widehat{\chi}_{f}(\beta)|^{2}=p^{m +s}\}.\] Applying the Parseval identity, one gets the absolute Walsh distribution of plateaued functions. **Lemma 4**.: _(Lemma 1, [14]) Let \(f\) be an \(s\)-plateaued function. Then for \(\beta\in\mathbb{F}_{q}\), \(|\widehat{\chi}_{f}(\beta)|^{2}\) takes \(p^{m-s}\) times the value \(p^{m+s}\) and \(p^{m}-p^{m-s}\) times the value \(0\)._ From Lemma 4, the cardinality of \(\mathcal{S}_{f}\) is given by \(\#\mathcal{S}_{f}=p^{m-s}\). **Definition 1**.: ([15]) Let \(f\) be an \(s\)-plateaued function, where \(0\leqslant s\leqslant m\). Then, \(f\) is called weakly regular \(s\)-plateaued if there exists a complex number \(u\) having unit magnitude such that \[\widehat{\chi}_{f}(\beta)\in\{0,up^{\frac{m+s}{2}}\zeta_{p}^{g(\beta)}\}\] for all \(\beta\in\mathbb{F}_{q}\), where \(g\) is a \(p\)-ary function over \(\mathbb{F}_{q}\) satisfying \(g(\beta)=0\) for all \(\beta\in\mathbb{F}_{q}\setminus\mathcal{S}_{f}\). Otherwise, if \(u\) depends on \(\beta\) with \(|u|=1\), then \(f\) is called non-weakly regular \(s\)-plateaued. Particularly, a weakly regular plateaued function \(f\) is said to be regular plateaued if \(u=1\). **Lemma 5**.: _(Lemma 5, [15]) Let \(\beta\in\mathbb{F}_{q}\) and \(f\) be a weakly regular \(s\)-plateaued function. For every \(\beta\in\mathcal{S}_{f}\) we have_ \[\widehat{\chi}_{f}(\beta)=\varepsilon_{f}\sqrt{p^{*}}^{n+s}\zeta_{p}^{f^{*}( \beta)},\] _where \(\varepsilon_{f}\in\{\pm 1\}\) is the sign of \(\widehat{\chi}_{f}\) and \(f^{\star}\) is a \(p\)-ary function over \(\mathbb{F}_{q}\) with \(f^{\star}(\beta)=0\) for all \(\beta\in\mathbb{F}_{q}\setminus\mathcal{S}_{f}\). We call \(f^{\star}\) the dual function of \(f\)._ We now introduce two non-trivial subclasses of weakly regular plateaued functions. Let \(f\) be a weakly regular unbalanced (resp. balanced) \(s\)-plateaued function with \(0\leqslant s\leqslant m\). We denote by WRP (resp. WRPB) the subclass of the unbalanced (resp. balanced) functions \(f\) that meet the following homogeneous conditions simultaneously: 1. \(f(0)=0\); 2. There exists a positive integer \(h\), such that \(2\mid h\), \(\gcd(h-1,p-1)=1\) and \(f(zx)=z^{h}f(x)\) for any \(z\in\mathbb{F}_{p}^{*}\) and \(x\in\mathbb{F}_{q}\). **Remark 1**.: For every \(f\in\) WRP (resp. \(f\in\) WRPB), we have \(0\in\mathcal{S}_{f}\) (resp. \(0\notin\mathcal{S}_{f}\)). The following lemmas, due to [16] and [18], play a significant role in calculating the parameters of our constructed codes. **Lemma 6**.: _(Lemma 6, [16]) Let \(f\in\)_ WRP _or \(f\in\)_ WRPB _with \(\widehat{\chi}_{f}(\beta)=\varepsilon_{f}\sqrt{p^{*}}^{m+s}\zeta_{p}^{f^{*}( \beta)}\) for every \(\beta\in\mathcal{S}_{f}\). Then, for every \(z\in\mathbb{F}_{p}^{*}\), \(z\beta\in\mathcal{S}_{f}\) if \(\beta\in\mathcal{S}_{f}\), and otherwise, \(z\beta\in\mathbb{F}_{q}\setminus\mathcal{S}_{f}\)._ **Lemma 7**.: _(Propositions 2 and 3, [16]) Let \(f\in\)_ WRP _or \(f\in\)_ WRPB _with \(\widehat{\chi}_{f}(\beta)=\varepsilon_{f}\sqrt{p^{*}}^{m+s}\zeta_{p}^{f^{*}( \beta)}\) for every \(\beta\in\mathcal{S}_{f}\). Then \(f^{\star}(0)=0\) and \(f^{\star}(z\beta)=z^{l_{f}}f^{\star}(\beta)\) for all \(z\in\mathbb{F}_{p}^{*}\) and \(\beta\in\mathcal{S}_{f}\), where \(l_{f}\) is an even positive integer with \(\gcd(l_{f}-1,p-1)=1\). We call \(l_{f}\) the index of \(f\)._ **Lemma 8**.: _(Lemma 10, [16]) Let \(f\in\)_ WRP _or \(f\in\)_ WRPB _with \(\widehat{\chi}_{f}(\beta)=\varepsilon_{f}\sqrt{p^{*}}^{m+s}\zeta_{p}^{f^{*}( \beta)}\) for every \(\beta\in\mathcal{S}_{f}\). For \(c\in\mathbb{F}_{p}\), define_ \[\mathcal{N}_{f}(c)=\#\{\beta\in\mathcal{S}_{f}:f^{\star}(\beta)=c\}.\] _When \(m-s\) is even, we have_ \[\mathcal{N}_{f}(c)=\begin{cases}p^{m-s-1}+(p-1)\eta^{m+1}(-1)\varepsilon_{f} \sqrt{p^{*}}^{m-s-2},&\text{if }c=0,\\ p^{m-s-1}-\eta^{m+1}(-1)\varepsilon_{f}\sqrt{p^{*}}^{m-s-2},&\text{if }c\in \mathbb{F}_{p}^{*}.\end{cases}\] _Otherwise,_ \[\mathcal{N}_{f}(c)=\begin{cases}p^{m-s-1},&\text{if }c=0,\\ p^{m-s-1}+\eta(c)\eta^{m}(-1)\varepsilon_{f}\sqrt{p^{*}}^{m-s-1},&\text{if }c \in\mathbb{F}_{p}^{*}.\end{cases}\] **Lemma 9**.: _(Lemma 3.12, [18]) Let \(f,\,g\in\mathrm{WRP}\) or \(f,\,g\in\mathrm{WRPB}\), with \(\widehat{\chi}_{f}(\alpha)=\varepsilon_{f}\sqrt{p^{*}}^{m+s}\zeta_{p}^{f^{*}( \alpha)}\) and \(\widehat{\chi}_{g}(\beta)=\varepsilon_{g}\sqrt{p^{*}}^{m+t}\zeta_{p}^{g^{*}( \beta)}\) for \(\alpha\in\mathcal{S}_{f}\) and \(\beta\in\mathcal{S}_{g}\), respectively. Define_ \[\mathcal{T}(0) =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{*}(a)+g^{*} (b)=0\},\] \[\mathcal{T}(c) =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{*}(a)+g^{*} (b)=c\},\text{ where }c\in\mathbb{F}_{p}^{*}.\] _Then we have_ \[\mathcal{T}(0) =\begin{cases}p^{2m-s-t-1}+(p-1)p^{-1}\varepsilon_{f}\varepsilon _{g}\sqrt{p^{*}}^{2m-s-t},&\text{if }s+t\text{ is even},\\ p^{2m-s-t-1},&\text{if }s+t\text{ is odd},\end{cases}\] \[\mathcal{T}(c) =\begin{cases}p^{2m-s-t-1}-p^{-1}\varepsilon_{f}\varepsilon_{g} \sqrt{p^{*}}^{2m-s-t},&\text{if }s+t\text{ is even},\\ p^{2m-s-t-1}+\eta(c)\varepsilon_{f}\varepsilon_{g}\sqrt{p^{*}}^{2m-s-t-1},& \text{if }s+t\text{ is odd}.\end{cases}\] **Lemma 10**.: _(Lemma 3.7, [18]) Let \(n=\#D_{f,g}\), where \(D_{f,g}\) is defined by (1.3) with \(f\) and \(g\) be as in Lemma 9. If \(f,\,g\in\mathrm{WRPB}\), then \(n=p^{2m-1}-1\). If \(f,\,g\in\mathrm{WRP}\), then_ \[n=\begin{cases}p^{2m-1}-1,&\text{if }2\nmid s+t,\\ p^{2m-1}-1+(p-1)p^{-1}\varepsilon_{f}\varepsilon_{g}\sqrt{p^{*}}^{2m+s+t},& \text{if }2\mid s+t.\end{cases}\] ## 3 Auxiliary results To get the frequencies of codewords in the constructed codes, we will need several lemmas which are depicted and proved in the sequel. **Lemma 11**.: _Let \(p\equiv 1\pmod{2}\). For the quadratic character \(\eta\) over \(\mathbb{F}_{p}\), we have_ \[\sum_{u\in S_{q}}\sum_{\begin{subarray}{c}v\in S_{q}\\ v\neq\pm u\end{subarray}}\eta(u+v)=-\frac{p-1}{2}(\eta(2)+1),\] \[\sum_{u\in N_{sq}}\sum_{\begin{subarray}{c}v\in N_{sq}\\ v\neq\pm u\end{subarray}}\eta(u+v)=\frac{p-1}{2}(\eta(2)+1).\] Proof.: Notation that \(-1\in S_{q}\) when \(p\equiv 1\pmod{4}\), and \(-1\in N_{sq}\) when \(p\equiv 3\pmod{4}\). It follows that \[\sum_{u\in S_{q}}\sum_{\begin{subarray}{c}v\in S_{q}\\ v\neq\pm u\end{subarray}}\eta(u+v) =\sum_{u\in S_{q}}\eta(u)\sum_{\begin{subarray}{c}v\in S_{q}\\ v\neq\pm u\end{subarray}}\eta(1+v)\] \[=\sum_{u\in S_{q}}\sum_{\begin{subarray}{c}v\in S_{q}\\ v\neq\pm 1\end{subarray}}\eta(1+v)\] \[=\frac{p-1}{2}\Big{(}\sum_{v\in S_{q}}\eta(1+v)-\eta(2)\Big{)}\] \[=\frac{p-1}{2}\Big{(}\frac{1}{2}\sum_{x\in\mathbb{F}_{p}}\eta(1+x ^{2})-\frac{1}{2}-\eta(2)\Big{)}\] \[=\frac{p-1}{2}\Big{(}\frac{1}{2}I_{2}(1)-\frac{1}{2}-\eta(2) \Big{)}.\] The first assertion then follows from the fact that \(I_{2}(1)=-1\). The second one is analogously proved and is omitted here. **Lemma 12**.: _Let \(p\equiv 1\pmod{4}\) and \(f,\,g\in\mathrm{WRP}\) or \(f,\,g\in\mathrm{WRPB}\), with \(\widehat{\chi}_{f}(\alpha)=\varepsilon_{f}\sqrt{p^{*}}^{m+s}\zeta_{p}^{f^{*}( \alpha)}\) and \(\widehat{\chi}_{g}(\beta)=\varepsilon_{g}\sqrt{p^{*}}^{m+t}\zeta_{p}^{g^{*}( \beta)}\) for every \(\alpha\in\mathcal{S}_{f}\) and every \(\beta\in\mathcal{S}_{g}\), respectively. Suppose that \(s+t\) is odd. Write_ \[B_{S_{q}} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{*}(a)+g^{*}( b)\in S_{q},f^{*}(a)-g^{*}(b)\in S_{q}\},\] \[B_{N_{sq}} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{*}(a)+g^{*}( b)\in N_{sq},f^{*}(a)-g^{*}(b)\in N_{sq}\}.\] _Then if \(2\nmid m-s\) and \(2\mid m-t\), we have_ \[B_{S_{q}} =\frac{p-1}{2}\sqrt{p}^{2m-s-t-3}\Big{(}\frac{p-1}{2}\sqrt{p}^{2 m-s-t-1}-\eta(2)\varepsilon_{f}\sqrt{p}^{n-t}\] \[\quad+\frac{p+1}{2}\varepsilon_{g}\sqrt{p}^{n-s-1}+(\eta(2)+p) \varepsilon_{f}\varepsilon_{g}\Big{)},\] \[B_{N_{sq}} =\frac{p-1}{2}\sqrt{p}^{2m-s-t-3}\Big{(}\frac{p-1}{2}\sqrt{p}^{2 m-s-t-1}+\eta(2)\varepsilon_{f}\sqrt{p}^{n-t}\] \[\quad+\frac{p+1}{2}\varepsilon_{g}\sqrt{p}^{n-s-1}-(\eta(2)+p) \varepsilon_{f}\varepsilon_{g}\Big{)}.\] _Otherwise if \(2\mid m-s\) and \(2\nmid m-t\), we have_ \[B_{S_{q}} =\frac{p-1}{2}\sqrt{p}^{2m-s-t-3}\Big{(}\frac{p-1}{2}\sqrt{p}^{2m-s -t-1}-\eta(2)\varepsilon_{g}\sqrt{p}^{m-s}\] \[\qquad+\frac{p+1}{2}\varepsilon_{f}\sqrt{p}^{m-t-1}+(\eta(2)+p) \varepsilon_{f}\varepsilon_{g}\Big{)},\] \[B_{N_{sq}} =\frac{p-1}{2}\sqrt{p}^{2m-s-t-3}\Big{(}\frac{p-1}{2}\sqrt{p}^{2m -s-t-1}+\eta(2)\varepsilon_{g}\sqrt{p}^{m-s}\] \[\qquad+\frac{p+1}{2}\varepsilon_{f}\sqrt{p}^{m-t-1}-(\eta(2)+p) \varepsilon_{f}\varepsilon_{g}\Big{)}.\] Proof.: We only calculate \(B_{S_{q}}\) and omit the other. Now suppose that \(2\nmid m-s\) and \(2\mid m-t\). Let \(f^{\star}(a)+g^{\star}(b)=u\), \(f^{\star}(a)-g^{\star}(b)=v\), where \(u,v\in\mathbb{F}_{p}^{*}\). Then we have \(f^{\star}(a)=\frac{u+v}{2}\), \(g^{\star}(b)=\frac{u-v}{2}\) and so \[B_{S_{q}}=\sum_{u\in S_{q}}\sum_{v\in S_{q}}\mathcal{N}_{f}(\frac{u+v}{2}) \mathcal{N}_{g}(\frac{u-v}{2}),\] where \(\mathcal{N}_{f}\) and \(\mathcal{N}_{g}\) are computed in Lemma 8. It follows that \[B_{S_{q}}=\sum_{u\in S_{q}}\mathcal{N}_{f}(u)\mathcal{N}_{g}(0)+\sum_{u\in S_{ q}}\mathcal{N}_{f}(0)\mathcal{N}_{g}(u)+S,\] where \[S=\sum_{u\in S_{q}}\sum_{\begin{subarray}{c}v\in S_{q}\\ v\neq\pm u\end{subarray}}\mathcal{N}_{f}(\frac{u+v}{2})\mathcal{N}_{g}(\frac{u -v}{2}). \tag{3.1}\] Observe that \(\frac{u-v}{2}\neq 0\) in (3.1). If we write \(c=\frac{u-v}{2}\neq 0\), then from Lemma 8, \[S =\mathcal{N}_{g}(c)\sum_{u\in S_{q}}\sum_{\begin{subarray}{c}v\in S _{q}\\ v\neq\pm u\end{subarray}}\mathcal{N}_{f}(\frac{u+v}{2})\] \[=\mathcal{N}_{g}(c)\sum_{u\in S_{q}}\sum_{\begin{subarray}{c}v\in S _{q}\\ v\neq\pm u\end{subarray}}\Big{(}p^{m-s-1}+\eta(\frac{u+v}{2})\varepsilon_{f} \sqrt{p}^{m-s-1}\Big{)}\] \[=\mathcal{N}_{g}(c)\Big{(}\frac{p-1}{2}\cdot\frac{p-5}{2}p^{m-s-1 }+\eta(2)\varepsilon_{f}\sqrt{p}^{m-s-1}\sum_{u\in S_{q}}\sum_{\begin{subarray} {c}v\in S_{q}\\ v\neq\pm u\end{subarray}}\eta(u+v)\Big{)}.\] The desired assertion then follows from Lemmas 8 and 11. ## 4 Main results In this section, we fix \(p\equiv 1\pmod{4}\). Let \(f,\,g\in\text{WRP}\) or \(f,\,g\in\text{WRPB}\). For \(\alpha,\,\beta\in\mathbb{F}_{q}\), we may assume from Lemma 5 that \(\widehat{\chi}_{f}(\alpha)=\varepsilon_{f}\sqrt{p}^{m+s}\zeta_{p}^{f^{\star}( \alpha)}\) and \(\widehat{\chi}_{g}(\beta)=\varepsilon_{g}\sqrt{p}^{m+t}\zeta_{p}^{g^{\star}(\beta)}\), where \(\alpha\in\mathcal{S}_{f}\), \(\beta\in\mathcal{S}_{g}\), \(\varepsilon_{f},\varepsilon_{g}\in\{\pm 1\}\) and \(0\leqslant s,t\leqslant m\). The dual functions \(f^{\star}\) and \(g^{\star}\) are given in Lemma 7 with \(l_{f}\in\{2,\frac{p-1}{2},p-1\}\) and \(l_{g}=\frac{p-1}{2}\). For \((a,b)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\), we define \[N_{0}=\#\left\{(x,y)\in\mathbb{F}_{q}^{2}:f(x)+g(y)=0,\mathrm{Tr}(ax+by)=0 \right\}. \tag{4.1}\] ### The calculation of \(N_{0}\) To compute the weights of codewords in our codes, it suffices to determine the values of \(N_{0}\) in (4.1), which are stated in Lemmas 13, 14 and 15. **Lemma 13**.: _Let \(f,\,g\in\mathrm{WRP}\) or \(f,\,g\in\mathrm{WRPB}\) with \(l_{g}=\frac{p-1}{2}\). Suppose that \(s+t\) is odd and \((a,b)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\). For \((a,b)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have \(N_{0}=p^{2m-2}\), and for \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have the following assertions. When \(l_{f}=\frac{p-1}{2}\), we have_ \[N_{0}=\begin{cases}p^{2m-2},&\mathrm{if}\ f^{\star}(a)+g^{\star}(b)=0,\\ p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\mathrm{if}\ f^{\star}(a)+g^{\star}(b)\in S_{q},\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\mathrm{if}\ f^{\star}(a)+g^{\star}(b)\in N_{sq}.\end{cases}\] _When \(l_{f}=p-1\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+\frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2 )\sqrt{p}^{2m+s+t-3},&\mathrm{if}\ f^{\star}(a)\in S_{q},g^{\star}(b)=\pm f^{ \star}(a),\\ p^{2m-2}-\frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2)\sqrt{p}^{2m+s+t-3},& \mathrm{if}\ f^{\star}(a)\in N_{sq},g^{\star}(b)=\pm f^{\star}(a),\\ p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\mathrm{if}\ f^{\star}(a)+g^{\star}(b)\in S_{q},f^{\star}(a)-g^{ \star}(b)\in S_{q},\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\mathrm{if}\ f^{\star}(a)+g^{\star}(b)\in N_{sq},f^{\star}(a)-g^{ \star}(b)\in N_{sq},\\ p^{2m-2},&\mathrm{otherwise}.\end{cases}\] _When \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3 },&\text{ if }f^{\star}(a)=0,g^{\star}(b)\in S_{q}\\ &\text{ or }g^{\star}(b)=0,f^{\star}(a)\in S_{q},\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\text{ if }f^{\star}(a)=0,g^{\star}(b)\in N_{sq}\\ &\text{ or }g^{\star}(b)=0,f^{\star}(a)\in N_{sq},\\ p^{2m-2}-2(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\text{ if }f^{\star}(a)\in S_{q},g^{\star}(b)\in S_{q},\\ p^{2m-2}+2(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\text{ if }f^{\star}(a)\in N_{sq},g^{\star}(b)\in N_{sq},\\ p^{2m-2},&\text{ otherwise.}\end{cases}\] _When \(l_{f}=2\) and \(p\equiv 5\pmod{8}\), we have_ \[N_{0}=\begin{cases}p^{2m-2},&\text{ if }f^{\star}(a)=g^{\star}(b)=0,\\ p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\text{ if }f^{\star}(a)=0,g^{\star}(b)\in S_{q}\\ &\text{ or }g^{\star}(b)=0,f^{\star}(a)\in S_{q},\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3},&\text{ if }f^{\star}(a)=0,g^{\star}(b)\in N_{sq}\\ &\text{ or }g^{\star}(b)=0,f^{\star}(a)\in N_{sq},\\ p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-3}\eta(f^{\star}(a)) \Big{(}I_{4}\Big{(}\frac{g^{\star}(b)}{f^{\star}(a)}\Big{)}-\eta\Big{(}\frac{ g^{\star}(b)}{f^{\star}(a)}\Big{)}\Big{)},&\text{ otherwise,}\end{cases}\] _where \(I_{4}\) is a companion sum determined in Lemma 2._ Proof.: Let \(2\nmid s+t\). By definition in (4.1) and the orthogonal property of group characters, \[N_{0} =\frac{1}{p^{2}}\sum_{x,y\in\mathbb{F}_{q}}\sum_{z\in\mathbb{F}_{ p}}\zeta_{p}{}^{z(f(x)+g(y))}\sum_{h\in\mathbb{F}_{p}}\zeta_{p}{}^{h\text{Tr}( ax+by)}\] \[=\frac{1}{p^{2}}\sum_{x,y\in\mathbb{F}_{q}}\Big{(}1+\sum_{z\in \mathbb{F}_{p}^{\star}}\zeta_{p}{}^{z(f(x)+g(y))}\Big{)}\Big{(}1+\sum_{h\in \mathbb{F}_{p}^{\star}}\zeta_{p}{}^{h\text{Tr}(ax+by)}\Big{)}\] \[=p^{2m-2}+\frac{1}{p^{2}}\sum_{z\in\mathbb{F}_{p}^{\star}}\sum_{x, y\in\mathbb{F}_{q}}\zeta_{p}{}^{z(f(x)+g(y))}\] \[\qquad+\frac{1}{p^{2}}\sum_{x,y\in\mathbb{F}_{q}}\sum_{z\in \mathbb{F}_{p}^{\star}}\sum_{h\in\mathbb{F}_{p}^{\star}}\zeta_{p}{}^{z(f(x)+g (y))+h\text{Tr}(ax+by)}\] \[=p^{2m-2}+p^{-2}(\Delta_{1}+\Delta_{2}), \tag{4.2}\] where we write \[\Delta_{1} =\sum_{z\in\mathbb{F}_{p}^{*}}\sum_{x,y\in\mathbb{F}_{q}}\zeta_{p}{}^ {z(f(x)+g(y))},\] \[\Delta_{2} =\sum_{x,y\in\mathbb{F}_{q}}\sum_{z\in\mathbb{F}_{p}^{*}}\sum_{h \in\mathbb{F}_{p}^{*}}\zeta_{p}{}^{z(f(x)+g(y))+h\text{Tr}(ax+by)}.\] It follows that \[\Delta_{1} =\sum_{z\in\mathbb{F}_{p}^{*}}\sigma_{z}\big{(}\widehat{\chi}_{f} (0)\widehat{\chi}_{g}(0)\big{)}\] \[=\begin{cases}0,&\text{if }f,g\in\text{WRPB},\\ \varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in\mathbb{F}_{p}^{*}} \eta^{s+t}(z),&\text{if }f,g\in\text{WRP}.\end{cases}\] So we always have \(\Delta_{1}=0\) when \(2\nmid s+t\). Now it is sufficient to determine \(\Delta_{2}\). We observe from its definition that \[\Delta_{2} =\sum_{z\in\mathbb{F}_{p}^{*}}\sum_{h\in\mathbb{F}_{p}^{*}}\sum_{ x\in\mathbb{F}_{q}}\zeta_{p}{}^{z(f(x)-\text{Tr}(hax)}\sum_{y\in\mathbb{F}_{q}} \zeta_{p}{}^{zg(y)-\text{Tr}(hby)}\] \[=\sum_{z\in\mathbb{F}_{p}^{*}}\sum_{h\in\mathbb{F}_{p}^{*}}\sum_{ x\in\mathbb{F}_{q}}\zeta_{p}{}^{z(f(x)-\text{Tr}(\frac{h}{z}ax))}\sum_{y\in \mathbb{F}_{q}}\zeta_{p}{}^{z(g(y)-\text{Tr}(\frac{h}{z}by))}\] \[=\sum_{z\in\mathbb{F}_{p}^{*}}\sum_{h\in\mathbb{F}_{p}^{*}}\sigma _{z}\big{(}\widehat{\chi}_{f}(ha)\widehat{\chi}_{g}(hb)\big{)}. \tag{4.3}\] Obviously, when \((a,b)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\), then from Lemma 6\((ha,hb)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\) for every \(h\in\mathbb{F}_{p}^{*}\). Hence \(\widehat{\chi}_{f}(ha)=0\) or \(\widehat{\chi}_{g}(hb)=0\), and consequently by (4.3) \[\Delta_{2}=0.\] When \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\), then \((ha,hb)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\) for every \(h\in\mathbb{F}_{p}^{*}\). By (4.3), Lemmas 1, 5 and 7, we obtain \[\Delta_{2} =\sum_{z\in\mathbb{F}_{p}^{*}}\sigma_{z}\Big{(}\sum_{h\in\mathbb{ F}_{p}^{*}}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\zeta_{p}^{h^{l_{f}}f^{*}(a)+ h^{l_{g}}g^{*}(b))}\Big{)}\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta^{s+t}(z)\sigma_{z}\Big{(}\sum_{h\in\mathbb{F}_{p}^{*} }\zeta_{p}^{h^{l_{f}}f^{*}(a)+h^{l_{g}}g^{*}(b)}\Big{)}. \tag{4.4}\] In the following, we will apply Lemmas 1 and 3 to determine \(\Delta_{2}\) in (4.4) by considering the cases of \(l_{f}=2\), \(l_{f}=\frac{p-1}{2}\) and \(l_{f}=p-1\), separately. (1) The first case we consider is \(l_{f}=\frac{p-1}{2}\). For \(h\in\mathbb{F}_{p}^{*}\), clearly \(h^{\frac{p-1}{2}}=1\) if \(h\in S_{q}\), and \(-1\) otherwise. So we have from (4.4) that \[\Delta_{2} =\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in\mathbb{ F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{f^{*}(a)+g^{*}(b)}+ \sum_{h\in N_{sq}}\zeta_{p}^{-f^{*}(a)-g^{*}(b)}\Big{)}\] \[=\frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t} \Big{(}\sum_{z\in\mathbb{F}_{p}^{*}}\eta(z)\zeta_{p}^{z(f^{*}(a)+g^{*}(b))}+ \sum_{z\in\mathbb{F}_{p}^{*}}\eta(-z)\zeta_{p}^{-z(f^{*}(a)+g^{*}(b))}\Big{)}\] \[=(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\zeta_{p}^{z(f^{*}(a)+g^{*}(b))}\] \[=\begin{cases}0,&\text{if $f^{*}(a)+g^{*}(b)=0$},\\ (p-1)\varepsilon_{f}\varepsilon_{g}\eta(f^{*}(a)+g^{*}(b))\sqrt{p}^{2m+s+t+1},&\text{if $f^{*}(a)+g^{*}(b)\neq 0$}.\end{cases}\] (2) The second case is that \(l_{f}=p-1\). In this case, \(h^{p-1}=1\) for every \(h\in\mathbb{F}_{p}^{*}\). By (4.4), we have \[\Delta_{2} =\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{f^{*}( a)+g^{*}(b)}+\sum_{h\in N_{sq}}\zeta_{p}^{f^{*}(a)-g^{*}(b)}\Big{)}\] \[=\frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t} \Big{(}\sum_{z\in\mathbb{F}_{p}^{*}}\eta(z)\zeta_{p}^{z(f^{*}(a)+g^{*}(b))}+ \sum_{z\in\mathbb{F}_{p}^{*}}\eta(z)\zeta_{p}^{z(f^{*}(a)-g^{*}(b))}\Big{)}\] \[=\begin{cases}0,&\text{if $f^{*}(a)=g^{*}(b)=0$},\\ \frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2f^{*}(a))\sqrt{p}^{2m+s+t+1},&\text{if $f^{*}(a)\neq 0,g^{*}(b)=-f^{*}(a)$},\\ \frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2f^{*}(a))\sqrt{p}^{2m+s+t+1},&\text{if $f^{*}(a)\neq 0,g^{*}(b)=f^{*}(a)$},\\ \frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\Big{(}\eta(f^{*}(a)+g^{*}(b))+ \eta(f^{*}(a)-g^{*}(b))\Big{)}\sqrt{p}^{2m+s+t+1},&\text{otherwise}.\end{cases}\] (3) The last case is that \(l_{f}=2\) and it is distinguished between two subcases. **Subcase (a)**: If \(p\equiv 1\pmod{8}\), then \(-1\in C_{0}^{(4,p)}\). So from (4.4), \[\Delta_{2} =\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{h^{2}f^{ *}(a)+g^{*}(b)}+\sum_{h\in N_{sq}}\zeta_{p}^{h^{2}f^{*}(a)-g^{*}(b)}\Big{)}\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{h^{2}f^{ *}(a)+g^{*}(b)}+\sum_{h\in N_{sq}}\zeta_{p}^{-(h^{2}f^{*}(a)+g^{*}(b))}\Big{)}\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\Big{(}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sum_{h\in S_{q}}\zeta_{p}^{z(h^{2}f^{*}(a)+g^{*}(b)) }+\sum_{z\in\mathbb{F}_{p}^{*}}\eta(-z)\sum_{h\in N_{sq}}\zeta_{p}^{-z(h^{2}f^ {*}(a)+g^{*}(b))}\Big{)}.\] Replacing \(-z\) by \(z\) in the last double sum above, we obtain \[\Delta_{2} =\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in\mathbb{F} _{p}^{*}}\eta(z)\sum_{h\in\mathbb{F}_{p}^{*}}\zeta_{p}^{z(h^{2}f^{*}(a)+g^{*}(b))}\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\zeta_{p}^{zg^{*}(b)}\sum_{h\in\mathbb{F}_{p}^{*}} \zeta_{p}^{zh^{2}f^{*}(a)}\] \[=\begin{cases}0,&\text{if $f^{*}(a)=g^{*}(b)=0$},\\ (p-1)\varepsilon_{f}\varepsilon_{g}\eta(g^{*}(b))\sqrt{p}^{2m+s+t+1},&\text{ if $f^{*}(a)=0,g^{*}(b)\neq 0$},\\ (p-1)\varepsilon_{f}\varepsilon_{g}\eta(f^{*}(a))\sqrt{p}^{2m+s+t+1},&\text{ if $f^{*}(a)\neq 0,g^{*}(b)=0$},\\ -(p-1)\varepsilon_{f}\varepsilon_{g}\big{(}\eta(f^{*}(a))+\eta(g^{*}(b)) \big{)}\sqrt{p}^{2m+s+t+1},&\text{otherwise}.\end{cases}\] **Subcase (b)**: If \(p\equiv 5\pmod{8}\), then \(-1\in C_{2}^{(4,p)}\). So from (4.4), \[\Delta_{2} =\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{h^{2}f^ {*}(a)+g^{*}(b)}+\sum_{h\in N_{sq}}\zeta_{p}^{h^{2}f^{*}(a)-g^{*}(b)}\Big{)}\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{h^{2}f^ {*}(a)+g^{*}(b)}+\sum_{h\in S_{q}}\zeta_{p}^{-(h^{2}f^{*}(a)+g^{*}(b))}\Big{)}\] \[=2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\eta(z)\sigma_{z}\Big{(}\sum_{h\in S_{q}}\zeta_{p}^{h^{2}f^ {*}(a)+g^{*}(b)}\Big{)}\] \[=2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{h\in S_{q }}\sum_{z\in\mathbb{F}_{p}^{*}}\eta(z)\zeta_{p}^{z(h^{2}f^{*}(a)+g^{*}(b))}.\] Assume that \(f^{*}(a)g^{*}(b)\neq 0\). If \(\frac{g^{*}(b)}{f^{*}(a)}\in C_{2}^{(4,p)}\), then the equation \(h^{2}f^{*}(a)+g^{*}(b)=0\) has exactly two solutions \(h_{1},h_{2}\) in \(S_{q}\), where \(h_{2}=-h_{1}\). Otherwise if \(\frac{g^{*}(b)}{f^{*}(a)}\notin C_{2}^{(4,p)}\), then the inequality \(h^{2}f^{*}(a)+g^{*}(b)\neq 0\) holds for all \(h\) in \(S_{q}\). Consequently, if \(f^{*}(a)g^{*}(b)\neq 0\), then \[\Delta_{2} =2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t+1}\sum_{h\in S_{ q}}\eta(h^{2}f^{*}(a)+g^{*}(b))\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t+1}\sum_{h\in \mathbb{F}_{p}^{*}}\eta(h^{4}f^{*}(a)+g^{*}(b))\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t+1}\eta(f^{*}(a)) \Big{(}I_{4}\Big{(}\frac{g^{*}(b)}{f^{*}(a)}\Big{)}-\eta\Big{(}\frac{g^{*}(b)} {f^{*}(a)}\Big{)}\Big{)},\] where \(I_{4}\) is determined from Lemma 2. Thus we conclude that \[\Delta_{2}=\begin{cases}0,&\text{if }f^{\star}(a)=g^{\star}(b)=0,\\ (p-1)\varepsilon_{f}\varepsilon_{g}\eta(g^{\star}(b))\sqrt{p}^{2m+s+t+1},&\text {if }f^{\star}(a)=0,g^{\star}(b)\neq 0,\\ (p-1)\varepsilon_{f}\varepsilon_{g}\eta(f^{\star}(a))\sqrt{p}^{2m+s+t+1},& \text{if }f^{\star}(a)\neq 0,g^{\star}(b)=0,\\ \varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t+1}\eta(f^{\star}(a))\Big{(}I_ {4}\Big{(}\frac{g^{\star}(b)}{f^{\star}(a)}\Big{)}-\eta\Big{(}\frac{g^{\star}( b)}{f^{\star}(a)}\Big{)}\Big{)},&\text{otherwise}.\end{cases}\] The desired conclusion then follows from (4.2), completing the proof. **Lemma 14**.: _Let \(f,\,g\in\mathrm{WRP}\) with \(l_{g}=\frac{p-1}{2}\). Suppose that \(s+t\) is even and \((a,b)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\). For \((a,b)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have \(N_{0}=p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-4}\), and for \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have the following assertions. When \(l_{f}=\frac{p-1}{2}\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+ t-2},&\text{if }f^{\star}(a)+g^{\star}(b)=0,\\ p^{2m-2},&\text{if }f^{\star}(a)+g^{\star}(b)\neq 0.\end{cases}\] _When \(l_{f}=p-1\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+ t-2},&\text{if }f^{\star}(a)=g^{\star}(b)=0,\\ p^{2m-2}+\frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-2},&\text {if }f^{\star}(a)\neq 0,g^{\star}(b)=-f^{\star}(a)\\ &\text{or }f^{\star}(a)\neq 0,g^{\star}(b)=f^{\star}(a),\\ p^{2m-2},&\text{otherwise}.\end{cases}\] _When \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s +t-2},&\text{if }f^{\star}(a)=g^{\star}(b)=0,\\ p^{2m-2}+2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-2},&\text{if }f^{\star}(a)g^{ \star}(b)\in S_{q},\\ p^{2m-2},&\text{otherwise}.\end{cases}\] _When \(l_{f}=2\) and \(p\equiv 5\pmod{8}\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s +t-2},&\text{if }f^{\star}(a)=g^{\star}(b)=0,\\ p^{2m-2}+4\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-2},&\text{if }\frac{g^{\star}(b)}{f^{ \star}(a)}\in C_{2}^{(4,p)},\\ p^{2m-2},&\text{otherwise}.\end{cases}\] Proof.: The proof is completed in a manner analogous to the previous lemma by noting that \(2\mid s+t\). From (4.2) and (4.3), \[N_{0}=p^{2m-2}+p^{-2}(\Delta_{1}+\Delta_{2}),\] where \[\Delta_{1} =(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},\] \[\Delta_{2} =\sum_{z\in\mathbb{F}_{p}^{*}}\sum_{h\in\mathbb{F}_{p}^{*}} \sigma_{z}\Big{(}\widehat{\chi}_{f}(ha)\widehat{\chi}_{g}(hb)\Big{)}.\] We always have \(\Delta_{2}=0\) unless \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\). Now we let \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\). Then the value of \(\Delta_{2}\) in (4.4) is determined by distinguishing the cases of \(l_{f}=2\), \(l_{f}=\frac{p-1}{2}\) and \(l_{f}=p-1\), respectively. (1) The first case we consider is \(l_{f}=\frac{p-1}{2}\). It follows from (4.4) that \[\Delta_{2} =(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\zeta_{p}^{z(f^{*}(a)+g^{*}(b))}\] \[=\begin{cases}(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2 m+s+t},&\text{if }f^{*}(a)+g^{*}(b)=0,\\ -(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{if }f^{*}(a)+g^{*}(b) \neq 0.\end{cases}\] (2) The second case is that \(l_{f}=p-1\). Again from (4.4), we have \[\Delta_{2} =\frac{p-1}{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t} \Big{(}\sum_{z\in\mathbb{F}_{p}^{*}}\zeta_{p}^{z(f^{*}(a)+g^{*}(b))}+\sum_{z\in \mathbb{F}_{p}^{*}}\zeta_{p}^{z(f^{*}(a)-g^{*}(b))}\Big{)}\] \[=\begin{cases}(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2 m+s+t},&\text{if }f^{*}(a)=g^{*}(b)=0,\\ \frac{p-1}{2}(p-2)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{if }f^{*}(a) \neq 0,g^{*}(b)=-f^{*}(a)\\ &\text{or }f^{*}(a)\neq 0,g^{*}(b)=f^{*}(a),\\ -(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{otherwise.}\end{cases}\] (3) The last case is that \(l_{f}=2\) and we need only consider two different subcases. **Subcase (a)**: If \(p\equiv 1\pmod{8}\), then from (4.4), \[\Delta_{2} =\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in\mathbb{F }_{p}^{*}}\sum_{h\in\mathbb{F}_{p}^{*}}\zeta_{p}^{z(h^{2}f^{*}(a)+g^{*}(b))}\] \[=\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{z\in \mathbb{F}_{p}^{*}}\zeta_{p}^{zg^{*}(b)}\sum_{h\in\mathbb{F}_{p}^{*}}\zeta_{p}^ {zh^{2}f^{*}(a)}\] \[=\begin{cases}(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m +s+t},&\text{ if }f^{*}(a)=g^{*}(b)=0,\\ (p+1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{ if }f^{*}(a)g^{*}(b) \in S_{q},\\ -(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{ otherwise.}\end{cases}\] **Subcase (b)**: If \(p\equiv 5\pmod{8}\), then from (4.4), \[\Delta_{2}=2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\sum_{h\in S_{q}} \sum_{z\in\mathbb{F}_{p}^{*}}\zeta_{p}^{z(h^{2}f^{*}(a)+g^{*}(b))}.\] The value of \(\Delta_{2}\) is clear if \(f^{*}(a)g^{*}(b)=0\). We now assume that \(f^{*}(a)g^{*}(b)\neq 0\). If \(\frac{g^{*}(b)}{f^{*}(a)}\in C_{2}^{(4,p)}\), then the equation \(h^{2}f^{*}(a)+g^{*}(b)=0\) has exactly two solutions \(h_{1},h_{2}\) in \(S_{q}\), where \(h_{2}=-h_{1}\). Hence, \[\Delta_{2} =2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\Big{(}2(p-1)-( \frac{p-1}{2}-2)\Big{)}\] \[=(3p+1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}.\] Otherwise if \(\frac{g^{*}(b)}{f^{*}(a)}\notin C_{2}^{(4,p)}\), then the inequality \(h^{2}f^{*}(a)+g^{*}(b)\neq 0\) holds for all \(h\) in \(S_{q}\). Thus \[\Delta_{2} =2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}\times\frac{p-1} {2}\times(-1)\] \[=-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t}.\] So we conclude that \[\Delta_{2}=\begin{cases}(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s +t},&\text{ if }f^{*}(a)=g^{*}(b)=0,\\ (3p+1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{ if }\frac{g^{*}(b)}{f^{*}(a)} \in C_{2}^{(4,p)},\\ -(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t},&\text{ otherwise.}\end{cases}\] The desired conclusion then follows from (4.2), completing the proof. **Lemma 15**.: _Let \(f,\,g\in\mathrm{WRPB}\) with \(l_{g}=\frac{p-1}{2}\). Suppose that \(s+t\) is even and \((a,b)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\). For \((a,b)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have \(N_{0}=p^{2m-2}\), and for \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have the following assertions. When \(l_{f}=\frac{p-1}{2}\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2 m+s+t-4},&\text{if }f^{\star}(a)+g^{\star}(b)=0,\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-4},&\text{if }f^{ \star}(a)+g^{\star}(b)\neq 0.\end{cases}\] _When \(l_{f}=p-1\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2 m+s+t-4},&\text{if }f^{\star}(a)=g^{\star}(b)=0,\\ p^{2m-2}+\frac{p-1}{2}(p-2)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-4},& \text{if }f^{\star}(a)\neq 0,g^{\star}(b)=-f^{\star}(a)\\ &\text{or }f^{\star}(a)\neq 0,g^{\star}(b)=f^{\star}(a),\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-4},&\text{otherwise.}\end{cases}\] _When \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), we have_ \[N_{0}=\begin{cases}p^{2m-2}+(p-1)^{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{ 2m+s+t-4},&\text{if }f^{\star}(a)=g^{\star}(b)=0,\\ p^{2m-2}+(3p+1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-4},&\text{if } \frac{g^{\star}(b)}{f^{\star}(a)}\in C_{2}^{(4,p)},\\ p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{2m+s+t-4},&\text{otherwise.} \end{cases}\] Proof.: Notation that \(\Delta_{1}=\sum_{z\in\mathbb{F}_{p}^{\star}}\sigma_{z}\big{(}\widehat{\chi}_{ f}(0)\widehat{\chi}_{g}(0)\big{)}=0\) for \(f,\,g\in\mathrm{WRPB}\). From (4.2), \(N_{0}=p^{2m-2}+p^{-2}(\Delta_{1}+\Delta_{2})=p^{2m-2}+p^{-2}\Delta_{2}\), where \(\Delta_{2}\) is determined in Lemma 14. This completes the proof. ### Weight distributions of \(C_{d_{f,g}}\) from two weakly regular plateaued functions In this subsection, we will deal with the weight distributions of \(C_{D_{f,g}}\) defined by (1.1) and (1.3). Its length is denoted by \(n\) and is given in Lemma 10. We will show the main results in the following theorems explicitly. For abbreviation, we write \(\tau=2m+s+t\) and \(\gamma=2m-s-t\). **Theorem 1**.: _Suppose that \(f,\,g\in\mathrm{WRP}\) or \(f,\,g\in\mathrm{WRPB}\) with \(l_{g}=\frac{p-1}{2}\). Let \(s+t\) be odd. If \(l_{f}=\frac{p-1}{2}\), then \(C_{D_{f,g}}\) is a three-weight \([p^{2m-1}-1,2m]\) linear code with its weight distribution listed in Table 1. If \(l_{f}=p-1\), then \(C_{D_{f,g}}\) is a five-weight \([p^{2m-1}-1,2m]\) linear code with its weight distribution listed in Table 2. If \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), then \(C_{D_{f,g}}\) is a five-weight \([p^{2m-1}-1,2m]\) linear code with its weight distribution listed in Table 3. If \(l_{f}=2\) and \(p\equiv 5\pmod{8}\), then \(C_{D_{f,g}}\) is a \([p^{2m-1}-1,2m]\) linear code with its weight distribution listed in Table 4._ Proof.: From Lemma 10, the length is \(n=p^{2m-1}-1\). Let \((a,b)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\), the weight of nonzero codewords \(\mathbf{c}(a,b)\) is denoted by \(\mathtt{wt}(\mathbf{c}(a,b))\). It follows that \[\mathtt{wt}(\mathbf{c}(a,b))=n+1-N_{0},\] where \(N_{0}\) is given by Lemma 13. To be more precise, for each \((a,b)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=(p-1)p^{2m-2}.\] For each \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\), there are four different cases when \(\mathtt{wt}(\mathbf{c}(a,b))\neq(p-1)p^{2m-2}\). (1) When \(l_{f}=\frac{p-1}{2}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)\big{(}p^{2m-2}- \varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\,\,-3}\big{)},&\frac{p-1}{2}\mathcal{ T}(i)\text{ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\,\,-3}\big{)},& \frac{p-1}{2}\mathcal{T}(j)\text{ times},\end{cases}\] where \(\mathcal{T}(i)\) and \(\mathcal{T}(j)\) are computed in Lemma 9 for \(i\in S_{q}\) and \(j\in N_{sq}\). This leads to the weight distribution in Table 1. (2) When \(l_{f}=p-1\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)\big{(}p^{2m-2}- \frac{1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2)\sqrt{p}^{\,\,-3}\big{)},&E_{ 1}\text{ times},\\ (p-1)\big{(}p^{2m-2}+\frac{1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2)\sqrt{p}^ {\,\,-3}\big{)},&E_{2}\text{ times},\\ (p-1)\big{(}p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\,\,-3}\big{)},&B_ {S_{q}}\text{ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\,\,-3}\big{)},&B_ {N_{sq}}\text{ times},\end{cases}\] where the numbers \(B_{S_{q}}\) and \(B_{N_{sq}}\) are computed in Lemma 12, and \[E_{1} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{\star}(a)\in S_ {q},g^{\star}(b)=\pm f^{\star}(a)\}=(p-1)\mathcal{N}_{f}(i)\mathcal{N}_{g}(i),\] \[E_{2} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{\star}(a)\in N _{sq},g^{\star}(b)=\pm f^{\star}(a)\}=(p-1)\mathcal{N}_{f}(j)\mathcal{N}_{g}(j),\] for \(i\in S_{q}\), \(j\in N_{sq}\), and \(\mathcal{N}_{f}\) and \(\mathcal{N}_{g}\) are given in Lemma 8. The weight distribution in Table 2 is then established. (3) When \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)\big{(}p^{2m-2}-\varepsilon_{f }\varepsilon_{g}\sqrt{p}^{\tau\,-3}\big{)},&\text{$E_{3}$ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau\,-3}\big{)}, &\text{$E_{4}$ times},\\ (p-1)\big{(}p^{2m-2}+2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau\,-3}\big{)},&\text{$\frac{(p-1)^{2}}{4}\mathcal{N}_{f}(i)\mathcal{N}_{g}(i)$ times},\\ (p-1)\big{(}p^{2m-2}-2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau\,-3}\big{)},&\text{$\frac{(p-1)^{2}}{4}\mathcal{N}_{f}(j)\mathcal{N}_{g}(j)$ times},\end{cases}\] where \[E_{3} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{\star}(a)=0, g^{\star}(b)\in S_{q}\text{ or }g^{\star}(b)=0,f^{\star}(a)\in S_{q}\}\] \[=\frac{p-1}{2}\big{(}\mathcal{N}_{f}(0)\mathcal{N}_{g}(i)+ \mathcal{N}_{f}(i)\mathcal{N}_{g}(0)\big{)},\] \[E_{4} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{\star}(a)=0, g^{\star}(b)\in N_{sq}\text{ or }g^{\star}(b)=0,f^{\star}(a)\in N_{sq}\}\] \[=\frac{p-1}{2}\big{(}\mathcal{N}_{f}(0)\mathcal{N}_{g}(j)+ \mathcal{N}_{f}(j)\mathcal{N}_{g}(0)\big{)},\] for \(i\in S_{q}\) and \(j\in N_{sq}\). Thus we get the weight distribution listed in Table 3. (4) When \(l_{f}=2\) and \(p\equiv 5\pmod{8}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)\big{(}p^{2m-2}-\varepsilon_{f }\varepsilon_{g}\sqrt{p}^{\tau\,-3}\big{)},&\text{$E_{3}$ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau\,-3}\big{)},&\text{$E_{4}$ times},\\ (p-1)p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau\,-3}\eta(u)\Big{(} I_{4}\big{(}\frac{v}{u}\big{)}-\eta\big{(}\frac{v}{u}\big{)}\Big{)}&\text{$ \mathcal{N}_{f}(u)\mathcal{N}_{g}(v)$ times}.\\ \text{for all }u,v\in\mathbb{F}_{p}^{*},\end{cases}\] The weight distribution of this case is summarized in Table 4. **Theorem 2**.: _Suppose that \(f,\,g\in\mathrm{WRP}\) with \(l_{g}=\frac{p-1}{2}\). Let \(s+t\) be even. If \(l_{f}=\frac{p-1}{2}\), then \(C_{D_{f,g}}\) is a three-weight \([n,2m]\) linear code with its weight \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline 0 & 1 \\ \((p-1)p^{2m-2}\) & \(p^{2m}-1-E_{1}-E_{2}-B_{S_{q}}-B_{N_{sq}}\) \\ \((p-1)\big{(}p^{2m-2}-\frac{1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2)\sqrt{p}^{ \tau-3}\big{)}\) & \(E_{1}\) \\ \((p-1)\big{(}p^{2m-2}+\frac{1}{2}\varepsilon_{f}\varepsilon_{g}\eta(2)\sqrt{p}^{ \tau-3}\big{)}\) & \(E_{2}\) \\ \((p-1)\big{(}p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-3}\big{)}\) & \(B_{S_{q}}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}{}^{-3}\big{)}\) & \(B_{N_{sq}}\) \\ \hline \end{tabular} \end{table} Table 2: The weight distribution of \(C_{D_{f,g}}\) in Theorem 1 when \(l_{f}=p-1\). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline 0 & 1 \\ \((p-1)p^{2m-2}\) & \(p^{2m}-1-E_{3}-E_{4}-\frac{(p-1)^{2}}{4}\big{(}\mathcal{N}_{f}(i)\mathcal{N}_{ g}(i)+\mathcal{N}_{f}(j)\mathcal{N}_{g}(j)\big{)}\) \\ \((p-1)\big{(}p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-3}\big{)}\) & \(E_{3}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}{}^{-3}\big{)}\) & \(E_{4}\) \\ \((p-1)\big{(}p^{2m-2}+2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}{}^{-3}\big{)}\) & \(\frac{(p-1)^{2}}{4}\mathcal{N}_{f}(i)\mathcal{N}_{g}(i)\) \\ \((p-1)\big{(}p^{2m-2}-2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}{}^{-3}\big{)}\) & \(\frac{(p-1)^{2}}{4}\mathcal{N}_{f}(j)\mathcal{N}_{g}(j)\) \\ \hline \end{tabular} \end{table} Table 3: The weight distribution of \(C_{D_{f,g}}\) in Theorem 1 when \(l_{f}=2\) and \(p\equiv 1\pmod{8}\). distribution listed in Table 5. If \(l_{f}=p-1\), then \(C_{D_{f,g}}\) is a four-weight \([n,2m]\) linear code with its weight distribution listed in Table 6. Otherwise if \(l_{f}=2\), then \(C_{D_{f,g}}\) is a four-weight \([n,2m]\) linear code with its weight distribution listed in Table 7 when \(p\equiv 1\pmod{8}\), and in Table 8 when \(p\equiv 5\pmod{8}\). Here we set \(n=p^{2m-1}-1+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\) for brevity._ Proof.: The length of the code \(C_{D_{f,g}}\) comes from Lemma 10. For \((a,b)\in\mathbb{F}_{q}^{2}\backslash\{(0,0)\}\), the weight \(\mathtt{wt}(\mathbf{c}(a,b))=n+1-N_{0}\) can be obtained from Lemma 14. To be more explicit, when \((a,b)\notin\mathcal{S}_{f}\times\mathcal{S}_{g}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=(p-1)\big{(}p^{2m-2}+(p-1)\varepsilon_{f} \varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}.\] By Lemma 4, the frequency of such codewords equals \(p^{2m}-p^{\gamma}\) since \(f,\,g\in\mathrm{WRP}\). When \((a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}\backslash\{(0,0)\}\), we will discuss the following four different cases. (1) When \(l_{f}=\frac{p-1}{2}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)p^{2m-2},&\mathcal{T}(0)-1\text { times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\big{)},&(p -1)\mathcal{T}(c)\text{ times},\end{cases}\] where \(\mathcal{T}(0)\) and \(\mathcal{T}(c)\) are given in Lemma 9 for \(c\neq 0\). This gives the weight distribution in Table 5. \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \((p-1)p^{2m-2}\) & \(p^{2m}-1-E_{3}-E_{4}-\sum_{u,v\in\mathbb{F}_{p}^{*}}\mathcal{N}_{f}(u) \mathcal{N}_{g}(v)\) \\ \((p-1)\big{(}p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-3}\big{)}\) & \(E_{3}\) \\ \((p-1)p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-3}\eta(u)\Big{(}I_{4 }\big{(}\frac{v}{u}\big{)}-\eta\big{(}\frac{v}{u}\big{)}\Big{)}\) & \(\mathcal{N}_{f}(u)\mathcal{N}_{g}(v)\) \\ \multicolumn{2}{c}{for all \(u,v\in\mathbb{F}_{p}^{*}\)} \\ \hline \end{tabular} \end{table} Table 4: The weight distribution of \(C_{D_{f,g}}\) in Theorem 1 when \(l_{f}=2\) and \(p\equiv 5\pmod{8}\). (2) When \(l_{f}=p-1\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)p^{2m-2},&\mathcal{N}_{f}(0) \mathcal{N}_{g}(0)-1\text{ times},\\ (p-1)\big{(}p^{2m-2}+\frac{1}{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau -2}\big{)},&F_{1}\text{ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\big{)},&F _{2}\text{ times},\end{cases}\] where we define \[F_{1}=\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{\star}(a)\neq 0,g^{ \star}(b)=\pm f^{\star}(a)\}=2\sum_{c\in\mathbb{F}_{p}^{*}}\mathcal{N}_{f}(c) \mathcal{N}_{g}(c),\] \[F_{2}=p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-F_{1}.\] Thus we obtain the weight distribution in Table 6. (3) When \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), we have \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)p^{2m-2},&\mathcal{N}_{f}(0) \mathcal{N}_{g}(0)-1\text{ times},\\ (p-1)p^{2m-2}+(p-3)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2},&F_{3} \text{ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\big{)},& F_{4}\text{ times},\end{cases}\] where \[F_{3} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:f^{\star}(a)g^{ \star}(b)\in S_{q}\}=\frac{(p-1)^{2}}{4}(\mathcal{N}_{f}(i)\mathcal{N}_{g}(i) +\mathcal{N}_{f}(j)\mathcal{N}_{g}(j)),\] \[F_{4} =p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-F_{3},\] for \(i\in S_{q}\) and \(j\in N_{sq}\). This implies the weight distribution listed in Table 7. (4) When \(l_{f}=2\) and \(p\equiv 5\pmod{8}\), we get \[\mathtt{wt}(\mathbf{c}(a,b))=\begin{cases}(p-1)p^{2m-2},&\mathcal{N}_{f}(0) \mathcal{N}_{g}(0)-1\text{ times},\\ (p-1)p^{2m-2}+(p-5)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2},&F_{5} \text{ times},\\ (p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\big{)},& F_{6}\text{ times},\end{cases}\] where we write \[F_{5} =\#\{(a,b)\in\mathcal{S}_{f}\times\mathcal{S}_{g}:\frac{g^{\star }(b)}{f^{\star}(a)}\in C_{2}^{(4,p)}\}\] \[=\frac{(p-1)^{2}}{8}(\mathcal{N}_{f}(i)\mathcal{N}_{g}(i)+ \mathcal{N}_{f}(j)\mathcal{N}_{g}(j))=\frac{1}{2}F_{3},\] \[F_{6}=p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-\frac{1}{2}F_{3},\] for \(i\in S_{q}\) and \(j\in N_{sq}\). Thus the result in Table 8 is derived. **Theorem 3**.: _Suppose that \(f,\,g\in\mathrm{WRPB}\) with \(l_{g}=\frac{p-1}{2}\). Let \(s+t\) be even. If \(l_{f}=\frac{p-1}{2}\), then \(C_{D_{f,g}}\) is a three-weight \([p^{2m-1}-1,2m]\) linear code with its weight distribution listed in Table 9. If \(l_{f}=p-1\), then \(C_{D_{f,g}}\) is a four-weight \(1,2m\)] linear code with its weight distribution listed in Table 10. Otherwise if \(l_{f}=2\), then \(C_{D_{f,g}}\) is a four-weight \([p^{2m-1}-1,2m]\) linear code with its weight distribution listed in Table 11 when \(p\equiv 1\pmod{8}\), and in Table 12 when \(p\equiv 5\pmod{8}\)._ Proof.: Notation that \((0,0)\) is not in \(\mathcal{S}_{f}\times\mathcal{S}_{g}\) since \(f\), \(g\in\mathrm{WRPB}\). This theorem can be derived in the same way as Theorem 2 by using Lemmas 4, 8, 9, 10 and 15. We omit the details here. **Remark 2**.: In Theorems 1, 2 and 3, we dealt with the code \(C_{D_{f,g}}\) for \(f,g\in\mathrm{WRP}\) or \(f,g\in\mathrm{WRPB}\) with \(l_{f}\in\{2,\frac{p-1}{2},p-1\}\) and \(l_{g}=\frac{p-1}{2}\), where \(p\equiv 1\pmod{4}\). However, some of the results coincide with the known ones in the literature. Specifically, when \(f,g\in\mathrm{WRP}\), the weight distributions in Tables 1, 5 and 7 coincide with the results of Tables 3, 4 and 5 in [18], respectively. If we set \(t=s\) in Tables 5 and 7, then we get the results of Theorem 4 in [4]. \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \((p-1)p^{2m-2}\) & \(\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-1\) \\ \((p-1)p^{2m-2}+(p-5)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\) & \(\frac{1}{2}F_{3}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\big{)}\) & \(p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-\frac{1}{2}F_{3}\) \\ \((p-1)\big{(}p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(p^{2m}-p^{\gamma}\) \\ \hline \end{tabular} \end{table} Table 8: The weight distribution of \(C_{D_{f,g}}\) in Theorem 2 when \(l_{f}=2\) and \(p\equiv 5\pmod{8}\). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \((p-1)\big{(}p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}{}^{-4} \big{)}\) & \(p^{\gamma-1}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\gamma-2}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}{}^{-4}\big{)}\) & \((p-1)\big{(}p^{\gamma-1}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\gamma-2}\big{)}\) \\ \((p-1)p^{2m-2}\) & \(p^{2m}-p^{\gamma}-1\) \\ \hline \end{tabular} \end{table} Table 9: The weight distribution of \(C_{D_{f,g}}\) in Theorem 3 when \(l_{f}=\frac{p-1}{2}\). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \((p-1)\big{(}p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)\) \\ \((p-1)\big{(}p^{2m-2}-\frac{p-2}{2}\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4} \big{)}\) & \(F_{1}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-F_{1}\) \\ \((p-1)p^{2m-2}\) & \(p^{2m}-p^{\gamma}-1\) \\ \hline \end{tabular} \end{table} Table 10: The weight distribution of \(C_{D_{f,g}}\) in Theorem 3 when \(l_{f}=p-1\). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \((p-1)\big{(}p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)\) \\ \((p-1)p^{2m-2}-(p+1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\) & \(F_{3}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-F_{3}\) \\ \((p-1)p^{2m-2}\) & \(p^{2m}-p^{\gamma}-1\) \\ \hline \end{tabular} \end{table} Table 11: The weight distribution of \(C_{D_{f,g}}\) in Theorem 3 when \(l_{f}=2\) and \(p\equiv 1\) (mod 8). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \((p-1)\big{(}p^{2m-2}-(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)\) \\ \((p-1)p^{2m-2}-(3p+1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\) & \(\frac{1}{2}F_{3}\) \\ \((p-1)\big{(}p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\big{)}\) & \(p^{\gamma}-\mathcal{N}_{f}(0)\mathcal{N}_{g}(0)-\frac{1}{2}F_{3}\) \\ \((p-1)p^{2m-2}\) & \(p^{2m}-p^{\gamma}-1\) \\ \hline \end{tabular} \end{table} Table 12: The weight distribution of \(C_{D_{f,g}}\) in Theorem 3 when \(l_{f}=2\) and \(p\equiv 5\) (mod 8). When \(f,g\in\mathrm{WRPB}\), the weight distributions in Tables 9 and 11 coincide with the results of Tables 6 and 7 in [18], respectively. ### The punctured code In the following, we study the punctured code from \(C_{D_{f,g}}\) by deleting some coordinates of each codeword. As we can see from Tables 1, 3, 5 and 9, the length and each nonzero Hamming weight have \(p-1\) as a common divisor. This suggests that they can be punctured into shorter ones. Let \(f\in\mathrm{WRP}\) or \(f\in\mathrm{WRPB}\). For any \(x\in\mathbb{F}_{q}\), we obtain \(f(x)=0\) if and only if \(f(zx)=0\) for all \(z\in\mathbb{F}_{p}^{*}\), since \(f(zx)=z^{h}f(x)\) for an even integer \(h\) with \(\gcd(h-1,p-1)=1\). Thus we can select a subset \(\overline{D}_{f,g}=\{\overline{(x,y)}:(x,y)\in D_{f,g}\}\) from \(D_{f,g}\) in (1.3), such that \(\bigcup_{z\in\mathbb{F}_{p}^{*}}z\overline{D}_{f,g}=D_{f,g}\) forms a partition of \(D_{f,g}\). Hence, we get the punctured code \(C_{\overline{D}_{f,g}}^{\perp}\) from \(C_{D_{f,g}}\). Moreover, the code \(C_{\overline{D}_{f,g}}\) is projective since the minimum distance of its dual \(C_{\overline{D}_{f,g}}^{\perp}\) is at least \(3\) as checked in [4]. We can also find some optimal codes when they meet certain specific conditions. The following results related to the weight distributions of \(C_{\overline{D}_{f,g}}\) follow directly from Tables 1, 3, 5 and 9, respectively. Remember that \(\tau=2m+s+t\) and \(\gamma=2m-s-t\). **Corollary 1**.: _Suppose that \(f,\,g\in\mathrm{WRP}\) or \(f,\,g\in\mathrm{WRPB}\) with \(l_{g}=\frac{p-1}{2}\). Let \(s+t\) be odd and \(\bar{d}^{\perp}\) be the minimum distance of \(C_{\overline{D}_{f,g}}^{\perp}\). Then \(\bar{d}^{\perp}\geqslant 3\). Moreover, if \(l_{f}=\frac{p-1}{2}\), then \(C_{\overline{D}_{f,g}}\) is a three-weight \([\frac{p^{2m-1}-1}{p-1},2m]\) linear code with its weight distribution listed in Table 13, and if \(l_{f}=2\) and \(p\equiv 1\pmod{8}\), then \(C_{\overline{D}_{f,g}}\) is a five-weight \([\frac{p^{2m-1}-1}{p-1},2m]\) linear code with its weight distribution listed in Table 14, where \(E_{3}\) and \(E_{4}\) are computed in Theorem 1._ **Corollary 2**.: _Suppose that \(f,\,g\in\mathrm{WRP}\) with \(l_{f}=l_{g}=\frac{p-1}{2}\) and \(s+t\) is even. Let \(\bar{d}^{\perp}\) be the minimum distance of \(C_{\overline{D}_{f,g}}^{\perp}\). Then \(\bar{d}^{\perp}\geqslant 3\) and \(C_{\overline{D}_{f,g}}\) is a three-weight \([\frac{p^{2m-1}-1}{p-1}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2},2m]\) linear code with its weight distribution listed in Table 15. Moreover, the code \(C_{\overline{D}_{f,g}}\) achieves Griesmer bound if \(\tau=4\) and \(\varepsilon_{f}\varepsilon_{g}=-1\)._ \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \(p^{2m-2}\) & \(p^{2m}-(p-1)p^{\gamma-1}-1\) \\ \(p^{2m-2}-\sqrt{p}^{\tau-3}\) & \(\frac{p-1}{2}\big{(}p^{\gamma-1}+\sqrt{p}^{\gamma-1}\big{)}\) \\ \(p^{2m-2}+\sqrt{p}^{\tau-3}\) & \(\frac{p-1}{2}\big{(}p^{\gamma-1}-\sqrt{p}^{\gamma-1}\big{)}\) \\ \hline \end{tabular} \end{table} Table 13: The weight distribution of \(C_{\overline{\mathcal{D}}_{f,g}}\) in Corollary 1 when \(l_{f}=\frac{p-1}{2}\). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \(p^{2m-2}\) & \(p^{2m}-1-E_{3}-E_{4}-\frac{(p-1)^{2}}{4}\big{(}\mathcal{N}_{f}(i)\mathcal{N}_{ g}(i)+\mathcal{N}_{f}(j)\mathcal{N}_{g}(j)\big{)}\) \\ \(p^{2m-2}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-3}\) & \(E_{3}\) \\ \(p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-3}\) & \(E_{4}\) \\ \(p^{2m-2}+2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}-3\) & \(\frac{(p-1)^{2}}{4}\mathcal{N}_{f}(i)\mathcal{N}_{g}(i)\) \\ \(p^{2m-2}-2\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau}-3\) & \(\frac{(p-1)^{2}}{4}\mathcal{N}_{f}(j)\mathcal{N}_{g}(j)\) \\ \hline \end{tabular} \end{table} Table 14: The weight distribution of \(C_{\overline{\mathcal{D}}_{f,g}}\) in Corollary 1 when \(l_{f}=2\) and \(p\equiv 1\pmod{8}\). \begin{table} \begin{tabular}{l l} \hline weight & frequency \\ \hline \(0\) & \(1\) \\ \(p^{2m-2}\) & \(p^{\gamma-1}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\gamma-2}-1\) \\ \(p^{2m-2}+\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-2}\) & \((p-1)(p^{\gamma-1}-\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\gamma-2})\) \\ \(p^{2m-2}+(p-1)\varepsilon_{f}\varepsilon_{g}\sqrt{p}^{\tau-4}\) & \(p^{2m}-p^{\gamma}\) \\ \hline \end{tabular} \end{table} Table 15: The weight distribution of \(C_{\overline{\mathcal{D}}_{f,g}}\) in Corollary 2. **Corollary 3**.: _Suppose that \(f,\,g\in\mathrm{WRPB}\) with \(l_{f}=l_{g}=\frac{p-1}{2}\) and \(s+t\) is even. Let \(\bar{d}^{\perp}\) be the minimum distance of \(C^{\perp}_{\overline{D}_{f,g}}\). Then \(\bar{d}^{\perp}\geqslant 3\) and \(C_{\overline{D}_{f,g}}\) is a three-weight \([\frac{p^{2m-1}-1}{p-1},2m]\) linear code with its weight distribution listed in Table 16. Moreover, the code \(C_{\overline{D}_{f,g}}\) achieves Griesmer bound if \(\tau=6\) and \(\varepsilon_{f}\varepsilon_{g}=-1\)._ **Example 1**.: Let \(f,g:\mathbb{F}_{5^{4}}\rightarrow\mathbb{F}_{5}\) be defined as \(f(x)=\mathrm{Tr}(x^{6})\) and \(g(y)=\mathrm{Tr}(y^{26}-y^{2})\). Then \(f,\,g\in\mathrm{WRP}\) with \(s=t=2\), \(\varepsilon_{f}=-1\), \(\varepsilon_{g}=1\) and \(l_{f}=l_{g}=2\). Their Walsh transforms satisfy \(\widehat{\chi}_{f}(\alpha)\in\{0,-5^{3}\zeta_{5}^{f^{\star}(\alpha)}\}\) and \(\widehat{\chi}_{g}(\beta)\in\{0,5^{3}\zeta_{5}^{g^{\star}(\beta)}\}\), where \(\alpha,\beta\in\mathbb{F}_{5^{4}}\) and \(f^{\star}(0)=g^{\star}(0)=0\). Hence \(C_{D_{f,g}}\) is a three-weight code with parameters \([65624,8,50000]\) and the weight enumerator \(1+520z^{50000}+390000z^{52500}+104z^{62500}\). Its punctured code \(C_{\overline{D}_{f,g}}\) has parameters \([16406,8,12500]\) and the weight enumerator \(1+520z^{12500}+390000z^{13125}+104z^{15625}\). **Example 2**.: Let \(f,g:\mathbb{F}_{5^{3}}\rightarrow\mathbb{F}_{5}\) be defined as \(f(x)=\mathrm{Tr}(x^{6}+x^{2})\) and \(g(y)=\mathrm{Tr}(\theta y^{6}+\theta^{3}y^{2})\) for a primitive element \(\theta\) of \(\mathbb{F}_{5^{3}}^{*}\). Then \(f,\,g\in\mathrm{WRP}\) with \(s=0\), \(t=1\), \(\varepsilon_{f}=-1\), \(\varepsilon_{g}=1\), \(l_{f}=l_{g}=2\), \(\widehat{\chi}_{f}(\alpha)\in\{-\sqrt{5}^{3}\zeta_{5}^{f^{\star}(\alpha)}\}\) and \(\widehat{\chi}_{g}(\beta)\in\{0,5^{2}\zeta_{5}^{g^{\star}(\beta)}\}\), where \(\alpha,\beta\in\mathbb{F}_{5^{3}}\) and \(f^{\star}(0)=g^{\star}(0)=0\). Actually, the function \(f\) is quadratic bent and its Walsh transform satisfies \(|\widehat{\chi}_{f}(\alpha)|^{2}=125\). Hence \(C_{D_{f,g}}\) is a three-weight code with parameters \([3124,6,2400]\) and the weight enumerator \(1+1300z^{2400}+13124z^{2500}+1200z^{2600}\). Its punctured code \(C_{\overline{D}_{f,g}}\) has parameters \([781,6,600]\) and the weight enumerator \(1+1300z^{600}+13124z^{625}+1200z^{650}\). **Example 3**.: Let \(f,g:\mathbb{F}_{5^{2}}\rightarrow\mathbb{F}_{5}\) be defined as \(f(x)=\mathrm{Tr}(x^{2})\) and \(g(y)=\mathrm{Tr}(\theta y^{2}-\theta y^{6})\) for a primitive element \(\theta\) of \(\mathbb{F}_{5^{2}}^{*}\). Then \(f,g\) are quadratic bent functions in the set WRP, with \(s=t=0\), \(\varepsilon_{f}=-1\), \(\varepsilon_{g}=1\), \(l_{f}=l_{g}=2\), \(\widehat{\chi}_{f}(\alpha)\in\{-5\zeta_{5}^{f^{\star}(\alpha)}\}\) and \(\widehat{\chi}_{g}(\beta)\in\{5\zeta_{5}^{g^{\star}(\beta)}\}\), where \(\alpha,\beta\in\mathbb{F}_{5^{2}}\) and \(f^{\star}(0)=g^{\star}(0)=0\). Then the code \(C_{D_{f,s}}\) is a two-weight code with parameters \([104,4,80]\) and the weight enumerator \(1+520z^{80}+104z^{100}\). Its punctured code \(C_{\overline{D}_{f,g}}\) has parameters \([26,4,20]\) and the weight enumerator \(1+520z^{20}+104z^{25}\). The punctured code is optimal with respect to the Griesmer bound. ## 5 Minimality of the codes and their applications Any linear code can be applied to design secret sharing schemes by considering the access structure. However, the access structure based on a linear code is usually very complicated, and only can be determined exactly in several specific cases. One such case is when the code is minimal. A linear code \(C\) over \(\mathbb{F}_{p}\) is called minimal if every nonzero codeword \(\mathbf{c}\) of \(C\) solely covers its scalar multiples \(z\mathbf{c}\) for \(z\in\mathbb{F}_{p}^{*}\). In 1998, Ashikhmin and Barg [1] provided a well-known criteria for minimal linear codes. **Lemma 16**.: _(Ashikhmin-Barg Bound [1]) Let \(C\) be a linear code over \(\mathbb{F}_{p}\). Then all nonzero codewords of \(C\) are minimal, provided that_ \[\frac{w_{min}}{w_{max}}>\frac{p-1}{p},\] _where \(w_{min}\) and \(w_{max}\) stand for the minimum and maximum nonzero weights in \(C\), respectively._ Now we will show under what circumstances the constructed linear codes are minimal according to Lemma 16. **Theorem 4**.: _We have the following bounds on parameters of the code \(C_{D_{f,g}}\)._ (1) _The linear codes described in Tables 1,2 and 3 are minimal provided when \(\varepsilon_{f}\varepsilon_{g}\in\{\pm 1\}\) and \(2m-s-t\geqslant 5\)._ (2) _The linear codes described in Tables 5, 6 7 and 8 are minimal provided when \(\varepsilon_{f}\varepsilon_{g}=1\) and \(2m-s-t\geqslant 4\), or \(\varepsilon_{f}\varepsilon_{g}=-1\) and \(2m-s-t\geqslant 6\)._ (3) _The linear codes described in Tables 9, 10, 11 and 12 are minimal provided when \(\varepsilon_{f}\varepsilon_{g}\in\{\pm 1\}\) and \(2m-s-t\geqslant 4\)._ **Remark 3**.: Our punctured codes \(C_{\overline{D}_{f,g}}\) are minimal for almost all cases. It should be noticed that the minimum distance of \(C_{D_{f,g}}^{\perp}\) equals 2 since there are two linearly dependent entries in each codeword in \(C_{D_{f,g}}\). So under the framework stated in [6], the minimal codes described in Theorems 1, 2 and 3 can be employed to construct high democratic secret sharing schemes with new parameters. The punctured codes are projective and minimal, as we have discussed previously. So they are also suitable for secret sharing schemes. The projective three-weight codes in Tables 13, 15 and 16 can be applied to design association schemes [2]. ## 6 Conclusion The paper studied the construction of linear codes using defining set from two weakly regular plateaued functions with index \(\frac{p-1}{2}\) for \(p\equiv 1\pmod{4}\), and hence, this is an extension of the results in [4], [18] and [21]. The punctured codes were also investigated and we found optimal codes among them. Moreover, our codes are suitable for designing association schemes and secret sharing schemes.
2306.04040
FedVal: Different good or different bad in federated learning
Federated learning (FL) systems are susceptible to attacks from malicious actors who might attempt to corrupt the training model through various poisoning attacks. FL also poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups. Traditional methods used to address such biases require centralized access to the data, which FL systems do not have. In this paper, we present a novel approach FedVal for both robustness and fairness that does not require any additional information from clients that could raise privacy concerns and consequently compromise the integrity of the FL system. To this end, we propose an innovative score function based on a server-side validation method that assesses client updates and determines the optimal aggregation balance between locally-trained models. Our research shows that this approach not only provides solid protection against poisoning attacks but can also be used to reduce group bias and subsequently promote fairness while maintaining the system's capability for differential privacy. Extensive experiments on the CIFAR-10, FEMNIST, and PUMS ACSIncome datasets in different configurations demonstrate the effectiveness of our method, resulting in state-of-the-art performances. We have proven robustness in situations where 80% of participating clients are malicious. Additionally, we have shown a significant increase in accuracy for underrepresented labels from 32% to 53%, and increase in recall rate for underrepresented features from 19% to 50%.
Viktor Valadi, Xinchi Qiu, Pedro Porto Buarque de Gusmão, Nicholas D. Lane, Mina Alibeigi
2023-06-06T22:11:13Z
http://arxiv.org/abs/2306.04040v1
# FedVal: Different good or different bad in federated learning ###### Abstract Federated learning (FL) systems are susceptible to attacks from malicious actors who might attempt to corrupt the training model through various poisoning attacks. FL also poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups. Traditional methods used to address such biases require centralized access to the data, which FL systems do not have. In this paper, we present a novel approach _FedVal_ for both robustness and fairness that does not require any additional information from clients that could raise privacy concerns and consequently compromise the integrity of the FL system. To this end, we propose an innovative score function based on a server-side validation method that assesses client updates and determines the optimal aggregation balance between locally-trained models. Our research shows that this approach not only provides solid protection against poisoning attacks but can also be used to reduce group bias and subsequently promote fairness while maintaining the system's capability for differential privacy. Extensive experiments on the CIFAR-10, FEMNIST, and PUMS ACSIncome datasets in different configurations demonstrate the effectiveness of our method, resulting in state-of-the-art performances. We have proven robustness in situations where 80% of participating clients are malicious. Additionally, we have shown a significant increase in accuracy for underrepresented labels from 32% to 53%, and increase in recall rate for underrepresented features from 19% to 50%. ## 1 Introduction Federated Learning (FL) is a novel privacy-preserving machine learning paradigm that collaboratively trains a model across many devices, each using its own local data. As the popularity of machine learning (ML) has exploded in recent years, one of the most significant bottlenecks for ML projects has been the collection of, and access to large, high-quality datasets [36]. However, with the growing concerns around privacy and data regulations [42, 41], finding effective ways to gather data while preserving privacy has become increasingly important [36]. FL offers a promising approach to address these challenges, and it has been applied in a range of industries, including mobile internet, healthcare, finance, and insurance [25, 35, 10]. Essentially, FL allows multiple participating devices or systems to collaboratively train an ML model while keeping their data on their own devices rather than gathering it in a single location, helping protect the privacy of the individuals or organizations. In FL, only the model parameters are shared and transmitted back to a server, where they are typically aggregated using a weighted averaging function [30, 17]. However, this simple approach may not be sufficient in many situations due to the decentralized nature of FL, which introduces challenges such as byzantine failures [27], data heterogeneity [28, 19, 34, 47], security and privacy preservation of participants' data [18, 4, 1]. Firstly, machine learning models trained using FL are vulnerable to byzantine failures such as faulty sensors, communication noise, and poisoning attacks [24]. These failures can be caused by compromised clients contributing malicious or faulty global model updates [38]. For example, in the context of training an ML model for autonomous cars, faulty sensors, inconsistent connection, or malicious attacks attempting to manipulate the model to gain an advantage could pose a threat. FL models are also vulnerable to attacks on the databases used for training [33], as the central server cannot access the data to verify its integrity. Protecting the ML model against these types of failures is important in FL, particularly because of its decentralized nature, which allows for the inclusion of potentially malicious clients in the training process. In addition, when working with FL systems, it is typical for the data distributions between clients not to be independent and identically distributed (non-IID), which can pose challenges for the model to effectively and fairly learn from the data [23, 12]. Consider the scenario of using FL to train an ML model on data collected from cars. The data distributions gathered by different cars will likely vary significantly due to differences in driving conditions such as location (e.g., rural areas versus cities) and climate. As a direct result of data heterogeneity, the performance of a model trained using FL might degrade when compared to its centralized counterpart [23, 12, 28, 19, 32]. Furthermore, one of the main benefits of FL is the ability to train ML models on large amounts of data without having to access or handle sensitive information directly. This is particularly important in the context of data privacy regulations such as the General Data Protection Regulation (GDPR) [41], which require organizations to protect the personal data of individuals from unauthorized access or use. However, it is important to note that even with federated learning, there is still a risk that sensitive information could be inadvertently revealed or reconstructed from the model's parameters or gradients [4, 48, 46]. Since federated learning operates by communicating the gradients and the model parameter updates between the central server and the participating clients during each communication round, it is possible to learn sensitive and private information about the underlying data through these gradients and model updates. To address this concern, researchers have developed techniques for ensuring the privacy and security of FL systems, such as differential privacy [1] and secure multi-party computation [4], which make it possible to prove that the sensitive data can not be reconstructed or compromised during the training process. It is important that the methods provided to handle non-IID issues and poisoning attacks are compatible with such privacy-preserving techniques so we can achieve the highest performance while securing access to private data. In this paper, we examine the overarching threat model posed by various issues, focusing on realistic Federated Learning settings that address privacy concerns and account for heterogeneity issues. Our primary emphasis is on mitigating large-scale byzantine failures and poisoning attacks, which can have significant consequences. We examine state-of-the-art solutions to these open problems in federated learning. We also evaluate the effectiveness of these solutions when applied in combination with differential privacy techniques such as norm clipping and noise injection [1]. Our objective is to uncover potential avenues for enhancing the interrelated aspects of security, robustness, and fairness in FL systems and present innovative solutions to address these challenges simultaneously. **Contributions** To this end, we propose _FedVal_, a novel adaptive server-side validation solution to address the challenges of both poisoning attacks and data heterogeneity. _FedVal_ involves scaling the impact of client updates based on their performance to the extent that they have learned from the data. We argue that this approach is more effective than other existing methods that have been proposed, and we demonstrate its efficacy through comprehensive experiments conducted in different FL settings. Our experiments show that our method can provide strong protection against poisoning attacks, even under conditions where 80% of clients are malicious and in the presence of differential privacy techniques. Additionally, _FedVal_ is able to handle the fairness challenges posed by non-IID data effectively, for example by completely salvaging quantity distribution skew situations where other solutions struggle. We have also demonstrated that _FedVal_ exhibits computational efficiency with the potential to deliver substantial robustness using only 10 validation elements per class. Overall, our results highlight the importance of considering the interplay between poisoning attacks, data heterogeneity, and privacy-preserving techniques in the design of FL systems. We believe our work will inspire further research in this direction and contribute to the development of more robust and secure FL systems. ## 2 Background & Related work In this Section, we provide an overview of the federated learning paradigm (Section 2.1), poisoning attacks (Section 2.2), heterogeneity issues (Section 2.3), and differential privacy (Section 2.4). Possible solutions to poisoning attacks are presented in Sections 2.5 and 2.6, while solutions to data heterogeneity issues are found in Section 2.7. ### Federated learning FL uses a decentralized paradigm that operates over communication rounds, as illustrated in Figure 1. At the beginning of each communication round \(t\), a fraction \(r\) of \(N\) clients is selected by the central server to participate in training. The server sends the current model's parameters \(\theta^{\prime}_{g}\) to the selected clients \(S_{t}\), which will then apply a training function \(f\) on their local data \(D_{d}\) to update their local models to \(\theta^{i+1}_{d}=f(\theta^{\prime}_{g},D_{d})\). Finally, the updated models \(\theta^{i+1}_{d}\) are sent back to the server, which aggregates them into the next global model \(\theta^{i+1}_{g}\). There are several common methods to aggregate the local model updates received from clients. The most commonly applied approach is to perform a weighted averaging of the models according to the number of local data samples used for training [2, 23, 34, 32]: \[\theta^{\prime+1}_{g}=\sum_{d}^{S_{t}}\frac{\theta^{\prime+1}_{d}|D_{d}|}{ \sum_{d}|D_{d}|} \tag{1}\] Once the global model has been updated, the central server can start a new communication round. ### Poisoning attacks FL is vulnerable to poisoning attacks, in which a malicious client attempts to corrupt the global model by returning an altered model. Poisoning attacks can occur in a number of ways, such as data poisoning or model poisoning [38]. In addition, these attacks can be classified as targeted or untargeted, depending on whether the adversary aims to compromise the model on a specific function, such as a label or writing style, or seeks to cause general harm. Regarding poisoning attacks in FL, it is common for a subset of clients to be compromised and collaborate in an attempt to corrupt the global model while avoiding detection. For example, if an adversary has full access to both the model and the data between training rounds, they will likely perform a model poisoning attack. These attacks are particularly potent as they enable the attacker to manipulate the model in a way that causes maximum damage after aggregation while still evading detection [38, 37]. In contrast, if the adversary only has access to the data, such as by breaching a database, this limits the type of attacks they can perform. In most research, data poisoning attacks are usually made by label flipping [25, 39, 24, 31]. Notably, for data poisoning attacks, the percentage of malicious clients is likely to be higher as it is easier to only gain access to the data. In addition to the distinction between data and model poisoning, attacks can also be targeted or untargeted. Targeted attacks aim to compromise a specific function of the model, while untargeted attacks aim to harm the model in any possible way [38]. Research has demonstrated that even a single compromised client is sufficient to completely compromise the global model if no protection is in place [13, 38, 44]. As such, it is crucial to develop techniques for detecting and defending against poisoning attacks in FL systems to ensure the integrity and robustness of the global model. In this work, we employ two state-of-the-art poisoning attacks based on the methods proposed by Sun et al. [39] and Shejwalkar et al. [38]. Sun et al. introduced a backdoor attack using label-swapping techniques, in which attackers aim to cause the model to misclassify a specific label with distinct features as another label. Shejwalkar et al. proposed a data-dependent model poisoning attack using projected gradient ascent to manipulate weights in the most harmful direction. Their attack leverages knowledge of benign gradients to specifically target the multi-Krum defense. Our approach is a more generalized variant of this type of attack, which instead statically scales the norms of the attacking clients by a factor that inflicts the most damage to the model. ### Data Statistical Heterogeneity Statistical heterogeneity in federated learning refers to the fact that individual clients might have different data distributions, which can make decentralized training more challenging. This work focuses on one of the most challenging tasks derived from data heterogeneity, i.e., ensuring group fairness. Group fairness refers to not biasing the global model toward any specific demographic group having a common data distribution, such as race or gender [12]. Traditional ML methods for handling fairness, such as re-sampling [7] and re-weighting [21], require the data to be centralized, which is not possible in a federated setting [18]. Therefore, new approaches must be developed to ensure that the model is fair and unbiased concerning different groups of users, such as those defined by demographic characteristics or geographic locations. ### Differential privacy Recent studies in the field of ML have demonstrated the potential for extracting information or reconstructing the training data from the model updates in FL [1, 40, 4]. This presents a significant challenge when working with sensitive data that is intended to be private. One of the key use cases for FL is the ability to analyze sensitive data without violating data privacy regulations. To ensure the security of these private data, it is crucial to implement mechanisms to protect it from unauthorized access. One way to achieve this is by incorporating differential privacy (DP) techniques into the FL framework. These methods typically involve working with the norms \(\|\Delta\theta^{t+1}\|=\|\theta^{t}-\theta^{t+1}\|\) of client updates, by methods such as clipping and adding noise [1, 43], to prevent the extraction of important information from the clients. There is always a trade-off between model performance and model security, and carefully tuning the hyper-parameter of DP is required to ensure that the training model is effectively protected while maintaining a reasonable model performance. To find this tuning is a topic under ongoing research [18]. ### Defenses preliminary Defenses against poisoning attacks in FL follow a common theme of removing a set number of outlier weights or outlier clients [37, 38, 31, 3, 13, 44, 26, 25]. We make the argument that this approach is not feasible since the number of malicious clients cannot be known and will likely lead to issues where vital information is lost. Methods for estimating a possible percentage of malicious clients can be found in literature [31, 38], but these are limited to specific cases and provide no tangible guarantees. In Section A, we conducted a mathematical investigation of how high the ceiling is for a potential number of malicious clients present in a round for certain situations. We conclude that even with the knowledge of how many malicious actors might be present, we must still ensure robustness for many Figure 1: Illustration of FL in centralized setting: model parameters are sent back and forth between a server and clients. more clients than there are present due to the randomness of client selection. By this argument and the argument that the number of malicious clients can not be known, we assume that the threshold for how many potentially malicious clients or weights must be removed will need to be a relatively high percentage. Firstly, because we do not know the number of potentially malicious clients, and secondly, because the threshold must be quite a bit higher than the potential number of malicious clients. The main problem with this is that using specific criteria to remove a high number of clients each round will cause issues. For example, this can lead to underrepresented clients with unique data never being allowed participation in the making of the model, which could potentially lead to a less accurate and fair model. Therefore, it is crucial to find a more effective approach to defending against poisoning attacks in FL. ### Existing defenses In FL, several defenses have been proposed to protect against poisoning attacks. These defenses can be broadly grouped into three categories: loss function-based rejection defenses, outlier-based defenses, and norm-bounding defenses. Loss Function-based Rejection (LFR) [13] is one such defense, which validates each client by using a server-side validation dataset. Clients that have the most amount of loss are removed from the aggregation process. LFR has been shown to be one of the stronger defenses against poisoning attacks, however, it suffers from the issue of needing prior knowledge of the potential number of malicious clients. This can be a significant drawback as mentioned earlier in this section. It can also be noted that extensively validating each client update may be computationally heavy. Another group of defenses are outlier-based defenses, some euclidean-distance-based outlier detection defenses are multi-Krum [3] and trimmed mean [44]. These defenses remove a set number of outliers either in weights or updates. The removal is based on the euclidean distance between all client updates norms. While, in theory, these approaches look promising, they run into problems when the data is non-IID and has issues against model poisoning attacks that are scaled to make the malicious client as close as possible to being flagged as an outlier. A recent review has shown that the theoretically guaranteed protection claimed by these papers does not hold under state-of-the-art model poisoning attacks and the model is vulnerable with only 10% malicious clients [38]. In addition, these defenses would likely be even less effective in FL settings that use differential privacy, as adding noise to the weights would increase the variance between benign clients even more. This would make it more difficult to identify outliers and increases the chances of mistakenly identifying benign clients as malicious. Researchers have also proposed cosine-similarity-based defenses against poisoning attacks, such as with FLTrust [6] and FoolsGold [14], which do not assume knowledge of the number of malicious clients. FLTrust proposes a defense that shares many similarities with the one we propose in our work; similar to our approach, they utilize a small server-side dataset, but instead of calculating the loss, they compare the direction of gradients with those observed during server-side training on that dataset. FLTrust also scores each client, but based on similarity instead of loss. They then weigh client updates by that score in the aggregation. Despite the initial promise of these cosine-similarity-based defenses, even under attacks with a large percentage of malicious clients, recent research has demonstrated that they are susceptible to model poisoning attacks that focus on specific key weights within the model, such as the attacks presented in the work of Kasyap et al. [20]. Other researchers have proposed using auto-encoders trained on validation data for anomaly detection as a defense mechanism against adversarial attacks, such as those described in Li et al. [25] and Li et al. [26]. These defenses also involve removing a set number of outliers, which likely will cause issues as discussed previously. We also believe that these defenses are susceptible to state-of-the-art model poisoning attacks, similar to other anomaly detection methods. Another category of outlier-based defenses capitalizes on singular-value decomposition techniques, exemplified by the approach presented by Shejwalkar et al. [37]. Despite demonstrating potential, a primary disadvantage of singular-value decomposition resides in its cubic time complexity, which constrains the complete examination of the model. As a result, investigators often resort to tactics such as random sampling of model dimensions, as illustrated by Shejwalkar et al.[37]. Considering the evolution of new attacks capable of significantly impairing model performance by merely targeting a few weights [20], we argue that singular-value decomposition defenses, like the one advanced by Shejwalkar et al. [37], may be nearing obsolescence. Norm-bounding [39] is another popular defense mechanism, which involves binding the norms for weights in client updates. The theory behind this defense is that for an attack to be successful, the attacker would need to have larger norms to move the model into an undesirable position. Binding the norms is a widespread practice that is often used in differential privacy [1] and has also been adopted by more recent poisoning defenses such as SparseFed [31], another type of outlier-based defense, that argues that all defenses should also add norm bounding as a measure to strengthen their defense. While binding the norms is effective at reducing the impact of a poisoning attack, the impact of an attack on a system that has only norm-binding as the defense measure is still relatively large [38, 39]. Additionally, we believe SparseFed might be susceptible to state-of-the-art model poisoning attacks, similar to other anomaly detection methods. In conclusion, while several defenses have been proposed to protect against poisoning attacks in FL, each has its weaknesses and limitations. Outlier-based defenses may struggle against state-of-the-art model poisoning attacks, and norm-bounding defenses may not provide a strong enough defense. On the other hand, previous loss function-based defenses, while providing a strong defense, still assume knowledge of the number of malicious clients. ### Heterogeneity solutions For issues related to data heterogeneity, a significant amount of research has been conducted on preventing client drift, such as using techniques like SCAFFOLD [19] and FedProx [28]. While these methods have proven effective in creating models that are more representative of the majority of clients, they might make it difficult in systems where de-biasing and fairness considerations are necessary, which we have investigated in section 5.5. Handling group fairness, which is commonly caused by uneven underlying data distributions [18], is an area in FL that has not been widely explored. Recent works such as FairFed [12] and the work of Du et al. [11] have begun to address this issue. Prior research on fairness in FL often employs re-weighting schemes similar to the one presented in our work. FairFed [12] proposes a solution where clients analyze their data and send back information to the central server for aggregation. However, sending additional information by clients to the central server can create risks for privacy [18]. ## 3 Methodology In this section, we present our innovative method _FedVal_. _FedVal_ attaches a weight based on bias reduction and relative performance of each client model to the aggregation of their updates, which allows a more dynamic and nuanced approach to protecting the global model. In Section 3.1 we motivate the advantages of using _FedVal_ and in Section 3.2 we delve into the design of _FedVal_. ### Motivations for _FedVal_ _FedVal_ is an innovative method that aims to protect the global model from malicious client attacks while maintaining the robustness and fairness of FL training. One key advantage of score-based methods like _FedVal_ is that they never discard good model parameters. Other solutions that commonly remove a set number of clients' updates or norms in each round [37, 31, 3, 13, 44, 26, 25] can complicate dealing with data heterogeneity issues since the outliers that are removed each round might contain vital information for the model. While the primary goal of _FedVal_ is to protect the global model from malicious client updates, we have also incorporated a bias reducer term for further improvement from the average clients' model. The term aims to decrease the potential negative effects caused by the defense mechanism to improve the fairness of the FL system and model performance in the non-IID data distribution situation. _FedVal_ aims to give a secure solution to the interplay between fairness and robustness in FL as illustrated in an example in Figure 2. In this example, there is a total of 3 classes in the classification task, and 4 clients are selected in an FL communication round. 3 of the clients have updated the model in similar directions, but they are missing one of the three classes. One of the client models is slightly worse than the first two but has information on the class that they are missing. The last client is malicious and has sent back a completely ruined model. Figure 2 highlights the advantage of _FedVal_ as the algorithm that is able to notice the extra contribution from the third client as it has updated the model with new information that other clients are missing, hence giving it higher aggregation weight. _FedVal_ can also identify the malicious client and give it zero weight for aggregation. On the other hand, other robust aggregators would typically favor the two most similar clients and use them for the aggregation. If a standard averaging method such as FedAvg is used, the new model would be completely deteriorated by the malicious update. But even in a situation with no malicious updates, the new information from client 3 is, in most cases, lost since the norms with the specific information for class 3 get under-valued compared to those of the majority client update models. ### FedVal In this section we give a comprehensive explanation of the _FedVal_ algorithm, its scoring function, and its various implementation methods. Figure 2: Toy example of a federated learning situation with four clients being aggregated, two majority clients with skewed data, one client with information that is missing in the majority of clients, and one malicious client. The core of _FedVal_ is the score function, which is used to score each client based on their performance, as determined by a server-side validation dataset. Let \(S(\theta_{d}^{t})\) be the score for client \(d\) at round \(t\). The main purpose of the score function is to extract relevant features for the model and eliminate unwanted or redundant features. We consider a \(K\) class classification problem defined over a label space \(\mathcal{Y}=[K]\), where \([K]=\{1,...,K\}\). Conceptually the score function can be represented as a summation over different labels with a bias reducer term multiplied by a slope term. let \(\bar{\mathbf{L}}_{k}\) be mean validation loss for validation data with label \(k\), \(\bar{\mathbf{L}}_{ang}\) be the average validation loss over the whole validation set, and \(MAD\) mean absolute deviation as explained in Equation 4. \(div_{k,d}\), loss diversion term, is defined to be the difference between loss on the validation set with only label \(k\) using the local model from client \(d\) and the average loss from all clients on the validation set with only label \(k\). Apart from the dynamic parts, there is a term (\(Cs_{1}\)) that gets summed at the end, which gives an average client some baseline score, ensuring that none of the clients' contributions will be unnecessarily missed. We use \(C=3\) in our experiments, but it is worth noting that this hyper-parameter can be improved based on each case. The full formula of the score function can be found below in Equation 2, this score is later used for aggregation as demonstrated in Equation 5. \[S(\theta_{d}^{t})=\sum_{k=1}^{K}(\underbrace{\max(1,\left(\frac{\bar{\mathbf{L}}_{ k}}{\bar{\mathbf{L}}_{avg}}\right)}_{\text{bias reducer term}})*\underbrace{\frac{s_{1}*div_{k,d}}{MAD_{k}}}_{\text{slope term}}+Cs_{1}) \tag{2}\] \[div_{k,d}=\bar{\mathbf{L}}_{k}-\mathbf{L}_{k,d} \tag{3}\] In Equation 2, the score function is summed over the label space. However, it can also be extended with other dimensions that matter, such as overall loss and demographic groups. Note that the hyper-parameters \(s_{1}\) and \(s_{2}\) can be different and thus need to be altered if other dimensions are chosen. Alterations within the label space may also be desirable if fairness for certain important labels is essential. The slope term is determined by the loss diversion term and the MAD (mean absolute deviation) term. The MAD is defined as Equation 4. Assume there are \(M\) total validation samples. Then, MAD is the loss deviation of each validation sample from the average loss. \[MAD=\frac{\sum_{m=1}^{M}|\mathbf{L}_{m}-\sum_{i=1}^{M}\mathbf{L}_{i}|}{rN} \tag{4}\] Regarding the bias reducer term in Equation 2, it divides the average loss on the label \(k\) by the average loss on all labels. If the updated model performs worse for label \(k\), hence higher average loss \(\bar{\mathbf{L}}_{k}\), the bias reducer term would be above 1. Thus, the term scales the whole score higher if label \(k\) is performing worse than average. Therefore, the bias reducer term can in theory help balance the model to make the global model focus more on the label that is performing worse than other labels. An illustration of how a slope behaves in the _FedVal_ equation for a specific label can be seen in Figure 3. The two dynamic terms in Equation 2 is paired with two hyper-parameters \(s_{1}\) and \(s_{2}\). The \(s_{1}\) parameter - which is paired with the slope term - is a booster that steepens the slope. On the other hand, the \(s_{2}\) parameter - which is paired with the bias reducer term - increases how important it is for a dimension to not underperform compared to other model functionality. Something to note is that the bias reducer term has polynomial growth, which makes further underperformances matter more severely. This nuanced approach gives a potential implementer of _FedVal_ possibilities to decide which labels or dimensions should be prioritized and promote a more balanced model. The \(s_{2}\) parameter is designed to be adaptive, which is achieved by evaluating various \(s_{2}\) values on the validation data set and selecting the value that results in the minimum loss. To obtain a fair global model, which performs equally well for each class, we created a balanced validation set representative of the model's end goal. By choosing \(s_{2}\) that achieves the minimum loss, we can push the global model towards a more balanced and fairer performance. In our experiment, we initialize \(s_{2}\) to be 3, and in each round, check the set \([s_{2},s_{2}+0.5,s_{2}-0.5,s_{2}-5,s_{2}+5]\) for the optimal \(s_{2}\). The FL training implementing _FedVal_ will follow the standard communication round. At the start of each communication round, a subset of clients is selected, and current global weight parameters will be broadcast to these selected clients. Then, these selected clients will perform local training and return their updated models to the server. The server validates all clients over a validation dataset and scores them as demonstrated in Equation 2. Lastly, the model parameters will be aggregated using the scores of each client as weight according to Equation 5. The complete _FedVal_ algorithm can be seen in Figure 3: Example of how the score is set for a specific label. The slope of the graph gets steeper exponentially by how much this label is behind other labels. The effect of larger or smaller deviations between client loss for the label increases or decreases the slope. Algorithm 1 in Appendix B. \[\theta_{g}^{t+1}=\theta_{g}^{t}+\sum_{d}^{S_{t}}\frac{S(\theta_{d}^{t+1})\Delta \theta_{d}^{t+1}}{\sum_{d}^{S_{t}}S(\theta_{d}^{t+1})} \tag{5}\] The salient dependency of _FedVal_ on mean values for client updates is worth noting, as it could theoretically be prone to colluding attacks by multiple malicious clients as outlined in [31, 38]. Under such circumstances, one client could manipulate the mean, enabling other clients to subtly introduce smaller attacks. However, in the classification problem-focused experimental setting of this study, this potential vulnerability is mostly irrelevant due to the inherent limit on classification loss functions. The potential vulnerability of _FedVal_s in regression problems remains unexplored, which constitutes an area for future work. A viable approach to this issue might involve constraining the loss function within a reasonable range of values. Time complexity is a crucial factor when considering defenses that validate clients on server-side validation data. It has been previously noted that exhaustively testing each client on a validation dataset can be computationally intensive [26]. However, it is important to note that the actual time complexity of validating each client can be reduced by utilizing parallel processing techniques, which results in a time complexity of \(O(w)\) in parallel, where \(w\) represents the dimensions of the model and \(v\) represents the number of validation samples. A direct comparison for time complexity between _FedVal_ and other aggregation methods is difficult, as these other methods mainly depend on number of clients and model dimensions. However, an estimate is that _FedVal_ is on the upper middle end of computational load for robust aggregators. In Section 5.4, we will further investigate the time complexity by examining the number of validation samples required for _FedVal_ and show that _FedVal_ only requires a limited number of validation samples to be effective. In conclusion, the design of _FedVal_ will provide a relatively light-weight and flexible defense that is prepared for any potential scenario, while promoting fairness among labels and continuously striving to reduce the loss on the validation dataset during each training iteration, which can be shown in more detail from our experiments in Section 5. ## 4 Experimental Setup This section will discuss the general setups we have used in our experimental study. Federated learning is simulated with the Virtual Client Engine (VCE) of the comprehensive Flower framework [2] enabling us to scale to a large number of clients within a single machine. Datasets and hyper-parameters are detailed below. ### Datasets and Partitions In our research, we employ three widely used datasets of diverse size and complexity, namely CIFAR-10 [22], FEMNIST [5], and PUMS ACSIncome [9]. CIFAR-10 is a renowned benchmark dataset in computer vision and machine learning, consisting of 60,000 32x32 color images equally divided among 10 distinct categories. On the other hand, FEMNIST is an expanded version of the EMNIST dataset [8], partitioned among 3597 authors of the handwritten characters and digits. With over 800,000 28x28 grayscale images, FEMNIST is specifically designed to simulate a realistic federated setting where each author is represented by a client. In the FEMNIST dataset, there is a quite noticeable quantity skew across classes. Specifically, considering the division between numbers, capital letters, and lowercase letters, where lowercase letters and capital letters are underrepresented. Each number has approximately 4 times more elements than the letters, and capital letters are more common than lowercase letters, as reported in [8]. Additionally, the PUMS ACSIncome dataset [9] was also utilized. The ACSIncome dataset, based on the Public Use Microdata Sample (PUMS) from the American Community Survey (ACS), provides income-related data on individuals, making it an excellent resource for studying aspects such as demographic bias and fairness in ML applications [12, 15]. This work has employed several general federated settings for evaluations. The first setting utilized 40 clients for the CIFAR-10, with each client having a dataset of 1250 images. The experiment was conducted over 60 rounds, and 10 clients were selected at random in each round, using a fixed set seed to ensure a consistent selection of data across experiments. This pseudo-random selection of clients ensured that the same data was used for comparison purposes in each experiment. The second setting is on the FEMNIST experiments, which were performed using the 3597 authors as clients. Each client has a varying number of samples, with an average of around 225 samples per client. The FEMNIST experiment was run over 200 rounds, and in each round, 30 clients were randomly selected in the same pseudo-random fashion as before to ensure consistent data selection for comparisons. For experiments displayed in bar graphs with FEMNIST, average result over five rounds are displayed. The third setting involves experiments with the ACSIncome dataset. Here, we specifically investigated the recall rate for minority groups as a measure of fairness. We utilized 40 clients for these experiments, which were conducted over 30 rounds with 15 clients selected for each round. Client selection was done in similar pseudo-random fashion as for previous settings and average result over multiple rounds were presented. In all experimental conditions, we conduct a series of tests, opting to feature the results with the lowest performance for each respective algorithm in our study, unless otherwise stated. Our intent behind this approach is to portray precisely the influence of heterogeneity issues and poisoning attacks on a federated training system. By focusing on these results, we're able to highlight potential instabilities that would remain undetected if we merely displayed averages. To study the impact of heterogeneity, a range of techniques have been utilized to manipulate client distribution in the CIFAR-10 dataset. We follow the latent Dirichlet distribution (LDA) implemented by [23, 45, 16] where both label distribution and quantity distribution is determined by input parameter \(\alpha\). The level of heterogeneity is governed by the parameter \(\alpha\). As \(\alpha\rightarrow\infty\), partitions become more uniform (IID), and as \(\alpha\to 0\), partitions tend to be more heterogeneous. Our experimental evaluation considers both \(\alpha=0.4\) and \(\alpha=1000\) for the non-IID and IID cases. Additionally, this paper introduces a new method to examine fairness in the federated setting. This method artificially creates a situation where a less common type of client contains some vital data for the model. On the other hand, in the FEMNIST dataset, heterogeneity occurs naturally due to the unique writing styles of each author, as well as differences in label distribution and sample quantity between authors. ### Model Architecture and Training Details This work employs a similar model for both CIFAR-10 and FEMNIST. We implement a widely adopted CNN neural network for the image classification tasks for both datasets. The model used for both datasets is a convolutional neural network with 6 convolutional layers, and the kernel size used is \(3x3\) pixels. For both datasets, the models are trained with SGD, and the number of local client epochs is set to 10. Both models employ a learning rate of 0.005. In addition to employing a similar model for CIFAR-10 and FEMNIST, a different model architecture was utilized for the PUMS ACSIncome dataset. The model consists of a sequential neural network with four dense layers. It was compiled using the Adam optimizer with a learning rate of 0.0001 and trained with the binary cross-entropy loss function. Also, we set the _FedVal_ specific hyper-parameters \(s_{1}\) to be 3 for all datasets. Since \(s_{2}\) is an adaptive hyper-parameter, it starts from the 3 and is chosen adaptively in each round as explained in Section 3.2. As previously mentioned, the summation in Equation 2 can be extended with other dimensions that matter. Our implementation adds overall accumulated average loss to the summation where \(s_{1,avg}\) is set to 5 while \(s_{2}\) is redundant since the bias reducer term will always be 1 due to dividing overall loss by itself. For all experiments in the paper, the \(C\) hyper-parameter is set to 3. The implemented score function is demonstrated in Equation 6. \[S(\theta_{d}^{\prime})= \sum_{k=1}^{K}(\max(1,(\frac{\bar{\bar{L}}_{k}}{\bar{\bar{L}}_{ avg}})^{s_{2}})*\frac{s_{1,k}*div_{k,d}}{MAD_{k}}+Cs_{1,k})+ \tag{6}\] \[\frac{s_{1,avg}*div_{avg,d}}{MAD_{avg}}+Cs_{1,avg}\] In addition to this, in our experiments with ACSIncome, the scoring dimensions were additionally extend to consider recall rate across certain groups. For this dimension the \(s_{2}\) hyperparameter was statically set to 30. ### Poisoning Attacks Methods We employ two separate poisoning attacks to comprehensively cover the possible types of attacks that might occur in FL systems. The first attack is a _targeted data poisoning attack_, adapted from the method presented by Sun et al. [39], which introduces small changes to the model that affect only a limited subset of its tasks. The second attack is an _untargeted model poisoning attack_, adapted from the gradient ascent attack presented by Shejwalkar et al. [38], which seeks to reduce the global accuracy of the model while avoiding detection by the defense mechanisms in place. These types of attacks have been widely studied in the field of poisoning attacks, as documented in recent research works[38, 37, 25, 19, 39]. The underlying principle of the _targeted data poisoning attack_ is to manipulate a small subset of the model, making it challenging for defenses to detect the attack. This increases the potential impact on the targeted functionality while minimizing the risk of detection. As a data poisoning attack, it also expands the adversary's reach, allowing for more malicious actors to participate. On the other hand, the _untargeted model poisoning attack_ is designed to cause damage to the model without targeting a specific area of its functionality. With full access to the model during training rounds, the adversary can launch a more powerful attack by finding the optimal malicious direction that the defense will not identify as malicious. This attack is an untargeted _projected gradient ascent attack (PGA)_ and aims to impact the model wherever it is feasible. ### Differential Privacy Methods The impact of adding differential privacy on defenses against poisoning attacks in federated learning systems has been the subject of extensive research and discussions [18, 39]. In this work, we contribute to this body of research by incorporating differential privacy techniques into the federated learning system. We adopt the approach proposed by Andrew et al. [1] by adaptively binding norms and adding noise to each update vector. Our aim is to investigate the behavior of various defenses under these conditions and assess the reduction in the impact of many poisoning attacks [39, 31]. Our work expands upon previous research by exploring the interplay between differential privacy and defenses against poisoning attacks in federated learning systems. Due to the adaptive nature of the approach proposed by Andrew et al., a quantitative evaluation of privacy is not feasible and has been left out of this paper. ### Baselines and comparison We implement various state-of-the-art solutions as baselines from a wide range of prospective in our experiments for comprehensive comparisons. _LFR_[13] is a poisoning defense that removes a specified number of clients in each round based on their accumulated loss on a server-side validation dataset. In our experiments, 40% of clients were removed each round, as determined by the mathematical experiment in Section A and the number of malicious clients used. LFR serves as a benchmark algorithm for validation-loss-based defenses. It is worth mentioning that following the original paper, the FEMNIST dataset experiments with LFR use a heterogeneous validation dataset with the same label distribution as the training dataset. _Multi-Krum_[3], an outlier-based defense method, is a defense that removes a specified number of client updates deemed as outliers, based on the Euclidean distance between updates. As Multi-Krum (and other outlier-based defenses) struggle to detect model poisoning attacks [20, 38], the number of clients to remove for multi-Krum is chosen to be 50%. We also used _norm-bounding_[39] as a benchmark defense to mitigate the effects of poisoning attacks. This defense is integrated into our differential privacy solution [1] and was evaluated both in combination with other defenses and as a standalone measure against the attacks. We use a target quantile of 50% of norms to be bound as in [1]. Lastly, we implement _FedProx_[28]. FedProx is a widely-adopted method targeting to solve the data heterogeneity problem by preventing model drift from client updates. FedProx is implemented to investigate shortcomings and potential issues with such solutions. Experiments with FedProx follow the same protocol suggested in the original paper, which tests the hyper-parameter \(\mu\) from the range of \([1,5]\). As mentioned before, fairness and reducing group bias in federated learning is a field not well researched, and existing solutions often use methods that might not be feasible due to privacy and security issues [18]. This makes it difficult to find baseline solutions targeted at promoting fairness. _FedVal_ distinguishes itself from other existing methodologies in several crucial aspects. While multi-Krum utilizes outlier detection, _FedVal_ uncovers malicious clients by analysing accumulated loss on a server-side validation dataset, a strategy similar to that employed by LFR. Despite these similarities, _FedVal_ and LFR differ significantly in their operations. LFR relies heavily on pre-existing knowledge of the quantity of malicious nodes, as discussed in Sections 2.5 and A, can result in complications. In contrast, _FedVal_ diverges in its approach by not needing prior knowledge of number of malicious nodes. _FedVal_ leverages analyses performed on the server-side validation set to maintain model balance and identify oscillations across specific labels or dimensions. Unlike norm-bounding, which merely aims to minimize the fallout of an attack, _FedVal_ seeks to eradicate malicious client updates altogether. Furthermore, for heterogeneity issues, it stands apart from FedProx by striving to craft a model that excels by its very design, rather than simply aligning it with the majority client model. ## 5 Experiments In this section, we will present the results of our experiments and compare them with the chosen baseline solutions. We will also discuss the significance of the results and provide an extensive analysis of the performance of different algorithms. To begin with, we will present the results of the baselines with different algorithms in Section 5.1. We will then move on to the results from poisoning attacks in Section 5.2. Additionally, we will showcase an analysis of the limitations and potentials of different algorithms in Section A. Finally, we will demonstrate some results regarding fairness in the non-IID settings in Section 5.5. ### Baselines Results Figure 4 demonstrates the baseline results for all the algorithms we use. Figure 3(a) illustrates the IID test accuracy on CIFAR-10, and the results are what one would expect. Solutions that remove a set number of clients each round, such as LFR and multi-Krum have a very slight decrease in accuracy, while all other methods perform similarly. However, the results on the naturally heterogeneous dataset FEMNIST show more diverse results. All robust aggregators converge a bit slower than FedAvg and FedProx, which are methods not targeting for defense, but we can see that _FedVal_ converges to higher accuracy than other solutions, while multi-Krum and LFR converge to lower accuracy. It is worth noting that the results from FEMNIST are from IID test data, which means that, as opposed to the train data, the test data has the same amount of elements for each label. ### Poisoning attacks This section aims to evaluate the resilience of _FedVal_ against other baseline algorithms with the presence of various forms of poisoning attacks in diverse settings, including systems with differential privacy and different types of heterogeneity. To begin with, in Figure 5, we illustrate the accuracy of different baseline algorithms when the system is under a static PGA model poisoning attack by 10% of the present clients in the FEMNIST dataset. In our experiments, we have considered two different scenarios - one where noise is added and norms are cut by the value approximating 50% of norms being cut, illustrated in Figure 4(a), and another where there is no noise or norm-bounding, illustrated in Figure 4(b). Results demonstrate that _FedVal_ performs the best in both cases. This can be attributed to its natural ability to handle heterogeneous data and the ability not to ignore any good client models due to any set limit on the potential number of malicious clients. The results indicate that both _FedVal_ and LFR are able to remove all malicious updates that would have any noticeable impact, but LFR differs in that it also removes a significant number of benign updates, since 40% of clients in an LFR update are always removed. Interestingly, in the norm-bounding scenario depicted in Figure 4(a), multi-Krum performs worse than simply averaging (FedAvg). This is likely due to the fact that the norms of the malicious updates are scaled in such a way that they are not deemed malicious by the aggregator. As a result, since half of all clients are removed in our multi-Krum implementation, this leads to a larger percentage of remaining clients being malicious. In addition, in Figure 6, we present results from experiments that are similar to those shown in Figure 5, using the CIFAR-10 dataset. In this experiment, we have introduced artificial heterogeneity by utilizing the LDA with \(\alpha=0.4\) as explained in Section 4.1. Similar to the previous experiment, we compare different baseline algorithms and _FedVal_ under the condition where 10% of present clients are malicious, with the noise and norm-bounding setting on Figure 5(a), and without noise and norm-bounding Figure 5(b). From these experiments, we can see that _FedVal_ and LFR perform similarly under the noise and norm-bounding conditions, while _FedVal_ slightly outperforms LFR without noise and binding norms. It is important to note that _FedVal_ is designed with fairness across labels as a goal, with the aim of improving the accuracy of underrepresented labels. This feature increases the accuracy of the test dataset performed with the FEMNIST dataset since there are underrepresented labels present, such that some labels are present in less quantity. However, with the use of the LDA on CIFAR-10, the underlying data distribution remains homogeneous, but the client data distribution is skewed. This is likely the reason for the difference in results between Figure 5 and 6. On the other hand, we can observe the same trends for CIFAR-10 in Figure 6 as we did for FEMNIST in Figure 5. The results presented in both figures demonstrate the effectiveness of _FedVal_ in improving the performance of FL under various conditions, including label distribution skew and with the presence of poisoning attacks. Furthermore, we conduct experiments with backdoor poisoning attacks on the CIFAR-10 dataset as shown in Figure 7. In the experiment, we manipulate the label of the data by changing the labels of 'horse' images to 'deer'. Specifically, in Figure 6(a), we change 10% of the clients' data, and in Figure 6(b), we change 20% of the clients' data. The accuracy of the backdoor attack in this experiment is measured by how often the model incorrectly predicts a 'horse' image as a 'deer', so the lower accuracy indicates less successful attacks. By investigating the results, we can see that the LFR and _FedVal_ aggregation methods outperform other methods in protecting the model against the backdoor attack. Also, the results show Figure 4: Baseline test accuracy for CIFAR-10 and FEMNIST for different baseline methods. (a) accuracy for CIFAR-10 with IID distribution; (b) accuracy for the naturally heterogeneous FEMNIST dataset. Figure 5: Comparison of accuracy for various aggregation methods applied to the naturally heterogeneous FEMNIST dataset under PGA attack conditions, with and without the application of the differential privacy techniques of binding norms and noise addition. (a) PGA attack scenario with differential privacy techniques, (b) PGA attack scenario without differential privacy techniques. Figure 6: Comparison of accuracy for various aggregation methods applied to the CIFAR-10 dataset, targeted under PGA attack conditions, with simulated heterogeneity in client data distribution using LDA (\(\alpha=0.4\)). (a) PGA attack scenario with differential privacy techniques (binding norms and noise addition), (b) PGA attack scenario without differential privacy techniques. that both LFR and _FedVal_ are able to provide almost full protection for the model, as compared with the no-attack case. This highlights the effectiveness of our method in mitigating the impact of malicious actors attempting to manipulate machine learning models using backdoor attacks. ### Number of malicious clients To ensure robustness, it is important to consider the potential presence of a percentage of malicious clients in a round over a large number of rounds. More details regarding the theoretical scenarios are explained in Appendix A. Following the discussion in Section 2.5 and Appendix A, we decided to investigate the situation where the number of malicious clients exceeds the limit set by robust aggregators in order to understand the capabilities of our method _FedVal_ fully. For other aggregators, as one might expect, the model's performance deteriorates significantly in scenarios where the number of malicious clients exceeds the limit set by a robust aggregator. We conducted experiments with both 40% and 80% of total clients as malicious, illustrated in Figure 8, and the results were quite striking. Despite the high percentage of malicious clients, _FedVal_ was able to converge without any major difficulties as opposed to other methods. This highlights the strength of _FedVal_ as both a robust but also adaptive poisoning defense that is prepared for any type of situation that may occur in a federated learning system. Furthermore, the results of these experiments also demonstrate the robustness of _FedVal_ in the face of a high number of malicious clients. In the standard federated learning system using aggregation methods such as FedAvg, the presence of even a small number of malicious clients can cause serious issues with the convergence and accuracy of the model, as we can see from Figures 5, 6, and 7. However, our method can mitigate the impact of malicious clients and ensure the integrity of the model in various scenarios. Overall, the experimentation in Figure 8 has shown that _FedVal_ is an effective and reliable method for defending against poisoning attacks in federated learning systems, even in situations where the number of malicious clients is higher than what we may expect there to be. ### Time complexity We discussed the time complexity and computational load of running _FedVal_ previously in Section 5.4. To further investigate this issue, we have conducted an experimental study to determine the minimum number of validation elements required for the algorithm to maintain robustness. We present the experiment results in Figure 9, which are conducted using the CIFAR-10 dataset and various attacks. We varied the number of validation elements used in the defense and evaluated the effectiveness of the system under these conditions. The experiment shows that a relatively small number of validation elements is sufficient to ensure robustness. For the untargeted attack illustrated in Figure 8(b), we obtained the unexpected result that checking the loss on a single element for each class is enough to achieve robustness. For the targeted backdoor attack, illustrated in Figure 8(a), 75 total validation elements are required to achieve robustness, which is 7.5 elements per class. Based on these findings, it can be inferred that using 10 validation elements per class provides a considerable degree Figure 8: Performance of various defenses under stress test conditions with a large number of malicious clients conducting poisoning attacks. (a) accuracy with 40% of clients being malicious, (b) accuracy with 80% of clients being malicious. Figure 7: Backdoor performance over rounds for the backdoor task of miss-predicting horses into being deer. Malicious clients perform data poisoning attacks by swapping the labels of ‘horses’ into ‘deer’. The backdoor accuracy is measured by how often the model incorrectly predicts a ‘horse’ image as a ‘deer’. (a) attack accuracy with 10% malicious clients; (b) attack accuracy with 20% malicious clients. Figure 9: Performance of _FedVal_ under poisoning attacks with different numbers of validation elements used for the analysis on CIFAR-10. (a) accuracy for the backdoor task of the malicious clients, (b) overall accuracy. of robustness, given the experimental conditions we've explored. We recommend maintaining a validation dataset that is minimalistic yet comprehensive in scope. This provides important information, as it suggests that the _FedVal_ algorithm is computationally efficient and can be implemented with minimal computational resources. ### Fairness Continuing our experimental study, we will now focus on investigating the impact of heterogeneous data in federated learning. Specifically, we will demonstrate the importance of handling fairness in federated learning, especially in scenarios where certain classes are underrepresented in some clients. To illustrate this point, we have designed a simple scenario using the CIFAR-10 dataset, where 2 of the 10 classes are missing in the 70% of the participating clients. Figure 10 presents the results of this scenario, where classes '4' and '5' are the missing classes. The results indicate that in this scenario, the accuracy for classes '4' and '5' hover around zero for a majority of the tested solutions. Solutions managing to handle the scenario are our proposed solution, _FedVal_ and interestingly also to a certain degree with LFR. This is likely attributed to the fact that LFR would favor the clients with the complete label distribution and remove a few skewed clients each round. This is not sufficient to handle the setting as the clients with the missing labels aggregation needs to be scaled up for the missing labels to be properly addressed, as we can see happening with _FedVal_. We have also implemented FedProx for this scenario, which is a widely-adopted solution for non-IID data distribution in federated learning to prevent client drift. As mentioned, preventing client drift is beneficial in situations where the optimized global model is close to the type of model that a majority of the clients would have. However, in scenarios where some labels are only present in a small subset of all clients, solutions like FedProx falls short, as we can see from the results presented in Figure 10. The results where FedProx is evaluated in combination with _FedVal_ even indicate that FedProx increases the difficulty of extracting the information that is missing in the majority of clients, likely due to the penalty term, which prevents the clients from moving the model too much, resulting in all clients having more similar models as the ones with missing labels. On the other hand, by utilizing re-weighting schemes, _FedVal_ is able to salvage this. These results demonstrate the importance and significance of fairness in the federated learning system, especially when many clients have underrepresented or missing classes. The results also highlight the catastrophic forgetting in the federated setting [29, 17], which can be a crucial issue to consider, especially in practical and real-life scenarios, where some labels likely will be less common and not present in all clients. Relating to the experiment in Figure 10, we also investigate the accuracy of the model thoroughly across all labels in the FEMNIST dataset. As mentioned in Section 4.1, there is a quite noticeable quantity skew across classes. Specifically, considering the division between numbers, capital letters, and lowercase letters, each class of Numbers has approximately 4 times more elements than the Letters, and Capital Letters Figure 11: Fairness experiment on the naturally heterogenous FEMNIST and PUMS ACSIncoms dataset. (a) average accuracy for the final 5 rounds on each class group on FEMNIST, (b) mean absolute deviation across all labels on FEMNIST, (c) average recall for final 10 rounds on different groups in ACSIncome. Figure 10: Test on CIFAR-10 non-IID. Two labels are missing in a majority of the clients. The clients who have the missing labels are IID. are more common than Lowercase Letters. With this type of distribution, we can expect the model to quickly learn how to correctly classify Numbers, and struggle with correctly classifying capital and lowercase letters in comparison. In Figure 11, we illustrate the performance of different aggregation methods on different label classes. Figure (a)a illustrates the accuracy division across label groups, and Figure (b)b illustrates the mean absolute deviation across all labels. From Figure (a)a, we can see that by simply averaging through FedAvg or using methods that prevent client drift, such as FedProx, a large bias is created toward the more common groups. The models created by these methods tend to bias toward the more common groups (the Numbers group in our case) and neglect the less common groups, which are the Lowercase Letters. However, most of the robust aggregators happen to create less bias in the model. The most noticeable difference is with _FedVal_, which manages to almost double the accuracy of the less common Lowercase Letters, more exact from 32% with FedAvg to 53% with _FedVal_, while still managing to provide similar accuracy for the more common classes. On the other hand, other methods struggle with creating a model that predicts the Lowercase Letters correctly. In Figure (c)c, we have conducted an analysis utilizing the PUMS ACSIncome dataset [9]. This particular dataset is frequently employed to examine fairness issues [12, 15]. The focus of our study was to scrutinize the recall rate (true positive) for minority groups. Historically, ML models have demonstrated a bias towards groups with fewer data points, resulting in a lower rate of accurate positive predictions. In our experiment, we extended the dimensions of _FedVal_ to encompass discrepancies in recall rates, aiming to equalize recall across all classes. This experiment's findings underscore _FedVal's_ dynamic capacity to adjust a model where necessary. By extending the algorithm to include recall analysis, we observed a significant improvement in recall rates between the "Indigenous of North America" and "Other" groups. Notably, these are groups where other methods have previously struggled to generate accurate positive predictions based on true labels. In conclusion, to improve the accuracy of the model across all labels and ensuring we have a balanced model, we can consider using more advanced aggregation methods such as _FedVal_. Overall, it is important to consider the dataset distribution and potential biases when training and evaluating FL models to ensure that the model is accurate and fair across all classes and demographic groups. ### Summary Many existing solutions tend to overlook other critical problems in Federated Learning, often exacerbating these issues or introducing new security and privacy challenges. In contrast, our proposed method, _FedVal_, has shown resilience against both model and data poisoning attacks. It even thrives when a substantial majority of clients are malicious. For example, _FedVal_ manages to converge even with 80% of clients performing model poisoning attacks. This is a significant improvement over methods like multi-Krum and LFR, which fail to converge when faced with merely 40% malicious clients. Furthermore, when faced with 20% of clients deploying backdoor data poisoning attacks, both _FedVal_ and LFR maintain their performance levels. In the same scenario, multi-Krum's performance drops drastically, even falling behind the no-defense FedAvg aggregator. In addressing data distribution skews, _FedVal_ outshines other methods, salvaging problematic situations where others struggle, as illustrated in the CIFAR-10 experiment in Figure 10. Moreover, our fairness experiment using the ACSIncome dataset showed that _FedVal_ significantly improves the recall of underrepresented groups from 19% with FedAvg to 50%, an increase of over 30%. FedProx, which is specifically designed to handle heterogeneity, performed similarly to FedAvg's simple averaging. Other robust aggregators showed some promise, but still fell short in performance compared to _FedVal_. In conclusion, _FedVal_ consistently outperforms existing methods in terms of robustness to attacks, promoting fairness, and adapting to data distribution skews. Our work highlights the necessity of considering multiple aspects of Federated Learning, from security to fairness, to ensure the robustness and reliability of the learning system. ## 6 Conclusion In this work, we provide a robust solution, _FedVal_, which aims to solve multiple problems by analyzing and utilizing the clients' learning. We propose to do this by using a small server-side validation dataset to asses client updates and determine the optimal aggregation weights considering both robustness and fairness. This technique involves comparing average client performance with client performance over a range of dimensions. The preceding sections of this paper have underscored the potential of the _FedVal_ algorithm in the realm of FL. However, we acknowledge the necessity of extending our investigation to assess its applicability on more general regression problems. The breadth of unexplored challenges within FL suggests a wealth of potential benefits from utilizing client analysis through server-side validation data. It is therefore interesting to delve deeper into these areas, as they could provide strong justification for the broader adoption of _FedVal_ and similar algorithms. Moreover, we also perceive the potential of exploring more sophisticated aggregation methods, exploiting the data acquired through validation. The development of such methods could push the boundaries of what is currently achievable, thereby offering a new dimension of innovation. As we strive for continual improvement in FL, our future research endeavors will focus on unearthing and maximizing the potential inherent in these investigative avenues. ## Availability Code available at: [https://github.com/viktorvaladi/FedVal](https://github.com/viktorvaladi/FedVal)
2310.09282
Phonon thermal transport in UO$_2$ via self-consistent perturbation theory
Computing thermal transport from first-principles in UO$_2$ is complicated due to the challenges associated with Mott physics. Here we use irreducible derivative approaches to compute the cubic and quartic phonon interactions in UO$_2$ from first-principles, and we perform enhanced thermal transport computations by evaluating the phonon Green's function via self-consistent diagrammatic perturbation theory. Our predicted phonon lifetimes at $T=600$ K agree well with our inelastic neutron scattering measurements across the entire Brillouin zone, and our thermal conductivity predictions agree well with previous measurements. Both the changes due to thermal expansion and self-consistent contributions are nontrivial at high temperatures, though the effects tend to cancel, and interband transitions yield a substantial contribution.
Shuxiang Zhou, Enda Xiao, Hao Ma, Krzysztof Gofryk, Chao Jiang, Michael E. Manley, David H. Hurley, Chris A. Marianetti
2023-10-13T17:45:02Z
http://arxiv.org/abs/2310.09282v2
# Phonon thermal transport in \(\mathbf{U}\mathbf{O}_{2}\) via self-consistent perturbation theory ###### Abstract Computing thermal transport from first-principles in \(\mathbf{U}\mathbf{O}_{2}\) is complicated due to the challenges associated with Mott physics. Here we use irreducible derivative approaches to compute the cubic and quartic phonon interactions in \(\mathbf{U}\mathbf{O}_{2}\) from first-principles, and we perform enhanced thermal transport computations by evaluating the phonon Green's function via self-consistent diagrammatic perturbation theory. Our predicted phonon lifetimes at \(T=600\) K agree well with our inelastic neutron scattering measurements across the entire Brillouin zone, and our thermal conductivity predictions agree well with previous measurements. Both the changes due to thermal expansion and self-consistent contributions are nontrivial at high temperatures, though the effects tend to cancel, and interband transitions yield a substantial contribution. + Footnote †: preprint: APS/123-QED Uranium dioxide (\(\mathbf{U}\mathbf{O}_{2}\)) has attracted a great deal of research interest since the 1950s, both as a standard nuclear fuel and as a fundamental system of rich physics induced by the partially filled \(f\) shell [1; 2]. As thermal transport is critical in nuclear fuels, phonon thermal transport in \(\mathbf{U}\mathbf{O}_{2}\) has been extensively studied both by experiments [3; 4; 5; 6; 7] and from first-principles [8; 9; 10; 11; 12; 13; 14]. However, wide-ranging results were obtained from first-principles computations, and a robust consensus has not yet merged (see Sec. I in Supplemental Material (SM) [15] for a detailed discussion of the different approaches). While low-temperature thermal conductivity is substantially complicated by magnons, room temperature and beyond should be dominated by phonon thermal transport. However, accurately computing phonon interactions in \(\mathbf{U}\mathbf{O}_{2}\) from first-principles is complicated due to the complex interplay of Mott physics, magnetic order, and spin-orbit coupling (SOC). Here we circumvent these technical challenges by employing \(f\)-orbital occupation matrix control (OMC) [16; 17; 18; 19] and the 3\(\mathbf{k}\) antiferromagnetic (AFM) ground state obtained by our previous study [20], which provides a robust description of the ground state and phonons as compared to experiment. Phonon thermal conductivity has been reliably computed in band insulators by solving the linearized phonon Peierls-Boltzmann transport equation (BTE) from first-principles using scattering rates computed within leading order perturbation theory [21; 22; 23; 24]. This _de facto_ standard approach for computing phonon thermal conductivity, as implemented by multiple publicly available software packages [25; 26; 27; 28], solves the BTE using cubic phonon interactions and the imaginary part of the bare bubble diagram. Naturally, non-trivial inaccuracy will occur under extreme conditions (e.g., high temperatures) where perturbation theory is inadequate. More recently, quartic phonon interactions have been incorporated using the imaginary part of the sunset diagram [29; 30], and the contribution of interband phonon transitions has been addressed by a generalization of the BTE, known as the Wigner transport equation (WTE) [31; 32]. Here, we go beyond the current state of the art for computing thermal conductivity, which only uses the imaginary parts of the bubble and sunset diagrams, by using self-consistent diagrammatic perturbation theory to compute the single phonon Green's function [33]. In the present work, we use density functional theory plus \(U\) (DFT+\(U\)) [34] to compute the cubic and quartic phonon interactions, which are then used to compute the inelastic neutron scattering (INS) function and thermal conductivity. Both the scattering function and the thermal conductivity are computed using increasingly sophisticated levels of theory, including bare perturbation theory and self-consistent perturbation theory [33]. For the latter, two different levels of self-consistency are employed: the Hartree-Fock (HF) approximation for phonons and quasiparticle perturbation (QP) theory. The former is the traditional variational approach of Hooton [35], where the four phonon loop diagram is evaluated self-consistently, and the latter self-consistently evaluates both the four phonon loop diagram and the real part of the three phonon bubble diagram [33]. Following Ref. [33], the self-consistency scheme and the subsequent diagrams evaluated to construct the phonon self-energy are indicated by the notation \(\mathcal{S}^{A}_{ijk...}\), where \(A\in\{o,HF,QP\}\) labels the self-consistency scheme and \(i,j,k,...\) indicate all diagrams evaluated post self-consistency. The colloquial diagram names bubble, loop, and sunset are abbreviated as \(b\), \(l\), and \(s\), respectively, while the self-consistency schemes \(o\), \(HF\), and \(QP\) correspond to the bare, Hartree-Fock, and quasiparticle Green's function, respectively. For example, the imaginary part of the phonon self-energy used in the standard thermal conductivity approach [22] is obtained from \(\mathcal{S}^{o}_{b}\); and the approach in Ref. [30], which em ploys quartic phonon interactions using the imaginary part of the sunset diagram, is obtained from \(\mathcal{S}_{bs}^{o}\). For each scheme we employ, both BTE and WTE are applied within the relaxation time approximation (RTA). For \(\mathcal{S}_{b}^{o}\), the full solution to the BTE is also obtained, yielding results very close to the RTA (see Sec. II of SM [15]), as is consistent with previous results for ThO\({}_{2}\)[36] and CaF\({}_{2}\)[37; 36]. To include the effects of the thermal expansion, the phonons and phonon interactions are computed at three expanded volumes, according to the experimental thermal expansion coefficients at \(T=360\), 600, and 1000 K [38]. These computed results are linearly interpolated or extrapolated to temperatures from 0 to 1400 K. Our DFT+\(U\) calculations were carried out using the projector augmented-wave (PAW) method [40; 41], as implemented in the Vienna ab initio Simulation Package (VASP) code [42; 43]. The exchange correlation functional employed in our DFT+\(U\) calculations was the generalized gradient approximation (GGA) as formulated by Perdew, Burke, and Ernzerhof (PBE) [44], due to its overall better accuracy for phonons in UO\({}_{2}\) (see Sec. VIII of SM [15]). We used the rotationally invariant DFT+\(U\) approach of Dudarev _et al_. [45], which only employs a single effective interaction, and \(U=4\) eV was used throughout. SOC was included in all calculations. We customized the VASP code to initialize and monitor the occupation matrices during the calculations [20], and the initial values of the occupation matrices were taken from our previous work (i.e., \(\mathbb{S}_{0}\)) [20]. The cubic and quartic phonon interactions were calculated via the bundled irreducible derivative (BID) approach [46]. More information on the phonon interaction calculations, including supercell size, \(k\)-point mesh, and Born effective charges, is provided in Sec. II of SM [15]. Details of the thermal conductivity calculations are also provided in Sec. II of SM [15]. The scattering function \(S(\mathbf{Q},\omega)\) at \(T=600\) K was measured using the Angular Range Chopper Spectrometer (ARCS) with an incident neutron energy \(E_{i}=120\) meV and ARCS-100-1.5-AST Fermi chopper [47]. Further details of the UO\({}_{2}\) crystal and ARCS measurements have been reported previously [48]. The ARCS energy resolution functions [47] were used in fitting the phonon peaks, and the reported widths are the intrinsic full-width half-maximum (FWHM) values that have been corrected for the instrument contribution. The ARCS instrument measures a large volume in Q and E, which contains many Brillouin zones, and the data analysis allows for an adjustable size of \(q\)-voxel, a finite volume in reciprocal space associated with some \(q\)-point, in all crystallographic directions. The \(q\)-voxel size is normally chosen to be as small as possible, with the minimum being dictated by having sufficient counting statistics, and the resulting scattering function is normally inherently broadened due to this issue [36]. To make a meaningful comparison against experiment, the usage of the experimental \(q\)-voxel must be accounted for within theory. Details of the computation are reported in Ref. [36], and the \(q\)-voxel information is included in Sec. IV of SM [15]. We begin by considering the INS scattering function, which was measured at \(T=600\) K. The scattering function can be decomposed into components from \(n\)-phonon contributions, and the dominant peaks in the spectra arise from the one phonon contributions. The one phonon scattering function can be obtained from the phonon self-energy, which can be computed using standard tools from many-body physics [36]. Given that \(T=600\) K is still a modest temperature, we will demonstrate that the bare bubble and loop diagrams are still sufficient to reasonably capture the INS scattering function peak width. We begin by plotting the phonon linewidths computed using \(\mathcal{S}_{b}^{o}\), which are overlaid on the phonon dispersion (see Fig. 1 (a)). The branch naming convention follows Ref. [49]. Generally, the acoustic branches have a much smaller FWHM than the optical branches, as is expected. For the INS scattering function, we first consider the particular example of \(\mathbf{Q}=[0.6,0.6,6.6]\), where we individually illustrate the effects of the \(q\)-voxel and energy Figure 1: (a) The unfolded phonon dispersion at \(T=0\) K, computed using GGA+\(U\)+SOC (\(U=4\) eV) in the 3**k** AFM state. The hollow points were directly computed using DFT, while the corresponding curves are Fourier interpolations. The width of the line shading represents the FWHM computed at \(T=600\) K using \(\mathcal{S}_{b}^{o}\). (b) \(S(\mathbf{Q},\omega)\) at \(\mathbf{Q}=[0.6,0.6,6.6]\) and \(T=600\) K, computed using \(\mathcal{S}_{b}^{o}\). For direct comparison, the INS instrumental energy resolution (ER) is accounted for in the solid curves [39]. The \(q\)-voxel dimensions used in both the measurement and computation are 0.075, 0.2, and 0.2 reciprocal lattice units (r.l.u.) along the [L, L, L], [0, 0, L], and [-H, H, 0] directions, respectively. resolution (see Fig. 1 (b)). The peak at approximately 12 meV corresponds to the TA mode, whereas the LA mode at approximately 20 meV is barely observable due to the weighting factors in the scattering function. Clearly, both the \(q\)-voxel and energy resolution must be considered to make a meaningful comparison with INS measurements. The favorable agreement suggests that the level of theory we are using is robust, but a detailed comparison across the entire Brillouin zone is still needed. We now proceed to comprehensively compare the computational and experimental results for the FWHMs of the scattering function peaks across the Brillouin zone (see Fig. 2). Following standard INS conventions, the energy resolution is removed from the peak width, and the theoretical results are presented for both the \(q\)-point and the \(q\)-voxel that was used in INS. Overall, there is favorable agreement across all modes and \(q\)-paths, indicating that the cubic phonon interactions computed using DFT+\(U\) and the bubble diagram used to evaluate the self-energy are sufficient to describe experiment at \(T=600\) K. We reevaluated Fig. 2 using \(\mathcal{S}_{lb}^{HF}\) and \(\mathcal{S}_{lb}^{QP}\), while accounting for thermal expansion, and the net changes are found to be modest at \(T=600\) K (see Sec. VII in SM [15]). The thermal conductivity will first be explored at up to \(T=1400\) K using bare perturbation theory (see Fig. 3(a)), where the imaginary parts of the bubble and sunset diagrams will be considered. Recall that we are not accounting for magnons, and thus our results will not describe experiment below \(T\approx 400\) K. The \(\mathcal{S}_{b}^{Q}\) BTE results using the imaginary part of the bubble diagram are reasonable above \(T=400\) K as compared to experiment [4; 5; 6; 7]. This favorable agreement is anticipated from our preceding favorable comparison with INS. However, the result systematically underpredicts experiment at high temperatures, and therefore it is compelling to include the quartic phonon interactions. We begin by using the imaginary parts of both the bare bubble and sunset diagrams (\(\mathcal{S}_{bs}^{o}\)), where the latter purely uses quartic interactions, and the result is pushed further away from experiment by a small amount. This same effect was previously observed in Ref. [14] (see SM [15], Fig. S1), though the value of their results are strongly underpredicted relative to our own (see Sec. I in SM [15]). Interestingly, above room temperature, roughly 70% of the thermal conductivity arises from phonon modes with energies below 24 meV, and therefore the optical modes do play a nontrivial role in thermal transport (see Sec. V in SM [15]). We proceed by including interband phonon contributions via the WTE, which increases the thermal conductivity monotonically with temperature, yielding good agreement with experiment. At \(T=1400\) K, the interband portion contributes about 30% of the total thermal conductivity, which is non-trivial. While the current approximation yields reasonable agreement with experiment, it is important to use a logically consistent approach where the real portion of the self-energy is not discarded and the effects of thermal expansion are included (see Fig. 3(b)). When thermal expansion is included in the thermal conductivity calculation, a non-trivial decrease in thermal conductivity is observed: at \(T=1400\) K, the decrease of the \(\mathcal{S}_{bs}^{o}\) BTE result due to thermal expansion is nearly 50%. This strong decrease may explain the relatively low predicted thermal conductivity in Ref. [49], which used \(\mathcal{S}_{b}^{o}\) BTE and included thermal expansion. While the interband contribution contained in the \(\mathcal{S}_{bs}^{o}\) WTE result increases the predicted value, the thermal conductivity is still underpredicted as compared to experiment. However, it is still necessary to account for the real part of the phonon self-energy, and explore the possibility of accounting for higher order diagrams via self-consistent perturbation theory [33]. Nominally, we would expect the QP result to be better than HF as it sums additional diagrams which are known to be relevant [33]. We first consider \(\mathcal{S}_{bs}^{HF}\) WTE, which notably increases the thermal conductivity at high temperatures. This result is anticipated, given that the HF approximation nominally increases the effective harmonic frequencies, decreasing the phonon lifetime contribution from the bubble. The \(\mathcal{S}_{bs}^{QP}\) WTE result is shifted downwards from the HF result, slightly above the bare perturbation theory result using the bubble and sunset diagrams. The underestimation of the experimental thermal conductivity by \(\mathcal{S}_{bs}^{QP}\) might be accounted for by including more diagrams, possibly requiring diagrams with phonon interactions beyond fourth order. Ideally, one would sum all possible diagrams and obtain the exact phonon self Figure 2: FWHMs of the \(S(\mathbf{Q},\omega)\) peaks as a function of \(\mathbf{q}\) in various zones for \(\mathrm{U}\mathrm{O}_{2}\) at \(T=600\) K. The \(\mathcal{S}_{lb}^{o}\)\(q\)-point and \(q\)-voxel results are shown as blue and red curves, respectively. INS results are shown as black points. energy, which can be done in the classical limit using molecular dynamics [33]. Another possibility for the discrepancy is that the PBE exchange correlation functional is not sufficiently describing the phonon interactions. In summary, we have computed the scattering function and thermal conductivity of UO\({}_{2}\) from first-principles using various levels of self-consistent perturbation theory and compared to our own INS experiments and existing thermal conductivity experiments. The relevant contributions of this work include accurately describing the phonon interactions in UO\({}_{2}\) from first principles and illustrating the effects of improving the quality of the single particle phonon Green's function on the thermal conductivity. Favorable agreement between our theory and INS experiment is obtained for the FWHM of the scattering function across the Brillouin zone. In terms of quantitatively computing the thermal conductivity at high temperatures, we find that thermal expansion decreases the thermal conductivity while interband transitions increases the thermal conductivity, and these effects are of similar magnitude. Including quartic phonon interactions at the level of the bare sunset diagram causes a small decrease in thermal conductivity, while the self-consistent perturbation theory yielded moderate and appreciable increases for the quasiparticle and Hartree-Fock procedures, respectively. Aside from low temperatures where magnons play an important role, phonon thermal transport in UO\({}_{2}\) is now well characterized from first-principles. This work is supported by the Center for Thermal Energy Transport under Irradiation, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE) Office of Basic Energy Sciences. This research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by the ORNL. This research made use of Idaho National Laboratory computing resources, which are supported by the DOE Office of Nuclear Energy and the Nuclear Science User Facilities under contract no. DE-AC07-05ID14517. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The unfolding of phonons and phonon interactions was supported by grant DE-SC0016507 funded by the U.S. Department of Energy, Office of Science. ## References * Lander and Caciuffo [2020]G. H. Lander and R. Caciuffo, Journal of Physics: Condensed Matter **32**, 374001 (2020). * Hurley _et al._ [2022]D. H. Hurley, A. El-Azab, M. S. Bryan, M. W. D. Cooper, C. A. Dennett, K. Gofryk, L. He, M. Khafizov, G. H. Lander, M. E. Manley, J. M. Mann, C. A. Marianetti, K. Rickert, F. A. Selim, M. R. Tonks, and J. P. Whrary, Chemical Reviews **122**, 3711 (2022). * Gofryk _et al._ [2014]K. Gofryk, S. Du, C. R. Stanek, J. C. Lashley, X.-Y. Liu, R. K. Schulze, J. L. Smith, D. J. Safarik, D. D. Byler, K. J. McClellan, B. P. Uberuaga, B. L. Scott, and D. A. Andersson, Nature Communications **5**, 4551 (2014). * Fink [2000]J. K. Fink, Journal of Nuclear Materials **279**, 1 (2000). * Bates [1965]J. L. Bates, Nuclear Science and Engineering **21**, 26 (1965). Figure 3: Thermal conductivity computed using GGA+\(U\)+SOC (\(U=4\) eV) and comparing with experiments [4; 5; 6; 7]. Panel (a) presents the bare perturbation theory results without including thermal expansion. Panel (b) presents the results with thermal expansion and self-consistent perturbation theory. * Godfrey _et al._ (1965)T. G. Godfrey, W. Ful'kerson, T. G. Kollie, J. P. Moore, and D. L. McELROY, Journal of the American Ceramic Society **48**, 297 (1965). * Ronchi _et al._ (2004)C. Ronchi, M. Sheindlin, D. Staicu, and M. Kinoshita, Journal of Nuclear Materials **327**, 58 (2004). * Yin and Savrasov (2008)Q. Yin and S. Y. Savrasov, Physical Review Letters **100**, 225504 (2008). * Kaur _et al._ (2013)G. Kaur, P. Panigrahi, and M. C. Valsakumar, Modelling and Simulation in Materials Science and Engineering **21**, 065014 (2013). * Mei _et al._ (2014)Z.-G. Mei, M. Stan, and J. Yang, Journal of Alloys and Compounds **603**, 282 (2014). * Wang _et al._ (2015)B.-T. Wang, J.-J. Zheng, X. Qu, W.-D. Li, and P. Zhang, Journal of Alloys and Compounds **628**, 267 (2015). * Torres and Kaloni (2019)E. Torres and T. P. Kaloni, Journal of Nuclear Materials **521**, 137 (2019). * Torres _et al._ (2020)E. Torres, I. CheikNijfon, T. P. Kaloni, and J. Pencer, Computational Materials Science **177**, 109594 (2020). * Yang _et al._ (2022)X. Yang, J. Tiwari, and T. Feng, Materials Today Physics **24**, 100689 (2022). * (15)See Supplemental Materials at [link] for information about previous DFT+\(U\) studies on thermal conductivity of \(\text{U}\text{O}_{2}\), the computational details and results of thermal conductivity, and the values of computed irreducible derivatives in this work. See also Refs. [50, 51, 52, 53, 54, 55, 56, 57, 58, 59]. * Dorado _et al._ (2009)B. Dorado, B. Amadon, M. Freyss, and M. Bertolus, Physical Review B **79**, 235125 (2009). * Amadon _et al._ (2008)B. Amadon, F. Jollet, and M. Torrent, Physical Review B **77**, 155104 (2008). * Jomard _et al._ (2008)G. Jomard, B. Amadon, F. Bottin, and M. Torrent, Physical Review B **78**, 075125 (2008). * Zhou and Ozolins (2011)F. Zhou and V. Ozolins, Physical Review B **83**, 085106 (2011). * Zhou _et al._ (2022)S. Zhou, H. Ma, E. Xiao, K. Gofryk, C. Jiang, M. E. Manley, D. H. Hurley, and C. A. Marianetti, Physical Review B **106**, 125134 (2022). * Broido _et al._ (2005)D. A. Broido, A. Ward, and N. Mingo, Physical Review B **72**, 014308 (2005). * Broido _et al._ (2007)D. A. Broido, M. Malorny, G. Birner, N. Mingo, and D. A. Stewart, Applied Physics Letters **91**, 231922 (2007). * Broido _et al._ (2012)D. A. Broido, L. Lindsay, and A. Ward, Physical Review B **86**, 115203 (2012). * Chaput (2013)L. Chaput, Physical Review Letters **110**, 265506 (2013). * Li _et al._ (2014)W. Li, J. Carrete, N. A. Katcho, and N. Mingo, Computer Physics Communications **185**, 1747 (2014). * Tadano _et al._ (2014)T. Tadano, Y. Gohda, and S. Tsuneyuki, Journal of Physics: Condensed Matter **26**, 225402 (2014). * Togo _et al._ (2015)A. Togo, L. Chaput, and I. Tanaka, Physical Review B **91**, 094306 (2015). * Chernatynskiy and Phillpot (2015)A. Chernatynskiy and S. R. Phillpot, Computer Physics Communications **192**, 196 (2015). * Feng and Ruan (2016)T. Feng and X. Ruan, Physical Review B **93**, 045202 (2016). * Feng _et al._ (2017)T. Feng, L. Lindsay, and X. Ruan, Physical Review B **96**, 161201 (2017). * Simoncelli _et al._ (2019)M. Simoncelli, N. Marzari, and F. Mauri, Nature Physics **15**, 809 (2019). * Simoncelli _et al._ (2022)M. Simoncelli, N. Marzari, and F. Mauri, Physical Review X **12**, 041011 (2022). * Xiao and Marianetti (2023)E. Xiao and C. A. Marianetti, Physical Review B **107**, 094303 (2023). * Anisimov _et al._ (1997)V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, Journal of Physics: Condensed Matter **9**, 767 (1997). * Hooton (1955)D. Hooton, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science **46**, 422 (1955). * Xiao _et al._ (2022)E. Xiao, H. Ma, M. S. Bryan, L. Fu, J. M. Mann, B. Winn, D. L. Abernathy, R. P. Hermann, A. R. Khanolkar, C. A. Dennett, D. H. Hurley, M. E. Manley, and C. A. Marianetti, Physical Review B **106**, 144310 (2022). * Qi _et al._ (2016)Y.-Y. Qi, T. Zhang, Y. Cheng, X.-R. Chen, D.-Q. Wei, and L.-C. Cai, Journal of Applied Physics **119**, 095103 (2016). * Momin _et al._ (1991)A. C. Momin, E. B. Mirza, and M. D. Mathews, Journal of Nuclear Materials **185**, 308 (1991). * Abernathy _et al._ (2012)D. L. Abernathy, M. B. Stone, M. Loguillo, M. Lucas, O. Delaire, X. Tang, J. Lin, and B. Fultz, Review of Scientific Instruments **83**, 015114 (2012). * Blochl (1994)P. E. Blochl, Physical Review B **50**, 17953 (1994). * Kresse and Joubert (1999)G. Kresse and D. Joubert, Physical Review B **59**, 1758 (1999). * Kresse and Hafner (1993)G. Kresse and J. Hafner, Physical Review B **47**, 558 (1993). * Kresse and Furthmuller (1996)G. Kresse and J. Furthmuller, Physical Review B **54**, 11169 (1996). * Perdew _et al._ (1996)J. P. Perdew, K. Burke, and M. Ernzerhof, Physical Review Letters **77**, 3865 (1996). * Dudarev _et al._ (1998)S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, Physical Review B **57**, 1505 (1998). * Fu _et al._ (2019)L. Fu, M. Kornbluth, Z. Cheng, and C. A. Marianetti, Physical Review B **100**, 014303 (2019). * Lin _et al._ (2019)J. Y. Lin, A. Banerjee, F. Islam, M. D. Le, and D. L. Abernathy, Physica B: Condensed Matter **562**, 26 (2019). * Bryan _et al._ (2020)M. S. Bryan, L. Fu, K. Rickert, D. Turner, T. A. Prusnick, J. M. Mann, D. L. Abernathy, C. A. Marianetti, and M. E. Manley, Communications Physics **3**, 1 (2020). * Pang _et al._ (2013)J. W. L. Pang, W. J. L. Buyers, A. Chernatynskiy, M. D. Lumsden, B. C. Larson, and S. R. Phillpot, Physical Review Letters **110**, 157401 (2013). * Wang _et al._ (2013)B.-T. Wang, P. Zhang, R. Lizarraga, I. Di Marco, and O. Eriksson, Physical Review B **88**, 104107 (2013). * Gonze and Lee (1997)X. Gonze and C. Lee, Physical Review B **55**, 10355 (1997). * Mathis _et al._ (2022)M. A. Mathis, A. Khanolkar, L. Fu, M. S. Bryan, C. A. Dennett, K. Rickert, J. M. Mann, B. Winn, D. L. Abernathy, M. E. Manley, D. H. Hurley, and C. A. Marianetti, Physical Review B **106**, 014314 (2022). * Idiri _et al._ (2004)M. Idiri, T. Le Bihan, S. Heathman, and J. Rebizant, Physical Review B **70**, 014113 (2004). * Santini _et al._ (2009)P. Santini, S. Carretta, G. Amoretti, R. Caciuffo, N. Magnani, and G. H. Lander, Reviews of Modern Physics **81**, 807 (2009). * Bryan _et al._ (2019)M. S. Bryan, J. W. L. Pang, B. C. Larson, A. Chernatynskiy, D. L. Abernathy, K. Gofryk, and M. E. Manley, Physical Review Materials **3**, 065405 (2019). * Agency (1965)I. A. E. Agency, _Thermodynamic and transport properties of uranium dioxide and related phases_ (1965). * Ronchi _et al._ (1999)C. Ronchi, M. Sheindlin, M. Musella, and G. J. Hyland, Journal of Applied Physics **85**, 776 (1999). * Kim _et al._ (2014)H. Kim, M. H. Kim, and M. Kaviany, Journal of Applied Physics **115**, 123510 (2014). * Pavlov _et al._ (2017)T. R. Pavlov, M. R. Wenman, L. Vlahovic, D. Robba, R. J. M. Konings, P. Van Uffelen, and R. W. Grimes, Acta Materialia **139**, 138 (2017). # Supplemental materials to Phonon thermal transport in \(\mathbf{U}\mathbf{O}_{2}\) via self-consistent perturbation theory Shuxiang Zhou\({}^{1}\), Enda Xiao\({}^{2}\), Hao Ma\({}^{3,4}\), Krzysztof Gofryk\({}^{1}\), Chao Jiang\({}^{1}\), Michael E. Manley\({}^{3}\), David H. Hurley\({}^{1}\), and Chris A. Marianetti\({}^{5}\) \({}^{1}\)Idao National Laboratory, Idaho Falls, Idaho 83415, USA \({}^{2}\)Department of Chemistry, Columbia University, New York, New York 10027, USA \({}^{3}\)Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA \({}^{4}\)Department of Thermal Science and Energy Engineering, University of Science and Technology of China, Hefei, Anhui 230026, China \({}^{5}\)Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027, USA ###### Abstract Given that phonon interactions are not normally tabulated in publications, and are not provided in the aforementioned publications, it is not possible to directly scrutinize them. However, we can assess the phonon frequencies, which are far more straightforward to compute. The phonons computed by the aforementioned studies are collected and compared with experiment (see Fig. S2) (note that Ref. [2]'s phonons are provided by Ref. [12]), with the exception of Ref. [5; 6]. Ref. [6] only provides folded phonon dispersion in a supercell, and Ref. [5] does not provide phonon dispersion calculated by DFT+\(U\). All other studies provide phonon dispersion calculated by DFT+\(U\) at zero temperature, except Ref. [3] which computes the phonons at the volume corresponding to \(T=295\) K. Multiple nontrivial anomalies can be identified, such as the inversion of the acoustic modes between \(\Gamma\) and \(L\)[1; 3], the overestimation of the LO1 and TO1 branches [1; 3; 7], the breaking of cubic symmetry [1; 4], and large oscillations in the highest optical branch [3; 12]. Given these nontrivial discrepancies in the phonon frequencies, it would not be unexpected to find corresponding discrepancies in the phonon interactions. Computational details and convergence of thermal conductivity In DFT calculations, a plane-wave cutoff energy of 550 eV was used, and the energy convergence criterion was \(10^{-6}\) eV. The phonons were calculated in our previous work [13], while in this work we increased the supercell size (see Sec. III of the Supplemental Materials). The cubic and quartic irreducible derivatives were computed at six finite displacement amplitudes which were then used to construct quadratic error tails, ensuring that the discretization error was properly extrapolated to zero. Above the Neel temperature \(T_{N}\), the space group of \(\text{U}\text{O}_{2}\) is \(Fm\bar{3}m\) and the point group symmetry of the uranium site is \(O_{h}\). The face-centered cubic lattice vectors are encoded in a \(3\times 3\) row stacked matrix \(\hat{\mathbf{a}}=\frac{a_{\text{g}}}{2}(\hat{\mathbf{J}}-\hat{\mathbf{1}})\), where \(\hat{\mathbf{1}}\) is the identity matrix and \(\hat{\mathbf{J}}\) is a matrix in which each element is 1. The Brillouin zone is discretized using a real space supercell \(\hat{\mathbf{S}}_{BZ}\hat{\mathbf{a}}\), where \(\hat{\mathbf{S}}_{BZ}\) is an invertible matrix of integers which produces superlattice vectors that satisfy the point group [14]. Below \(T_{N}\), the 3k AFM ordering breaks the symmetry of the \(Fm\bar{3}m\) space group, and has a primitive cell of \(\hat{\mathbf{S}}_{C}\hat{\mathbf{a}}\), where \(\hat{\mathbf{S}}_{C}\) generates the conventional cubic cell (multiplicity of 4) and is defined as \(\hat{\mathbf{S}}_{C}=\hat{\mathbf{J}}-2\hat{\mathbf{1}}\). When computing third order irreducible derivatives (ID), we use the supercell \(\hat{\mathbf{S}}_{O}\hat{\mathbf{a}}\) and extract all third order IDs consistent with the supercell, where \(\hat{\mathbf{S}}_{O}=\hat{\mathbf{S}}_{C}\hat{\mathbf{S}}_{C}=4\hat{\mathbf{1} }-\hat{\mathbf{J}}\). The supercell \(\hat{\mathbf{S}}_{O}\hat{\mathbf{a}}\) has a multiplicity of 16 (48 atoms). When computing fourth order IDs, we use the primitive cell \(\hat{\mathbf{S}}_{C}\hat{\mathbf{a}}\) and extract all fourth order IDs consistent with the primitive cell. For a 3k AFM calculation with a primitive cell \(\hat{\mathbf{S}}_{C}\hat{\mathbf{a}}\), a \(12\hat{\mathbf{1}}\)\(\Gamma\)-centered \(k\)-point mesh was applied; for larger supercell calculations, the \(k\)-point density was approximately held constant. In order to compare with the INS scattering function, we unfold the 3k AFM IDs back to the original primitive unit cell \(\hat{\mathbf{a}}\), averaging any translational symmetry breaking due to the antiferromagnetism. The Born effective charges (\(Z_{U}^{\star}=5.54\) and \(Z_{O}^{\star}=-2.77\)) of the U and O ions and the dielectric constant (\(\epsilon=5.69\)) were used to account for LO-TO splitting [15; 16]. We test the convergence of thermal conductivity calculations by varying the interpolation grid from \(6\hat{\mathbf{1}}\) to \(12\hat{\mathbf{1}}\) in BTE calculations (see Fig. S3). Both \(\mathcal{S}_{b}^{o}\) and \(\mathcal{S}_{s}^{o}\) results are approximately converged using \(12\hat{\mathbf{1}}\) interpolation grid, which is used in all calculations in the main manuscript. The linearized Boltzmann transport equation (BTE) is also solved for \(\mathcal{S}_{b}^{o}\), and yields approximately identical results to the RTA with different interpolation grids (see Fig. S4). ## III Phonon dispersion calculations The computational details of phonon dispersion calculations were already reported in our previous work [13], and in this work we increased the supercell from \(\hat{\mathbf{S}}_{BZ}=2\hat{\mathbf{S}}_{C}\) to \(\hat{\mathbf{S}}_{BZ}=4\hat{\mathbf{1}}\) (see Fig. S5 for a comparison). When \(\hat{\mathbf{S}}_{BZ}=4\hat{\mathbf{1}}\), the \(q\)-point \((0.25,0.25,0.25)\) is directly computed. The good agreement between \(\hat{\mathbf{S}}_{BZ}=2\hat{\mathbf{S}}_{C}\) and \(\hat{\mathbf{S}}_{BZ}=4\hat{\mathbf{1}}\) means the supercell size is sufficient to converge the phonon dispersion. \(q\)-Voxel dimension information The \(q\)-voxel dimensions used in Fig. 3 in main text are listed in Tables S1, S2, and S3. In all of the aforementioned tables, the first column specifies the phonon branch; the second column specifies a line segment in reciprocal space; the third, fourth, and fifth columns specify the three voxel dimensions in reciprocal lattice units. The spectral and cumulative thermal conductivity in BTE calculations In this section, we report the calculated spectral and cumulative thermal conductivity as functions of phonon energy at \(T=300\) K, \(600\) K, and \(1200\) K in \(\mathcal{S}_{bs}^{o}\) BTE calculations, shown in Fig. S6. At all three temperatures, roughly \(70\%\) of thermal conductivity arises from phonon modes with energies below \(24\) meV, and therefore the optical modes do play a nontrivial role in thermal transport. Our GGA+\(U\)+SOC phonon dispersion has very good agreement with experiment [13], except for a small offset in the highest optical phonon branch, but Fig. S6 indicates that this branch has a negligible influence on the thermal conductivity. Comparison between bare and self-consistent perturbation theory Here the calculated thermal conductivity using self-consistent perturbation theory, HF and QP, is analyzed separately for the bubble diagram and the bubble plus the sunset diagram. Note that for \(\mathcal{S}_{b}^{HF}\) and \(\mathcal{S}_{b}^{QP}\), the quartic phonon interactions still have a contribution due to the fact that the self-consistent Green's function is evaluated using the loop diagram. The effect of thermal expansion is not included in this section. ## VII The effect of thermal expansion on thermal conductivity To address the effect of thermal expansion on thermal conductivity of UO\({}_{2}\), the irreducible derivatives are computed at four different cell volumes, where the lattice parameters \(a=5.5461\), \(5.5602\), \(5.5717\), and \(5.5942\)A are used, corresponding to the experimental percentage change in the lattice parameter at \(T=0\), \(360\), \(600\), and \(1000\) K [17], respectively. These computed results are linearly interpolated or extrapolated to temperatures from \(0\) to \(1400\) K. We first present the phonon linewidth at \(T=600\)K, and compare with INS experiment (see Fig S8 for \(\mathcal{S}_{lb}^{a}\), Fig S9 for \(\mathcal{S}_{lb}^{HF}\), and Fig S10 for \(\mathcal{S}_{lb}^{QP}\) results). The differences between bare and self-consistent perturbation theory are modest. Finally, the thermal conductivity is calculated using bare perturbation theory and self-consistent perturbation theory (see Fig S11). While the thermal conductivity increase due to interband transition and the decrease due to thermal expansion are both non-trivial, they tend to cancel, thus combining both effects only slightly decrease the thermal conductivity in UO\({}_{2}\). Figure S11: Thermal conductivity, computed using GGA+\(U\)+SOC (\(U\) = 4 eV) within (a) bare perturbation theory, (b) HF self-consistent perturbation theory, and (c) QP self-consistent perturbation theory, and comparing with experiments [8; 9; 10; 11]. Thermal expansion is included when computing the irreducible derivatives. ## VIII Energy, structure, and phonon calculations using PBSol In our previous work [13], we presented the results of energy difference between different magnetic ordering, in addition to the lattice parameter, oxygen cage distortion, and phonon dispersion in 3**k** AFM state, using both PBE+\(U\) and LDA+\(U\). Recent studies [6; 7] reported that PBEsol+\(U\) predicts a more accurate lattice parameter than PBE+\(U\) or LDA+\(U\). Therefore, we performed PBEsol+\(U\)+SOC calculations for aforementioned energetic, structural, and phonon properties, and made a comparison with PBE+\(U\)+SOC and LDA+\(U\)+SOC. We first compare the energies of 1**k** AFM and 3**k** AFM states using PBEsol+\(U\)+SOC calculation. The occupation matrix S\({}_{0}\), computed by PBE [13], is used to initialize the PBEsol calculation. Unsurprisingly, during PBEsol calculations, the resulting occupation matrix only slightly changes from the initial value. The energy differences between 1**k** AFM and 3**k** AFM ordering are computed using \(U=2-5\) eV (see Fig. S12). In PBEsol+\(U\), the 3**k** AFM state has the lowest energy when \(U<3.6\) eV, and the 1**k** AFM state has the lowest energy when \(U>3.6\) eV. At \(U=4\) eV, the 1**k** AFM and the 3**k** AFM state are essentially degenerate, as the energy difference is only 1 meV. Additionally, the lattice parameter and oxygen cage distortion are computed in the 3**k** AFM state (see Fig. S13). Generally, the predictions of the lattice parameter and oxygen cage distortion from PBEsol are very close to LDA and PBE. For the lattice parameter, PBEsol's prediction is between PBE and LDA, and \(U=3\) eV gives excellent agreement with experiment. The oxygen cage distortion of PBEsol is slightly larger than both LDA and PBE for \(U=4-5\) eV, and \(U=5\) eV gives favorable agreement with experiment. Finally, the phonon dispersion is computed in the 3**k** AFM state using \(U=4\) eV and \(\hat{\textbf{S}}_{BZ}=2\hat{\textbf{S}}_{C}\) (see Fig. S14). The phonons computed by PBEsol are also between PBE and LDA phonons, mainly due to the different cell volume. For acoustic branches, LDA, PBE, and PBEsol have close predictions, however, for optical branches, PBE still has the overall best agreements except the highest optical branch. Therefore, we consider PBE to have the overall best description of the phonons for UO\({}_{2}\), and all results in the main text are computed using PBE. Figure S13: Calculated (a) lattice parameter and (b) \(<\)111\(>\) oxygen cage distortion in the 3**k** AFM state, using LDA+\(U\)+SOC, PBE+\(U\)+SOC, and PBEsol+\(U\)+SOC, as a function of the Hubbard \(U\). The horizontal dashed lines represent the experimental values of the lattice parameter from Ref. [18] and the distortion from Ref. [19]. Phonon thermal conductivity calculations using GGA (\(U=0\)) It is interesting to explore the thermal conductivity using GGA (i.e., \(U=0\)) in the FM state, despite the fact that it is incorrectly predicted to be a metal (though we still apply LO-TO splitting). In the Supplemental Materials of our previous work [13], the phonon dispersion computed by GGA using \(\hat{\mathbf{S}}_{BZ}=2\hat{\mathbf{i}}\) is reported. Here the supercell is increased to \(\hat{\mathbf{S}}_{BZ}=4\hat{\mathbf{i}}\) (see Fig. S15). Notably, comparing with experiments, the highest acoustic phonon branch is slightly underestimated, and the highest optical branch is largely underestimated and has large oscillations. These anomalies clearly show that \(U=0\) is not adequate to accurately capture the phonons of UO\({}_{2}\). The third order IDs are computed using supercell \(\hat{\mathbf{S}}_{BZ}=\hat{\mathbf{S}}_{O}\), and the thermal conductivity is computed using the RTA with \(\mathcal{S}^{o}_{b}\). Surprisingly, the thermal conductivity computed by GGA is very close to GGA+\(U\) (see Fig. S16). However, a significant difference between GGA and GGA+\(U\) is observed in both the spectral and cumulative thermal conductivity at \(T=300\) K (see Fig. S17). For both \(\mathcal{S}^{o}_{b}\) and \(\mathcal{S}^{o}_{bs}\) with GGA+\(U\), the optical phonons have a substantial contribution, quantified by phonon energies greater than 24 meV, accounting for -30% of the total thermal conductivity, while the corresponding contribution is -50% in GGA. Though the prediction of thermal conductivity using GGA at first seems accurate, it is achieved due to cancelling errors, rather than accurate descriptions of phonons and phonon interactions. Thermal conductivity contribution of electrons and photons In experiments above room temperature, thermal conductivity of \(\text{UO}_{2}\) decreases with increasing temperature until around \(T=2000\)K, then increases until the melting temperature [8]. As phonon thermal conductivity due to phonon-phonon interactions normally only decreases with increasing temperature, the increasing thermal conductivity at high temperature must be produced by other mechanisms. The potentially observable contributions from photon and electron (i.e., polaron, a thermally activated quasiparticle) are suggested [21]. Semi-classical models have been applied to the thermal conductivity of \(\text{UO}_{2}\), accounting for contributions of polarons [22; 23; 24] and photons [23], and their results are collected (see Fig. S18). The total thermal conductivity is presented by Ref. [8], fitting to experiments. Above \(T=2000\) K, Ref. [22; 24] claim the polaron contribution is significant, meanwhile [23] claims the photon contribution is critical. A common understanding has not yet emerged, and modeling beyond the semi-classical theory level is still lacking. Computed irreducible derivatives This section contains all computed irreducible derivatives, and all information required to specify the irreducible derivatives. ### The labels of \(q\) points and their standard displacement basis \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multicolumn{2}{c}{\(\Gamma=(000)\)} & & & & & & & \\ \(\Lambda=\left(\frac{1}{4}00\right)\) & \(\bar{\Lambda}=\left(\frac{3}{4}00\right)\) & \(\Lambda_{1}=\left(0\frac{1}{4}0\right)\) & \(\bar{\Lambda}_{1}=\left(0\frac{3}{4}0\right)\) & \(\Lambda_{2}=\left(00\frac{1}{4}\right)\) & \(\bar{\Lambda}_{2}=\left(00\frac{3}{4}\right)\) \\ \(\Lambda_{3}=\left(\frac{1}{4}\frac{1}{4}\frac{1}{4}\right)\) & \(\bar{\Lambda}_{3}=\left(\frac{3}{4}\frac{3}{4}\frac{3}{4}\right)\) & & & & \\ \(L=\left(\frac{1}{2}00\right)\) & \(L_{1}=\left(0\frac{1}{2}0\right)\) & \(L_{2}=\left(00\frac{1}{2}\right)\) & \(L_{3}=\left(\frac{1}{2}\frac{1}{2}\frac{1}{2}\right)\) & & & \\ \(\Delta=\left(\frac{1}{4}\frac{1}{4}0\right)\) & \(\bar{\Delta}=\left(\frac{3}{4}\frac{3}{4}0\right)\) & \(\Delta_{1}=\left(\frac{1}{4}0\frac{1}{4}\right)\) & \(\bar{\Delta}_{1}=\left(\frac{3}{4}0\frac{3}{4}\right)\) & \(\Delta_{2}=\left(0\frac{1}{4}\frac{1}{4}\right)\) & \(\bar{\Delta}_{2}=\left(0\frac{3}{4}\frac{3}{4}\right)\) \\ \(B=\left(\frac{1}{2}\frac{1}{4}0\right)\) & \(\bar{B}=\left(\frac{1}{2}\frac{1}{4}0\right)\) & \(B_{1}=\left(\frac{1}{4}\frac{1}{2}0\right)\) & \(\bar{B}_{1}=\left(\frac{3}{4}\frac{1}{2}0\right)\) & \(B_{2}=\left(\frac{1}{2}0\frac{1}{4}\right)\) & \(\bar{B}_{2}=\left(\frac{1}{2}0\frac{3}{4}\right)\) \\ \(B_{3}=\left(\frac{3}{4}\frac{1}{4}\frac{1}{4}\right)\) & \(\bar{B}_{3}=\left(\frac{1}{4}\frac{3}{4}\frac{3}{4}\right)\) & \(B_{4}=\left(0\frac{1}{2}\frac{1}{4}\right)\) & \(\bar{B}_{4}=\left(0\frac{1}{2}\frac{3}{4}\right)\) & \(B_{5}=\left(\frac{1}{2}\frac{1}{2}\frac{1}{4}\right)\) & \(\bar{B}_{5}=\left(\frac{1}{2}\frac{1}{2}\frac{3}{4}\right)\) \\ \(B_{6}=\left(\frac{1}{2}\frac{3}{4}\frac{1}{4}\right)\) & \(\bar{B}_{6}=\left(\frac{3}{4}\frac{1}{4}\frac{3}{4}\right)\) & \(B_{7}=\left(\frac{3}{4}\frac{2}{4}\frac{1}{4}\right)\) & \(\bar{B}_{7}=\left(\frac{1}{4}\frac{1}{4}\frac{3}{4}\right)\) & \(B_{8}=\left(\frac{1}{4}0\frac{1}{2}\right)\) & \(\bar{B}_{8}=\left(\frac{2}{0}\frac{1}{2}\right)\) \\ \(B_{9}=\left(0\frac{1}{4}\frac{1}{2}\right)\) & \(\bar{B}_{9}=\left(0\frac{3}{4}\frac{1}{2}\right)\) & \(B_{10}=\left(\frac{1}{2}\frac{1}{4}\frac{1}{2}\right)\) & \(\bar{B}_{10}=\left(\frac{1}{2}\frac{3}{4}\frac{1}{2}\right)\) & \(B_{11}=\left(\frac{1}{4}\frac{1}{2}\frac{1}{2}\right)\) & \(\bar{B}_{11}=\left(\frac{3}{4}\frac{1}{2}\frac{1}{2}\right)\) \\ \(A=\left(\frac{3}{4}\frac{1}{4}0\right)\) & \(\bar{A}=\left(\frac{1}{4}\frac{3}{4}0\right)\) & \(A_{1}=\left(\frac{3}{4}0\frac{1}{4}\right)\) & \(\bar{A}_{1}=\left(\frac{1}{4}0\frac{3}{4}\right)\) & \(A_{2}=\left(\frac{1}{4}\frac{1}{4}\frac{1}{4}\right)\) & \(\bar{A}_{2}=\left(\frac{1}{2}\frac{3}{4}\frac{3}{4}\right)\) \\ \(A_{3}=\left(\frac{1}{4}\frac{1}{2}\frac{1}{4}\right)\) & \(\bar{A}_{3}=\left(\frac{3}{4}\frac{1}{2}\frac{3}{4}\right)\) & \(A_{4}=\left(0\frac{3}{4}\frac{1}{4}\right)\) & \(\bar{A}_{4}=\left(0\frac{1}{4}\frac{3}{4}\right)\) & \(A_{5}=\left(\frac{1}{4}\frac{1}{4}\frac{1}{2}\right)\) & \(\bar{A}_{5}=\left(\frac{3}{4}\frac{3}{4}\frac{1}{2}\right)\) \\ \(X=\left(\frac{1}{2}\frac{1}{2}0\right)\) & \(X_{1}=\left(\frac{1}{2}0\frac{1}{2}\right)\) & \(X_{2}=\left(0\frac{1}{2}\frac{1}{2}\right)\) & \\ \(W=\left(\frac{3}{4}\frac{1}{2}\frac{1}{2}\frac{1}{4}\right)\) & \(\bar{W}=\left(\frac{1}{4}\frac{1}{2}\frac{3}{4}\right)\) & \(W_{1}=\left(\frac{1}{2}\frac{3}{4}\frac{1}{4}\right)\) & \(\bar{W}_{1}=\left(\frac{1}{2}\frac{1}{4}\frac{3}{4}\right)\) & \(W_{2}=\left(\frac{3}{4}\frac{1}{4}\frac{1}{2}\right)\) & \(\bar{W}_{2}=\left(\frac{1}{4}\frac{3}{4}\frac{1}{2}\right)\) \\ \hline \hline \end{tabular} \end{table} Table 10: The labels of \(q\) points. \begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{x}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{z}\) & \(u_{\rm U}^{x}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{z}\) & \(u_{\rm O}^{x}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{z}\) \\ \hline \(u_{\rm T}^{T_{2g},0}\) & 0 & 0 & 0 & 0 & 0 & \(\frac{\sqrt{2}}{2}\) & 0 & 0 & \(-\frac{\sqrt{2}}{2}\) \\ \(u_{\rm T}^{T_{2g},1}\) & 0 & 0 & 0 & \(\frac{\sqrt{2}}{2}\) & 0 & 0 & \(-\frac{\sqrt{2}}{2}\) & 0 & 0 \\ \(u_{\rm T}^{T_{2g},2}\) & 0 & 0 & 0 & \(\frac{\sqrt{2}}{2}\) & 0 & 0 & \(-\frac{\sqrt{2}}{2}\) & 0 \\ \(u_{\rm T}^{T_{1},0}\) & \(-\frac{\sqrt{6}}{3}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}\) & 0 & 0 \\ \(u_{\rm T}^{T_{1},1}\) & 0 & \(-\frac{\sqrt{6}}{3}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}\) & 0 \\ \(u_{\rm T}^{T_{1},2}\) & 0 & 0 & \(-\frac{\sqrt{6}}{3}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}\) \\ \(u_{\rm A}^{A,0}\) & \(-\frac{\sqrt{3}}{3}i\) & \(\frac{\sqrt{3}}{3}i\) & \(\frac{\sqrt{3}}{3}i\) & \(0\) & 0 & 0 & 0 & 0 \\ \(u_{\rm A}^{A_{1},0}\) & 0 & 0 & 0 & \(-\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) \\ \(u_{\rm A}^{A_{1},0}\) & 0 & 0 & \(0-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) \\ \(u_{\rm A}^{E,0}\) & \(\frac{\sqrt{2}}{2}i\) & \(\frac{\sqrt{2}}{2}i\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \(u_{\rm A}^{E,1}\) & \(-\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{6}}{3}i\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \(u_{\rm A}^{E,0}\) & 0 & 0 & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & 0 & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & 0 \\ \(u_{\rm A}^{E,1}\) & 0 & 0 & 0 & \(-\frac{\sqrt{2}}{12}-\frac{\sqrt{2}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{2}}{12}i\) & \(-\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{2}}{12}-\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{2}}{12}i\) & \(-\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{2}}{6}-\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{4}+\frac{\sqrt{2}}{4}i\) & 0 \\ \(u_{\rm A}^{A_{1},1}\) & 0 & 0 & 0 & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & 0 & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & 0 \\ \(u_{\rm A}^{A_{1},0}\) & 0 & 0 & \(0\) & \(\frac{\sqrt{6}}{3}-\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}i\) & 0 & 0 & 0 & 0 & 0 \\ \(u_{\rm A}^{A_{1},0}\) & 0 & 0 & \(0\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) \\ \(u_{\rm A}^{E,0}\) & 0 & 0 & \(0\) & \(-\frac{\sqrt{3}}{2}i-\frac{\sqrt{2}}{12}i\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \(u_{\rm A}^{E,1}\) & \(\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{3}i\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \(u_{\rm A}^{E,0}\) & 0 & 0 & 0 & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & 0 & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & 0 \\ \(u_{\rm A}^{E,1}\) & 0 & 0 & \(-\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) \\ \(u_{\rm A}^{E,0}\) & 0 & 0 & \(\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{6}}{12 \begin{tabular}{l|c c c c c c c c c c} \hline \hline & \(u_{\rm U}^{x}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{z}\) & \(u_{\rm U}^{z}\) & \(u_{\rm 0}^{y}\) & \(u_{\rm 0}^{y}\) & \(u_{\rm 0}^{z}\) & \(u_{\rm 0}^{y}\) & \(u_{\rm 0}^{z}\) \\ \hline \(u_{\rm A_{1}}^{E,0}\) & \(\frac{\sqrt{2}}{2}\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\rm A_{1}}^{E,1}\) & \(-\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{3}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\rm A_{1}}^{E,0}\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{1}}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) \\ \(u_{\rm A_{1}}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{4}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{1}}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) \\ \(u_{\rm A_{2}}^{A,0}\) & \(\frac{\sqrt{3}}{3}i\) & \(\frac{\sqrt{3}}{3}i\) & \(-\frac{\sqrt{3}}{3}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\rm A_{2}}^{A,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) \\ \(u_{\rm A_{2}}^{A,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) \\ \(u_{\rm A_{2}}^{E,0}\) & \(-\frac{\sqrt{2}}{2}i\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\rm A_{2}}^{E,1}\) & \(\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{3}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\rm A_{2}}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{2}}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) \\ \(u_{\rm A_{2}}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{2}}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{2}}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{12}i\) & \(-\frac{\sqrt{2}}{12}+\frac{\sqrt{2}}{12}i\) & \(-\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}-\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) \\ \(u_{\rm A_{2}}^{A,0}\) & \(-\frac{\sqrt{3}}{3}i-\frac{\sqrt{3}}{3}i\) & \(\frac{\sqrt{3}}{3}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\rm A_{2}}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{6}}{6}-\frac{\sqrt{3}}{6}i\) & \(\frac{\sqrt{3}}{6}-\frac{\sqrt{3}}{6}i\) & \(-\frac{\sqrt{3}}{6}+\frac{\sqrt{3}}{6}i\) & \begin{tabular}{|c|c c c c c c c c c c|} \hline \hline & \(u_{\rm U}^{\rm x}\) & \(u_{\rm U}^{\rm y}\) & \(u_{\rm U}^{\rm z}\) & \(u_{\rm U}^{\rm z}\) & \(u_{\rm U}^{\rm z}\) & \(u_{\rm U}^{\rm y}\) & \(u_{\rm U}^{\rm z}\) & \(u_{\rm U}^{\rm z}\) & \(u_{\rm U}^{\rm z}\) \\ \hline \(u_{\rm A_{3}}^{{}_{E},0}\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{3}}^{{}_{E},1}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{12}-\frac{\sqrt{2}}{12}i\) & \(\frac{\sqrt{2}}{12}-\frac{\sqrt{2}}{12}i\) & \(-\frac{\sqrt{6}}{6}+\frac{\sqrt{6}}{6}i\) & \(-\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) \\ \(u_{\rm A_{3}}^{{}_{E},0}\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) \\ \(u_{\rm A_{3}}^{{}_{E},1}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{2}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(\frac{\sqrt{6}}{12}+\frac{\sqrt{6}}{12}i\) & \(-\frac{\sqrt{6}}{6}-\frac{\sqrt{6}}{6}i\) \\ \(u_{L}^{{}_{E},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) \\ \(u_{L}^{{}_{E},0}\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(0\) \\ \(u_{L}^{{}_{E},1}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{3}}{6}\) & \(\frac{\sqrt{3}}{3}\) & \(-\frac{\sqrt{3}}{6}\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{3}\) \\ \(u_{L}^{{}_{2},0}\) & \(-\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{3}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{L}^{{}_{E},0}\) & \(0\) & \(0\) & \(\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) \\ \(u_{L}^{{}_{E},0}\) & \(\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{3}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{L}^{{}_{E},u}\) & \(\frac{\sqrt{2}}{2}\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{L}^{{}_{E},u}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{6}\) & \(\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{3}\) \\ \(u_{L}^{{}_{2},u}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{6}\) \\ \(u_{L}^{{}_{E},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) \\ \(u_{L}^{{}_{E},1}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) \\ \(u_{L}^{{}_{E},1}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{3}\) \\ \(u_{L}^{{}_{2},u}\) & \(\frac{\sqrt{3}}{3}\) & \(-\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{3}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{L}^{{}_{E},u}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{6}}{3}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{L}^{{}_{E},u}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{6}}{6}\) & \(\frac{\sqrt{3}}{6}\) & \(\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{3}\) & \(-\frac{\sqrt{3}}{3}\) \\ \(u_{L}^{{}_{E},u}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) \\ \(u_{L}^{{}_{E},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) \\ \(u_{L}^{{}_{E},1}\) & \(-\frac{\sqrt{2}}{2}\) & \(-\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{ \begin{tabular}{|c|c c c c c c c c c|} \hline \hline & \(u_{\bar{U}}^{\pm}\) & \(u_{U}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) & \(u_{\bar{U}}^{\pm}\) \\ \hline \(u_{L_{3}}^{\bar{E}_{u},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{6}\) & \(\frac{\sqrt{3}}{3}\) & \(\frac{\sqrt{3}}{6}\) & \(\frac{\sqrt{3}}{6}\) & \(-\frac{\sqrt{3}}{3}\) \\ \(u_{L_{3}}^{\bar{E}_{u},1}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) \\ \(u_{\Delta}^{A_{1},0}\) & \(0\) & \(0\) & \(1i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta}^{E,0}\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta}^{E,1}\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) \\ \(u_{\Delta}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta}^{E,1}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) \\ \(u_{\Delta}^{A_{1},0}\) & \(0\) & \(0\) & \(-1i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta}^{E,0}\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta}^{E,1}\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{1}}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{1}}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{1}}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{1}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{1}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta_{1}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta_{1}}^{E,0}\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{1}}^{E,0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) \\ \(u_{\Delta_{1}}^{E,2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta_{1}}^{E,2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{x}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{z}\) & \(u_{\rm U}^{x}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{z}\) & \(u_{\rm O}^{z}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{z}\) \\ \hline \(u_{\Delta_{2}}^{A_{1},0}\) & \(1i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{A_{1},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},1}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(-1i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},1}\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},1}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{\Delta_{2}}^{B_{2},2}\) & \(0\) & \(0\) & \(0\) & \ \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{\pi}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{\tilde{\gamma}}\) & \(u_{\rm O}^{\tilde{\gamma}}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{\tilde{\gamma}}\) & \(u_{\rm O}^{\tilde{\gamma}}\) & \(u_{\rm O}^{\tilde{\gamma}}\) \\ \hline \({}^{2}\)\({}^{2}\)\ \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{\pi}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{\tilde{\gamma}}\) & \(u_{\rm O}^{\tilde{\gamma}}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{\tilde{\gamma}}\) & \(u_{\rm O}^{\tilde{\gamma}}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{\tilde{\gamma}}\) \\ \hline \({}^{3}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{\pi}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{\tilde{\rm U}}\) & \(u_{\rm O}^{\tilde{\rm U}}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{\tilde{\rm U}}\) & \(u_{\rm O}^{\tilde{\rm U}}\) & \(u_{\rm O}^{\tilde{\rm U}}\) \\ \hline \({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({}^{4}\)\({4}\ \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{\pi}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{\tilde{\rm U}}\) & \(u_{\rm O}^{\tilde{\rm U}}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{\tilde{\rm U}}\) & \(u_{\rm O}^{\tilde{\rm U}}\) & \(u_{\rm O}^{\tilde{\rm U}}\) \\ \hline \({}^{5}\)\({}^{ \begin{tabular}{|c|c c c c c c c c c c|} \hline \hline & \(u_{\rm U}^{*}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{\dagger}\) & \(u_{\rm O}^{*}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{*}\) & \(u_{\rm O}^{*}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{*}\) \\ \hline \({}^{u}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \begin{tabular}{c|c c c c c c c c c c} \hline \hline & \(u_{\mathbb{U}}^{a}\) & \(u_{U}^{y}\) & \(u_{U}^{\widehat{\imath}}\) & \(u_{\mathbb{O}}^{\widehat{\imath}}\) & \(u_{\mathbb{O}}^{y}\) & \(u_{\mathbb{O}}^{\widehat{\imath}}\) & \(u_{\mathbb{O}}^{\widehat{\imath}}\) & \(u_{\mathbb{O}}^{y}\) & \(u_{\mathbb{O}}^{\widehat{\imath}}\) \\ \hline \({}^{\imath}\!{}^{\!B}_{B,0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \({}^{\!B}_{B,1}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \({}^{\!A}_{B,1}\) & \(-1i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!A}_{B,1}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) \\ \({}^{\!A}_{B,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) \\ \({}^{\!A}_{B,1}\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!A}_{B,1}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \({}^{\!A}_{B,1}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) \\ \({}^{\!B}_{B,0}\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!B}_{B,0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) \\ \({}^{\!A}_{B,0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) \\ \({}^{\!A}_{A}\) & \(\frac{\sqrt{2}}{2}i\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}i\) & \(\frac{1}{2}i\) & \(0\) & \(-\frac{1}{2}i\) & \(\frac{1}{2}i\) & \(0\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) & \(\frac{1}{2}i\) & \(0\) \\ \({}^{\!A}_{A}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) \\ \({}^{\!B}_{B}\) & \(0\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \({}^{\!B}_{B}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}i\) & \(\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) \\ \({}^{\!A}_{A}\) & \(-\frac{\sqrt{2}}{2}i\) & \(-\frac{\sqrt{2}}{2}i\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \ \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{x}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{z}\) & \(u_{\rm U}^{z}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{z}\) & \(u_{\rm O}^{z}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{z}\) \\ \hline \(u_{A_{1}}^{\mbox{\tiny B}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}i\) & \(0\) & \(-\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) & \(-\frac{1}{2}i\) \\ \(u_{A_{1}}^{\mbox{\tiny A}_{1},0}\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{1}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}\) & \(0\) \\ \(u_{A_{1}}^{\mbox{\tiny A}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) \\ \(u_{A_{1}}^{\mbox{\tiny A}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) & \(-\frac{1}{2}i\) & \(0\) & \(-\frac{1}{2}i\) \\ \(u_{A_{1}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}\) & \(0\) \\ \(u_{A_{1}}^{\mbox{\tiny B}_{1},0}\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{1}}^{\mbox{\tiny B}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}i\) & \(0\) & \(\frac{1}{2}i\) & \(\frac{1}{2}i\) & \(0\) & \(-\frac{1}{2}i\) \\ \(u_{A_{1}}^{\mbox{\tiny B}_{2},0}\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \(u_{A_{2}}^{\mbox{\tiny A}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{1},0}\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{2},0}\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \(u_{A_{2}}^{\mbox{\tiny A}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{1},0}\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{2},0}\) & \(0\) & \(-\frac{\sqrt{2}}{2}i\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{2}}^{\mbox{\tiny B}_{2},0}\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(0\) & \(-\frac{1}{2}\) \\ \(u_{A_{3}}^{\mbox{\tiny A}_{1},0}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{2}i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(u_{A_{3}}^{\mbox{\tiny B}_{1},0}\) & \(0\) & \(0\) & \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{a}\) & \(u_{\rm U}^{y}\) & \(u_{\rm U}^{\uparrow}\) & \(u_{\rm O}^{\uparrow}\) & \(u_{\rm O}^{y}\) & \(u_{\rm O}^{\uparrow}\) & \(u_{\rm O}^{\uparrow}\) & \(u_{\rm O}^{\uparrow}\) & \(u_{\rm O}^{\uparrow}\) \\ \hline \({}^{i}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \begin{tabular}{|c|c c c c c c c c c c|} \hline \hline & \(u_{\rm U}^{\pm}\) & \(u_{\rm U}^{\pm}\) & \(u_{\rm U}^{\pm}\) & \(u_{\rm U}^{\pm}\) & \(u_{\rm O}^{\pm}\) & \(u_{\rm O}^{\pm}\) & \(u_{\rm O}^{\pm}\) & \(u_{\rm O}^{\pm}\) & \(u_{\rm O}^{\pm}\) \\ \hline \({}^{2}\)\({}^{2 \begin{tabular}{c|c c c c c c c c c} \hline \hline & \(u_{\rm U}^{\tilde{u}}\) & \(u_{\rm U}^{\tilde{u}}\) & \(u_{\rm U}^{\tilde{u}}\) & \(u_{\rm O}^{\tilde{u}}\) & \(u_{\rm O}^{\tilde{u}}\) & \(u_{\rm O}^{\tilde{u}}\) & \(u_{\rm O}^{\tilde{u}}\) & \(u_{\rm O}^{\tilde{u}}\) \\ \hline \(u_{W}^{B,0}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(u_{W}^{E,0}\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\begin{array}{c}e,2\\ u_{W}^{\tilde{u}}\end{array}\) & \(0\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\begin{array}{c}u_{W}^{\tilde{u}},0\\ u_{W}^{\tilde{u}}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(\begin{array}{c}u_{W}^{E,2}\\ u_{W}^{A_{1},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(\begin{array}{c}u_{W}^{A_{1},0}\\ u_{W}^{B_{1},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{E,0}\\ u_{W}^{E,2}\end{array}\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\begin{array}{c}u_{W}^{E,2}\\ u_{W}^{E,0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}-\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}-\frac{1}{2}i\) & \(0\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},2}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{1}{2}+\frac{1}{2}i\) & \(0\) & \(0\) & \(\frac{1}{2}+\frac{1}{2}i\) & \(0\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{B_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{A_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{A_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(0\) & \(\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) & \(-\frac{\sqrt{2}}{4}+\frac{\sqrt{2}}{4}i\) \\ \(\begin{array}{c}u_{W}^{A_{2},0}\\ u_{W}^{A_{2},0}\end{array}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & ### The irreducible derivatives computed at \(a=5.5461\) A, corresponding to \(T=0\) K ### The irreducible derivatives computed at \(a=5.5461\) A, corresponding to \(T=0\) K \begin{table} \begin{tabular}{l|c|c c c|c c c} \hline \hline \multicolumn{1}{c|}{Derivative Value} & \multicolumn{1}{c|}{Derivative Value} & \multicolumn{1}{c|}{Derivative Value} & \multicolumn{1}{c}{Derivative Value} & \multicolumn{1}{c}{Derivative Value} \\ \hline \(d_{\Gamma\Gamma}^{{}^{2}g_{T2g}}\) & \(11.639047\) & \(d_{\Gamma\Gamma}^{{}^{2}t_{\Gamma}}\)\({}^{1u}\) & \(10.913052\) & \(d_{LL}^{A_{1g}A_{1g}}\) & \(16.114994\) & \(d_{LL}^{E_{g}E_{g}}\) & \(8.693366\) \\ \(d_{LL}^{A_{2u}A_{2u}}\) & \(30.499107\) & \(d_{LL}^{A_{2u}A_{2u}}\) & \(6.627570\) & \(d_{LL}^{A_{2u}A_{2u}}\) & \(8.720557\) & \(d_{LL}^{E_{g}E_{g}}\) & \(7.929498\) \\ \(d_{LL}^{E_{g}E_{u}}\) & \(-2.621590\) & \(d_{LL}^{E_{g}E_{u}}\) & \(6.885507\) & \(d_{\Delta\Delta}^{A_{1}A_{1}}\) & \(33.905151\) & \(d_{\Delta\Delta}^{A_{1}A_{1}}\) & \(14.878378\) \\ \(d_{\Delta\Delta}^{A_{1}A_{1}}\) & \(17.470587\) & \(d_{\Delta\Delta}^{B_{2}B_{2}}\) & \(6.978432\) & \(d_{\Delta\Delta}^{EE}\) & \(8.300181\) & \(d_{\Delta\Delta}^{EE}\) & \(-3.576226\) \\ \(d_{\Delta\Delta}^{B_{2}E}\) & \(0.337451\) & \(d_{\Delta\Delta}^{E_{2}E}\) & \(3.552432\) & \(d_{\Delta\Delta}^{E_{2}E}\) & \(0.878305\) & \(d_{\Delta\Delta}^{E_{2}E}\) & \(11.411557\) \\ \(d_{AA}^{A_{1g}A_{1}}\) & \(26.599985\) & \(d_{AA}^{A_{1}A_{1}}\) & \(-2.311048\) & \(d_{AA}^{A_{1}A_{1}}\) & \(9.091444\) & \(d_{AA}^{A_{1}A_{1}}\) & \(13.935454\) \\ \(d_{AA}^{A_{1}A_{1}A_{1}}\) & \(-2.689506\) & \(d_{AA}^{A_{1}A_{1}}\) & \(14.939225\) & \(d_{AA}^{A_{2}A_{2}}\) & \(7.844466\) & \(d_{AA}^{B_{1}B_{1}}\) & \(3.090018\) \\ \(d_{AA}^{B_{1}B_{2}}\) & \(-2.629364\) & \(d_{AA}^{B_{1}B_{2}}\) & \(0.40138\) & \(d_{AA}^{B_{1}B_{1}}\) & \(14.404513\) & \(d_{AA}^{B_{1}B_{2}}\) & \(-1.423090\) \\ \(d_{AA}^{B_{1}B_{2}B_{1}}\) & \(7.050796\) & \(d_{AA}^{B_{2}B_{2}}\) & \(7.918148\) & \(d_{AA}^{B_{2}B_{2}}\) & \(-2.591161\) & \(d_{AA}^{B_{2}B_{2}}\) & \(7.804500\) \\ \(d_{XX}^{A_{1g}A_{1g}}\) & \(19.735607\) & \(d_{XX}^{E_{g}E_{g}}\) & \(3.701554\) & \(d_{XX}^{A_{2u}A_{2u}}\) & \(37.251911\) & \(d_{XX}^{B_{1g}B_{1u}}\) & \(1.554955\) \\ \(d_{XX}^{E_{g}E_{u}}\) & \(11.376504\) & \(d_{XX}^{E_{g}E_{u}}\) & \(-0.244553\) & \(d_{XX}^{E_{g}E_{u}}\) & \(9.402084\) & \(d_{WW}^{A_{1}A_{1}}\) & \(18.742481\) \\ \(d_{WW}^{B_{1}B_{1}}\) & \(9.527885\) & \(d_{WW}^{B_{1}B_{1}}\) & \(-0.187366\) & \(d_{WW}^{B_{1}B_{1}}\) & \(12.580827\) & \(d_{WW}^{A_{2}A_{2}}\) & \(2.210993\) \\ \(d_{WW}^{B_{2}B_{2}}\) & \(2.952647\) & \(d_{WW}^{E_{g}E}\) & \(21.581551\) & \(d_{WW}^{E_{g}E}\) & \(1.463814\) & \(d_{WW}^{E_{g}E}\) & \(7.858648\) \\ \hline \hline \end{tabular} \end{table} Table 10: The matrix \(d_{\Gamma\Gamma}^{{}^{2}g_{T2g}}\) and \(d_{\Gamma\Gamma}^{{}^{2}g_{T2g}}\) for \(d_{\Gamma \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Value \\ \hline \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-3.463657i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-20.947786\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(21.622804i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-6.655611\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(16.447136i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-5.080140\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(-10.954692i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(4.202561i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-2.588961\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-4.36469\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-20.523154i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(8.508160i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(4.815259i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-2.011553\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-7.266283i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(19.585752\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-9.181684i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(3.419240\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-16.407615i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(13.237104i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(19.057483\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-10.398067i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-11.266406i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-0.665866\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(3.680827i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-17.994006\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-0.485522i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-24.803886\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(0.927493i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-3.921297i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}}\) & \(-10.918285\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-11.110287i\) \\ \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(17.757510i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-1.987101i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(0.061393i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{2}}\) & \(19.323545\) \\ \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(-9.123476i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{2}}\) & \(-14.843513\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-2.250924\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(10.997253\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(8.437978i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-4.679397\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(10.516970i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(9.280641i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-7.264566i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-3.140319\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(9.620465\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(4.906092i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-13.814091i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(10.229019i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(10.579265\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(1.611802i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.375595i\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(-16.897529\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(11.458700\) & \(d_{\Gamma AA}^{T_{1u}B_{2}B_{2}}\) & \(-0.180492i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-9.844975\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{2}}\) & \(-18.121684\) & \(d_{\Gamma XX}^{T_{1u}A_{1}B_{2}}\) & \(31.928014\) & \(d_{\Gamma XX}^{T_{1u}A_{1}B_{2}}\) & \(-27.367393\) \\ \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(28.391480\) & \(d_{\Gamma XX}^{T_{2g}E_{g}B_{2}}\) & \(20.257494\) & \(d_{\Gamma XX}^{T_{1u}E_{g}A_{2}}\) & \(31.715331\) & \(d_{\Gamma XX}^{T_{1u}B_{1u}E_{g}}\) & \(-15.085820\) \\ \(d_{\Gamma XX}^{T_{1u}E_{g}E_{u}}\) & \(-15.834390\) & \(d_{\Gamma XX}^{T_{1u}E_{g}}\) & \(16.673865\) & \(d_{\Gamma XX}^{T_{2g}B_{1}B_{2}}\) & \(26.472622\) & \(d_{\Gamma XX}^{T_{2g}A_{2u}}\) & \(13.357311\) \\ \(d_{\Gamma XX}^{T_{2g}2A_{2u}E_{u}}\) & \(-27.989037\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(-10.423603\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(12.293724\) & \(d_{\Gamma XX}^{T_{2g} \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(5.113335i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(6.755321i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(4.595433i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(0.173537i\) \\ \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(3.953460i\) & \(d_{XAA}^{E_{a}B_{2}\cdotp_{1}}\) & \(1.331933i\) & \(d_{XAA}^{E_{a}B_{2}\cdotp_{2}}\) & \(0.885242i\) & \(d_{XAA}^{E_{a}B_{2}}\) & \(-10.186677i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-2.111071i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(1.971823i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-10.848286i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(7.219042i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-5.160860i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(10.255986i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(7.534326i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-10.620938i\) \\ \(d_{XAA}^{E_{a}A_{2}\cdotp_{1}}\) & \(-8.576110i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.434678i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(4.152471i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(27.049938i\) \\ \(d_{XAA}^{E_{a}A_{1}A_{1}A_{1}}\) & \(-5.698692i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.446993i\) & \(d_{XAA}^{E_{a}A_{1}A_{2}}\) & \(-2.480596i\) & \(d_{XAA}^{E_{a}A_{2}A_{2}}\) & \(-1.258621i\) \\ \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(0.080903i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-8.191550i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(-7.417263i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-20.897227i\) \\ \(d_{XAA}^{E_{a}\cdotp_{1}}\) & \(-1.154407i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(1.681571i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(3.386078i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(2.439345i\) \\ \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(0.238193i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(12.472326i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-4.790784i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(12.774035i\) \\ \(d_{XAA}^{E_{a}\cdotp_{1}A_{1}B_{2}}\) & \(-9.453377i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-0.466996i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-2.800439i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-7.952841i\) \\ \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(13.895949i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-1.591580i\) & \(d_{XAA}^{A_{1}A_{1}}\) & \(-4.107359\) & \(d_{AAA}^{A_{1}A_{1}A_{1}}\) & \(2.397484\) \\ \(d_{A_{2}A_{1}A_{2}}^{A_{1}A_{2}}\) & \(21.194093\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-0.978726i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(-7.814461i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(11.631521i\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(1.918786\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{A_{1}A_{1}}\) & \(-9.197783\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}}\) & \(1.610455i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(3.125753i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(-9.540155i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(-11.949361i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(12.484410\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{2}}\) & \(-9.879928\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-0.477049\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}}\) & \(0.761103i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-12.478835i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(11.259918i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}B_{1}}\) & \(1.172326i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(2.440350\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(-0.075983\) & \(d_{A_{2}A_{1}A_{2}A_{2}}^{A_{1}A_{2}A_{2}}\) & \(-3.704834\) \\ \(d_{A_{2}A_{1}A_{2}}^{A_{1}A_{2}B_{1}}\) & \(1.376612\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{A_{2}A_{1}}\) & \(1.229324\) & \(d_{A_{2}A_{1}A_{1}}^{A_{2}B_{1}}\) & \(1.0933201\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{A_{1}A_{2}B_{2}}\) & \(16.006692i\) \\ \(d_{A_{2}A_{ \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}}\) & \(4.579291i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{1}}\) & \(-9.426034i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-5.407204i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-8.555600\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(6.655125\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(4.151161i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(8.285432\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-5.063695\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-0.219170\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-0.875688\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(0.147463i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(6.929371i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.1121i2i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(-4.66058i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(-5.397875i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(4.176724i\) \\ \(d_{\tilde{B}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(1.575566\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-0.271263\) & \(d_{\tilde{B}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.038049i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(1.382931\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-1.912163\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-1.515589i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{1}B_{2}}\) & \(0.890824i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{1}B_{2}}\) & \(-2.292044i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}}\) & \(3.594706i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{1}B_{1}}\) & \(-0.260768i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{1}B_{2}}\) & \(5.55557i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{1}B_{2}}\) & \(9.379461\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-5.667645\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{2}B_{2}}\) & \(-6.540000i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(3.201716i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}}^{B_{1}B_{1}B_{2}}\) & \(-2.383053i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}B_{1}}\) & \(8.253432i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{2}B_{2}}\) & \(7.527234i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{1}}^{B_{1}B_{2}B_{2}}\) & \(-5.024228i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{2}}^{B_{1}B_{2}B_{2}}\) & \(3.966840i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(4.14^{A_{1}A_{1}A_{1}}_{A_{2}A_{3}}\) & \(10.937174\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(0.509508\) & \(d_{\tilde{A}_{2}A_{2}A_{3}}^{A_{1}A_{1}A_{1}A_{1}}\) & \(-24.015816i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(-11.631159\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(4.399322\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(8.572059i\) & \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(-10.723570i\) \\ \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(10.365197i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(0.398845i\) & \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}B_{1}A_{1}}\) & \(-0.121381\) & \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(-1.110909\) \\ \(d_{\tilde{A}_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(1.1982856\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}A_{1}A_{1}A_{1}}\) & \(-4.187226i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(0.506711i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(-13.340649i\) \\ \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}B_{1}A_{1}}\) & \(-10.759095i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(-20.229261\) & \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(13.942787\) & \(d_{\tilde{A}_{2}A_{3}}^{A_{2}A_{1}}\) & \(-0 \begin{tabular}{l l|l l|l l|l l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{A_{2}A_{3}}^{B_{2}B_{2}A_{2}}\) & \(7.988640i\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}A_{2}}\) & \(-6.451660i\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-7.488164i\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-3.931738\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(2.179696\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(5.422574\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-4.144351\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(0.508823i\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-5.379193\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(3.467506\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(-0.513533i\) & \(d_{A_{2}A_{3}}^{B_{2}B_{2}B_{1}}\) & \(0.123179i\) \\ \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-1.087139\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(0.944735\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-1.170888\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(25.369338\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(25.42_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-9.014408i\) & \(d_{A_{1}A_{3}}^{A_{2}A_{1}}\) & \(-19.821463i\) & \(d_{A_{2}A_{3}}^{A_{2}A_{1}}\) & \(-6.117837i\) \\ \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(-1.521953i\) & \(d_{A_{2}A_{3}}^{A_{2}A_{1}}\) & \(-3.083800i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{2}A_{1}}\) & \(-19.228616\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(3.632608\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(3.632608\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-2.754358\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(1.002149\) & \(d_{A_{1}A_{3}}^{B_{3}B_{ \begin{table} \begin{tabular}{l l l l|l l|l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{X_{1}A_{5}}^{2\text{a}_{2a}\cdot\text{B}_{1}B_{1}}\) & \(-3.214671i\) & \(d_{X_{1}A_{5}}^{2\text{a}_{2a}\cdot\text{B}_{1}B_{1}}\) & \(0.603258i\) & \(d_{X_{1}A_{5}}^{B_{1}B_{2}B_{1}}\) & \(0.621423\) & \(d_{X_{1}A_{5}}^{B_{1}B_{2}B_{1}}\) & \(-3.260072\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-12.030706\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-7.096928\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(11.652491i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-9.456773i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-0.636696i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-2.909157i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.191878\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(7.055560\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.332471i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.802955i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-4.367296\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.559088\) \\ \(d_{X_{1}A_{5}}^{A_{1}F_{9}B_{1}B_{1}}\) & \(23.751215\) & \(d_{X_{1}A_{5}}^{A_{1}F_{9}B_{1}B_{1}}\) & \(8.750319\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-14.692094i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(13.252132i\) \\ \(d_{X_{1}A_{5}}^{F_{1}B_{2}B_{1}}\) & \(4.094737\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-1.595549\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(5.636494\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-10.523491i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(7.615138i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-8.143930i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(1.776310\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-1.686488\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(15.322701i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-14.305635\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.869131\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(1.778279\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(19.317670i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-8.326033\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(6.313325\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.272378i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(2.244521i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-7.951860\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(6.199552\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(10.416179\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-6.931145\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(2.411835\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-3.915082i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-1.999352i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-0.377999\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-0.459073i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(0.662809i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-8.139616\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(4.441519i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-21.566980\) & \(d_{X_{2}A_{1}X_{5}}^{F_{9}B_{2}B_{1}}\) & \(18.944329\) \\ \(d_{X_{2}X_{1}X_{5}}^{F_{9}B_{2}B_{1}}\) & \(18.637183\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-24.379249\) & \(d_{X_{2}X_{1}X_{5}}^{F_{9}B_{1}B_{1}}\) & \(-16.917982\) & \(d_{X_{2}A_{1}X_{5}}^{F_{9}B_{2}B_{1}}\) & \(-14.278758\) \\ \(d_{X_{2}X_{1}X_{5}}^{A_{1}F_{9}B_{2}}\) & \(20.600551\) & \(d_{X_{2}X_{1}X_{5}}^{A_{1}F_{9}B_{2}}\) & \(-27.042913\) & \(d_{X_{2}X_{1}X_{5}}^{F_{9}B_{2}B_{2}}\) & \(7.961135\) & \(d_{X_{2}X_{1}X_{5}}^{F_{9}B_{2}B_{1}}\) & \(-21.096092\) \\ \(d_{X_{2}X_{1}X_{5}}^{F_{9}B_{1}}\) & \(-9.990545\) & \(d_{X_{2}X_{3}X_{1}X_{5}}^{F_{9}B_{1}B_{2}}\) & \(11.733456\) & \(d_{X_{2}X_{1}X_{5}}^{F_{9}A_{2}A_{2 \begin{tabular}{l l|l l l|l l|l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(81.992626\) & \(1^{d_{T_{2g}T_{2g}E_{u}E_{u}}}_{TXXX}\) & \(-34.989287\) & \(2^{d_{T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXX}\) & \(-31.565045\) & \(3^{d_{T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-81.741624\) \\ \(d_{\Gamma TXX}^{T_{1g}T_{2g}E_{u}E_{u}}\) & \(46.278777\) & \(1^{d_{T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(71.433673\) & \(2^{d_{T_{1g}T_{1g}E_{u}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-6.946780\) & \(3^{d_{T_{1g}T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-43.60946\) & \(1^{d_{T_{2g}T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-40.607340\) & \(2^{d_{T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-14.698645\) & \(3^{d_{T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-7.230084\) \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-93.276920\) & \(1^{d_{T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-77.790659\) & \(2^{d_{T_{1g}T_{1g}E_{u}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-105.887359\) & \(3^{d_{T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-171.433874\) \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(38.747797\) & \(1^{d_{T_{2g}T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-0.780419\) & \(2^{d_{T_{2g}T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(13.308978\) & \(3^{d_{T_{2g}T_{2g}T_{2g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(115.808376\) \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(120.090413\) & \(1^{d_{T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(-79.872816\) & \(2^{d_{T_{1g}T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(188.210124\) & \(3^{d_{T_{1g}T_{1g}E_{u}E_{u}}}_{\rm ITXXX}\) & \(638.594224\) \\ \(d_{T_{2g}A_{1g}A_{1g}E_{g}}\) & \(-33.796895\) & \(d_{\Gamma TX_{2}X_{1}X}^{T_{1g}A_{1g}A_{1g}A_{2u}}\) & \(-69.609242\) & \(d_{T_{1g}A_{1g}A_{1g}A_{1g}}^{2}\) & \(39.713322\) & \(d_{T_{2g}A_{1g}A_{1g}}^{T_{1u}A_{1g}A_{1g}E_{u}}\) & \(-45.740071\) \\ \(d_{T_{2g}A_{1g}E_{g}}^{T_{2g}A_{1g}E_{g}}\) & \(8.997380\) & \(1^{d_{T_{2g}A_{1g}E_{g}E_{g}}}_{\rm TXX,XXX}\) & \(-7.720726\) & \(d_{T_{1g}A_{1g}E_{g}A_{2u}}^{T_{1g}A_{1g}E_{g}}\) & \(-88.095048\) & \(1^{d_{T_{1g}A_{1g}E_{g}E_{1u}}}_{\rm ITXXXX}\) & \(-41.426988\) \\ \(d_{T_{2g}A_{1g}E_{g}}^{T_{1u}A_{1g}E_{g}B_{1u}}\) & \(38.984515\) & \(d_{\Gamma TX_{2}X_{1}X}^{T_{1g}A_{1g}E_{g}E_{u}}\) & \(37.764357\) & \(1^{d_{T_{1g}A_{1g}E_{g}E_{g}}^{T_{1g}E_{u}}}_{\rm ITX_{2}X_{1}XX}\) & \(-32.601862\) & \(2^{d_{T_{1g}A_{1g}E_{g}E_{g}}^{T_{1g}A_{1g}E_{u}}}_{\rm ITX_{2}X_{1}XX}\) & \(22.785809\) \\ \(d_{T_{2g}A_{1g}E_{g}}^{T_{1g}A_{1g}E_{g}}\) & \(-71.168301\) & \(d_{\Gamma TX_{2}X_{1}X}^{T_{1g}A_{1g}A_{1g}E_{g}}^{2}\) & \(47.008853\) & \(2^{d_{T_{1g}A_{1g}E_{g}E_{g}}^{2}}_{\rm ITX_{2}X_{1}X}\) & \(41.140229\) & \(d_{T_{2g}A_{1g}A_{2u}A_{2u}}^{T_{2g}A_{1g}A_{2u}}\) & \(36.107797\) \\ \(d_{T_{2g}A_{1g}A_{2u}E_{1u}}^{2}\) & \(-48.937211\) & \(d_{T_{2g}A_{1g}A_{2u}E_{u}}^{2}\) & \(-39.278723\) & \(d_{T_{2g}A_{1g}A_{2u}E_{u}}^{2}\) & \(62.614130\) & \(d_{T_{2g}A_{1g}A_{2u}E_{u}}^{2}\) & \(9.232504\) \\ \(d_{T_{2g}A_{1g}A_{1g}A_{1g}E_{u}}^{2}\) & \(-1.338691\) & \(d_{T_{2g}A_{1g}A_{1g}E_{u}}^{2}\) & \(-12.485291\) & \(d_{T_{2g}A_{1g}A_{1g}E_{1u}E_{u}}^{2}\) & \(-6.936718\) & \(1^{d_{T_{2g}A_{1g}E_{g}E_{g}E_{u}}^{2}}_{\rm ITX_{2}X_{1}X}\) & \(13.273570\) \\ \(d_{T_{2g}A_{1g}A_{1g}E_{u}}^{2}\) & \(100.788247\) & \(d_{T_{2g}A_{1g}A_{1g}E_{u}}^{2}\) & \(-32.681471\) & \(1^{d_{T_{2g}A_{1g}A_{1g}E_{u}}^{2}}_{\rm ITX_{2}X_{1}X}\) & \(-21.367906\) & \(2^{d_{T_{2g}A_{1g}E_{g}E_{u}}^{2}}_{\rm ITX_{2}X_{1}X}\) & \(-29.685010\) \\ \(d_{T_{2g}A_{1g}A_{2g}E_{g}}^{2}\) & \(55.359119\) & \(1^{d_{T_{2g}A_{1g}A_{2g}A_{1g}E_{u}}^{2}}_{\rm TXX_{2}X_{1}X}\) & \(41.365281\) & \(1^{d_{T_{2g}E_{g}E_{g}E_{g}E_{g}}^{2}}_{\rm TX_{2}X_{1}X}\) & \(42.275280\) & \begin{tabular}{l l l|l l|l l|l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value & \\ \hline \({}^{2}d^{F_{E}E_{E}E_{u}E_{u}}_{XXXX}\) & \(-22.677407\) & \(d^{E_{E}E_{E}E_{u}}_{XXXX}\) & \(-31.870956\) & \({}^{1}d^{F_{E}E_{E}E_{u}}_{XXXX}\) & \(-28.865489\) & \({}^{2}d^{F_{E}E_{E}E_{u}}_{XXXX}\) & \(23.561785\) \\ \(d^{E_{E}E_{E}E_{u}E_{u}}_{XXXX}\) & \(70.001674\) & \({}^{1}d^{F_{E}E_{E}E_{u}}_{XXXX}\) & \(-28.555125\) & \({}^{2}d^{F_{E}E_{E}E_{u}E_{u}}_{XXXX}\) & \(-143.815512\) & \(d^{A_{E}E_{E}E_{u}}_{XXXX}\) & \(-143.815512\) & \(d^{A_{E}E_{E}E_{u}}_{XXXX}\) \\ \(d^{B_{1}B_{1}B_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \(-10.018271\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \(-94.771124\) & \({}^{2a_{2}}_{XXXX}\) \\ \(d^{B_{1}B_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXXXX}\) & \({}^{2a_{2}}_ ### The irreducible derivatives computed at \(a=5.5602\) A, corresponding to \(T=360\) K \begin{table} \begin{tabular}{l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma\Gamma}^{{}^{2}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{ B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{ X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^ \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-2.846732i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-20.277643\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(20.882804i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-7.387762\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(15.518296i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-3.488960\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(-11.050003i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(4.429284i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-2.226293\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-4.029561\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-19.684185i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(8.348201i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(4.599375i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-1.823212\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-6.577345i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(19.949119\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}B_{1}}\) & \(-8.649491i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(3.530474\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-15.624527i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(12.577997i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(18.409799\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(-10.628412i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-10.749300i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-1.215483\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(3.785480i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-17.604878\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-0.502118i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-24.824612\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(1.038515i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-4.007112i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}}\) & \(-9.490329\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-10.494218i\) \\ \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(16.931094i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-15.57345i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(0.475407i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{2}}\) & \(18.360474\) \\ \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(-9.285945i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{2}}\) & \(-13.863714\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-1.755579\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(10.923575\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(10.923575\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-4.538825\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(9.972223i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.883439i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-6.652976i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-2.463514\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(8.310475\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(4.792810i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-13.147720i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(12.09371i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(10.588215\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(1.615871i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.245801i\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(-15.941523\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(11.089347\) & \(d_{\Gamma AA}^{T_{1u}B_{2}B_{2}}\) & \(-0.121430i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-9.5538800i\) & \(d_{\Gamma XX}^{T_{2g}A_{1}B_{2}}\) & \(-17.285696\) & \(d_{\Gamma XX}^{T_{1u}A_{1}B_{2}}\) & \(31.010597\) & \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(-25.864761\) \\ \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(27.329659\) & \(d_{\Gamma XX}^{T_{2g}E_{2}B_{2}}\) & \(21.386661\) & \(d_{\Gamma XX}^{T_{2g}A_{2u}}\) & \(30.684545\) & \(d_{\Gamma XX}^{T_{1u}B_{1}E_{g}}\) & \(-14.646310\) \\ \(d_{\Gamma XX}^{T_{1u}E_{g}E_{u}}\) & \(-14.403271\) & \(d_{\Gamma XX}^{T_{1u}E_{u}}\) & \(15.324818\) & \(d_{\Gamma XX}^{T_{2g}B_{1}B_{2}}\) & \(25.88571\) & \(d_{\Gamma XX}^{T_{2g}A_{2u}}\) & \(12.653388\) \\ \(d_{\Gamma XX}^{T_{2g}A_{2u}E_{u}}\) & \(-26.801862\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(-10.181429\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(11.978083\) & \(d_{\Gamma XX}^{T_{2 \begin{tabular}{l l|l l|l l|l l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value & \\ \hline \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(4.796087i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(6.610407i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(3.997117i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(0.088589i\) \\ \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(3.941816i\) & \(d_{XAA}^{E_{a}B_{2}\cdotp_{1}}\) & \(1.409079i\) & \(d_{XAA}^{E_{a}B_{2}\cdotp_{2}}\) & \(0.834617i\) & \(d_{XAA}^{E_{a}B_{2}}\) & \(-10.354308i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-1.816229i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(2.317451i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-10.228313i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(6.696361i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-1.519044i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(10.247798i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(7.340838i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-9.848991i\) \\ \(d_{XAA}^{E_{a}A_{2}\cdotp_{1}}\) & \(-8.179963i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.589283i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.589283i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(4.359032i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(26.021020i\) \\ \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-5.818023i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.132478i\) & \(d_{XAA}^{E_{a}A_{1}A_{2}}\) & \(-2.106298i\) & \(d_{XAA}^{E_{a}A_{2}A_{2}}\) & \(-1.009243i\) \\ \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(0.242213i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-8.068093i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(-6.988502i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-20.858921i\) \\ \(d_{XAA}^{E_{a}\cdotp_{1}}\) & \(-1.081947i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(1.232219i\) & \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(2.766094i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(2.472026i\) \\ \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(0.663267i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(11.964010i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-5.412356i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(12.306954i\) \\ \(d_{XAA}^{E_{a}\cdotp_{1}}\) & \(-8.924452i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-0.658878i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-2.579690i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-7.417712i\) \\ \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(12.946462i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-1.539491i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-4.925698\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(2.144605\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(20.953884\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-1.744422i\) & \(d_{A_{2}A_{1}A}^{A_{1}B_{1}}\) & \(-7.879177i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(11.248064i\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(1.1600887\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-9.531367\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(1.610087i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(2.999762i\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(-9.093527i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(-11.839198i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(12.337910\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(-9.987055\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-0.451577\) & \(d_{A_{2}A_{1}A_{1}A_{2}}^{A_{1}A_{1}A_{2}}\) & \(0.682013i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(-12.430768i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(11.158517i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(1.271591i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{2}}\) & \(3.027565\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(0.022706\) & \(d_{A_{2}A_{1}A_{2}A_{2}}^{A_{1}A_{2}A_{2}}\) & \(-2.944403\) \\ \(d_{A_{2}A_{1}A_{2}B_{1}}\) & \(1.268479\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(1.2458650\) & \(d \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}}\) & \(4.012029i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{1}}\) & \(-9.450927i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-5.851087i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-7.765229\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(6.303114\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(3.898993i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(7.834408\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-4.968342\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(0.245449\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-1.120434\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(0.004336i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}}\) & \(8.041847i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.417130i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(-3.98777i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(-5.319757i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(3.909961i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(1.761617\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-0.399271\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(3.041599i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(1.508150\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-1.965512\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-1.870202i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(0.556421i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-2.317387i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}}\) & \(5.242398i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}}\) & \(-0.035760i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}B_{2}}\) & \(5.261601i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(9.061149\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-5.755825\) & \(d_{\tilde{A}_{2}A_{1}A_{1}B_{2}}^{B_{1}B_{1}B_{2}}\) & \(-6.735929i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}B_{2}}\) & \(3.56436i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}B_{2}}\) & \(-1.880588i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}B_{1}}\) & \(8.610494i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}B_{2}}\) & \(6.904771i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-4.766010i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(3.848178i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(25.608472\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(10.624852\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-0.122199\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-23.420438i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(-10.871512\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(4.764524\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(8.450554i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-10.385420i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(9.580293i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(0.508124i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-0.456040\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-1.185905\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(1.1626060\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(-3.527242i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(0.736765i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-12.982769i\) \\ \(d_{\tilde{A}_{2}A_{1}B_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-10.267390i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-20.166178\) & \(d_{\tilde{A}_{2}A_{2}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(13.656399\) & \(d_{\tilde{A}_{2}A_{2}A_{1}}^{A_{1}B_{1}A_{1 \begin{tabular}{l l|l l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{A_{2}A_{3}}^{B_{2}B_{2}A_{2}}\) & \(7.716041i\) & \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{2}A_{2}}\) & \(-6.000579i\) & \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{1}}\) & \(-7.278908i\) & \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{1}}\) & \(-4.180537\) \\ \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{1}}\) & \(2.003920\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(5.333544\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-4.124746\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(0.581165i\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-5.000277\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(3.477596\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(-5.051886i\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(0.293646i\) \\ \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-2.367234\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(1.203723\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-0.360231\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(24.406010\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(24.406014\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}A_{1}}\) & \(-8.641449i\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-19.708056i\) & \(d_{A_{2}A_{3}}^{A_{1}A_{1}}\) & \(-5.717164i\) \\ \(d_{A_{2}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(-1.582463i\) & \(d_{A_{2}A_{3}}^{A_{2}A_{1}}\) & \(-2.676196i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{2}A_{1}}\) & \(-18.831750\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(3.310892\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(3.3108244\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(-2.357025\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-18.831750\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(3.310892\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-23.284980\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(1.2514852\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-1.0003373\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-0.175228\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(20.535878i\) & \(d_{A_{1}A_{3}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(1.296184i\) & \(d_{A_{1}A_{3}}^{E_{1}A_{1}}\) & \(-5.200045i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(1.768662i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{3}}\) & \(-4.946635i\) & \(d_{A_{1}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(1.29163149\) & \(d_{A_{1}A_{3}}^{E_{1}A_{1}}\) & \(-9.648532i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(2.051527i\) \\ \(d_{A_{1}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(-4.946635i\) & \(d_{A_{1}A_{3}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(3.042642i\) & \(d_{A_{1}A_{3}}^{E_{1}A_{3}}\) & \(15.402672\) & \(d_{A_{1}A_{3}}^{E_{1}B_{1}A_{1}}\) & \(2.693721\) \\ \(d_{A_{1}A_{3}}^{E_{1}B_{1}A_{1}}\) & \(0.538097\) & \(d_{A_{1}A_{3}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(2.955122\) & \(d_{A_{1}A_{3}}^{E_{1}A_{1}}\) & \(1.7068855i\) & \(d_{A_{1}A_{3}}^{E_{1}B_{1}A_{1}}\) & \(1.385880i\) \\ \(d_{A_{1}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(10.857169i\) & \(d_{A_{1}A_{3}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(-2.296169i\) & \(d_{A_{1}A_{3}}^{E_{1}A_{2}A_{1}}\) & \(-25.193660\) & \(d_{A_{1}A_{3}}^{E_{1}B_{1}A_{1}}\) & \(-3.271878\) \\ \(d_{A_{1}A_{3}}^{E_{1}B_{1}A_{1}}\) & \(-3.271878\) & \(d_{A_{1}A_{3}}^{E_{1}B_{2}A_{1}}\) & \(-1.104952\) & \(d_{A_{1}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(-3.946804\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(12.953866\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(-14.476374\) & \(d_{A_{1}A_{3}}^{A_{1}A_{3}B_{2}A_{1}}\) & \(11.1889734\) & \(d_{A_{1}A_{3}}^{A_{2}A_{1}A_{1}}\) & \(1.16.325998i\) & \(d_{A_{1}A_{3}}^{A_{2}A_{1}A_{1}}\) & \(1.6325989i\) \\ \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(0.200692i\) & \(d_{A_{1}A_{3}}^{A_{2}B_{1}A_{1}}\) & \(-1.751842i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(1.751842i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(8.311873\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-5.35 \begin{table} \begin{tabular}{c c c c|c c|c c} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{X_{1}A_{5}}^{2^{\mathrm{Z}_{2}}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-3.615684i\) & \(d_{X_{1}A_{5}}^{2^{\mathrm{Z}_{2}}\cdot\mathrm{Z}_{1}B_{1}}\) & \(0.559010i\) & \(d_{X_{1}A_{5}}^{B_{1}\cdot\mathrm{Z}_{2}B_{1}}\) & \(0.722279\) & \(d_{X_{1}A_{5}}^{B_{1}\cdot\mathrm{Z}_{2}B_{1}}\) & \(-3.129635\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-11.620500\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-6.772964\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(11.894472i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-7.757109i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-0.816587i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-2.845703i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.268756\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(6.850260\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.757803i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.886840i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-4.290978\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.614720\) \\ \(d_{X_{1}A_{5}}^{A_{1}F_{9}B_{1}B_{1}}\) & \(23.102230\) & \(d_{X_{1}A_{5}}^{A_{1}F_{9}B_{1}B_{1}}\) & \(8.717462\) & \(d_{X_{1}A_{5}}^{F_{4}A_{2}}^{F_{9}B_{1}B_{1}}\) & \(-14.282582i\) & \(d_{X_{1}A_{5}}^{F_{4}A_{2}}\cdot\mathrm{Z}_{1}B_{1}}\) & \(12.768506i\) \\ \(d_{X_{1}A_{5}}^{F_{1}B_{1}B_{1}}\) & \(3.980848\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-1.542607\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(5.526937\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-10.743502i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(7.692413i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-8.400786i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(1.882574\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-1.936174\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(15.177944i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-14.557862\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(3.457213\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(1.665438\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(18.636864i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-7.995532\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(6.040521\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-0.004237i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(2.233972i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-7.405482\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(5.704702\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(10.099818\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-6.591574\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(2.081466\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(0.578465\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-3.537192i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-2.006220i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-0.706388\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-0.370559i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(0.713618i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-9.587627\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(4.670125i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(4.670125i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(19.717342\) \\ \(d_{X_{2}X_{1}X_{2}X_{1}X_{2}}^{F_{9}B_{2}}\) & \(18.157375\) & \(d_{X_{1}A_{5}}^{F_{4}A_{2}}\) & \(-23.177722\) & \(d_{X_{2}X_{1}X_{1}X_{1}}^{F_{9}B_{1}B_{1}}\) & \(-16.006225\) & \(d_{X_{2}A_{5}X_{1}X_{1}}^{F_{9}B_{2}B_{1}}\) & \(-13.575482\) \\ \(d_{X_{2}X_{2}X_{1}X_{2}}^{A_{1}F_{9}B_{2}}\) & \(20.646176\) & \(d_{X_{2}X_{1}X_{2}X_{1}}^{A_{1}F_{9}B_{2}}\) & \(-28.210450\) & \(d_{X_{2}X_{1}X_{2}}^{F_{9}B_{2}B_{1}}\) & \(8.662641\) & \(d_{X_{2}X_{1}X_{1}X_{2}}^{F_{9}B_{1}}\) & \(-19.898745\) \\ \(d_{X_{2}X_{1}X_{2}X_{1}X_{2}}^{F_{9}B_{1}}\) & \(-9.415798\) & \(d_{X_{2}X_{3 \begin{tabular}{l l|l l l|l l|l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Value \\ \hline \(d_{\Gamma\chi XX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 127.030954 & \({}^{1}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 12.332864 & \({}^{2}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)14.127003 & \({}^{3}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)110.830960 \\ \(d_{\Gamma TXX}^{T_{1g}T_{2g}E_{u}E_{u}}\) & 47.732199 & \({}^{1}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 69.602306 & \({}^{2}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 16.564702 & \({}^{3}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 248.179381 \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)22.975345 & \({}^{1}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)3.664265 & \({}^{2}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)17.803367 & \({}^{3}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)4.309245 \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)111.173011 & \({}^{1}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)90.062600 & \({}^{2}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)107.392925 & \({}^{3}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)155.222316 \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 33.125709 & \({}^{1}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)19.063992 & \({}^{2}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 38.396681 & \({}^{3}d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 126.993991 \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 83.129009 & \({}^{1}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 178.459169 & \({}^{2}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 145.859192 & \({}^{3}d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 450.335857 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & \(-\)29.153073 & \(d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}A_{2u}}^{T_{1g}A_{1g}A_{2u}}\) & \(-\)73.055571 & \({}^{1}d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}A_{2u}}^{T_{1g}A_{1g}A_{2u}}\) & 36.698455 & \({}^{4}d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}A_{2u}}^{T_{1g}A_{1g}A_{2u}}\) & \(-\)36.874416 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & 29.892100 & \({}^{1}d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & 16.189224 & \({}^{1}_{\Gamma TXX}^{T_{1g}A_{1g}E_{g}}\) & \(-\)72.533954 & \({}^{4}d_{\Gamma TXX}^{T_{1g}A_{1g}E_{g}E_{u}}\) & \(-\)41.352822 \\ \(d_{\Gamma TXX}^{T_{1g}A_{1g}E_{g}B_{1u}}\) & 42.741456 & \({}^{1}_{\Gamma TXX_{2}X_{1}X}\) & 39.675487 & \({}^{1}_{\Gamma TXX_{2}X_{1}X}\) & \(-\)35.650996 & \({}^{2}d_{\Gamma TXX}^{T_{1g}A_{1g}E_{g}E_{u}}\) & 0.8031337 \\ \(d_{\Gamma TXX}^{T_{1g}A_{1g}E_{g}E_{u}}\) & \(-\)67.147981 & \({}^{1}_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}E_{u}E_{u}}\) & 34.920774 & \({}^{2}_{\Gamma TXX}^{T_{1g}A_{1g}E_{g}E_{u}}\) & 45.474195 & \({}^{2}_{\Gamma TXX}^{T_{2g}A_{1g}A_{2u}A_{2u}}\) & 54.623488 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}A_{2u}E_{1u}}\) & \(-\)33.409062 & \(d_{\Gamma TXX}^{T_{2g}A_{1g}A_{2u}E_{u}}\) & \(-\)49.984762 & \(d_{\Gamma TXX}^{T_{2g}A_{1g}A_{2u}E_{u}}\) & 78.480300 & \({}^{2}_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & 0.092988 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & 11.858487 & \({}^{2}_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & \(-\)15.965277 & \({}^{1}_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & \(-\)7.912826 & \({}^{2}_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & 9.835952 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & 86.143014 & \({}^{2}_{\Gamma TXX}^{T_{2g}A_{1g}B_{1u}E_{u}}\) & \(-\)20.914333 & \({}^{1}_{\Gamma TXX}^{T_{2g}A_{1g}E_{u}E_{u}}\) & \(-\)18.461221 & \({}^{2}_{\Gamma TXX}^{T_{2g}A_{1g}E_{u}E_{u}}\) & \(-\)39.908938 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{2u}E_{2u}}\) & 62.239793 & \({}^{1}_{\Gamma TXX}^{T_{2g}A_{1g}E_{u}E_{u}}\) & \(-\)23.15273 & \({}^{1}_{\Gamma TXX}^{T_{2g}E_{g}E_{g}E_{u}}\) & 33.853553 & \({}^{1}_{\Gamma TXX}^{T_{1g}E_{g}E_{g}B_{1u}}\) & 13.224657 \\ \(d_{\Gamma TXX}^{T_{1g}E_{g}E_{g}B_{1u}}\) & \(-\)5.077131 & \({}^{1}_{\Gamma TXX}^{T_{1g}E_{g}E_{g}E_{u}}\) & \(-\)5.155741 & \({}^{1}_{\Gamma TXX}^{T_{1g}E_{g}E_{g}E_{u}}\) & 12.449372 & \({}^{2}_{\Gamma TXX}^{T_{1g}E_{g}E_{g}E_{u}}\) & \(-\) \begin{tabular}{l l l|l l l|l l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value & \\ \hline \({}^{2}d^{F_{E}E_{E}E_{u}E_{u}}_{XXXX}\) & \(-\)76.313721 & \(d^{E_{E}^{_{E}E_{u}}^{_{E}E_{u}}}_{XXXX}\) & \(-\)49.979881 & \({}^{1}d^{E_{E}^{_{E}E_{u}}^{_{E}E_{u}}}_{XXXX}\) & \(-\)23.431600 & \({}^{2}d^{F_{E}E_{E}^{_{E}E_{u}}^{_{E}E_{u}}}_{XXXX}\) & 14.844790 \\ \(d^{E_{E}^{_{E}E_{E}E_{u}}^{_{E}E_{u}}E_{u}}_{XXXX}\) & 91.278317 & \({}^{1}d^{E_{E}^{_{E}E_{E}E_{u}}^{_{E}E_{u}}}_{XXXX}\) & 19.204987 & \({}^{2}d^{E_{E}^{_{E}E_{E}E_{u}}^{_{E}E_{u}}E_{u}}_{XXXX}\) & \(-\)126.882375 & \(d^{A_{E}^{_{E}E_{E}E_{u}}^{_{E}E_{u}}}_{XXXX}\) & 79.921914 \\ \(d^{B_{E},B_{E},B_{E}}_{XXXX}\) & \({}^{2}d^{E_{E},B_{E}^{_{E}E_{u}}^{_{E}E_{u}}}_{XXXX}\) & 49.67179 & \(d^{2}_{XXXX^{2}Xu ### The irreducible derivatives computed at \(a=\) 5.5717 A, corresponding to \(T=600\) K \begin{table} \begin{tabular}{l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma\Gamma}^{{}^{2}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{} ^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{ A}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{ }^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F} {}_{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}^{F}{ }_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}{ }^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F} {}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}_{X}{X}{}^{F} {}_{X}{}^{F}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}_{X}{X}{}^{F} {}_{X}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}^{F}{}_{X}^{F}{}_{X}{ }^{F}{}_{X}^{F}{F}_{X}{X}^{F}{}_{X}^{F}{}_{X}{}^{F}_{X}{X}^{F}{}_{X}^{F}{F}_{X}{}^{F} {}_{X}^{F}{F}_{X}{}^{F}{}_{X}^{F}{}_{X}{}^{F}{F}_{X}^{F}{}_{X}{X}^{F}{}_{X}{}^{F}_{X}{ }^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{}^{F}_{X}{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}_{X}{X}^{F} {}_{X}{}^{F}_{X}{X}^{F}{}_{X}^{F}{F}_{X}{}^{F}_{X}{}^{F}_{X}{X}^{F}_{X}{X}{}^{F}_{X}{X}^{F} {}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}{}_{X}^{F}{F}_{X}{}^{F}{}_{X}{}^{F}_{X}^{F}{F} {}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}{}^{F}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{X}^{F}{}_{X}{}^{F}{}_{X}{}^{F} {}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}_{X}{X}{}^{F}_{X}{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{X}^{F} {}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{}^{F}_{X}{}^{F}{}_{X}{ }^{F}{} \begin{tabular}{l l|l l|l l|l l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-3.437231i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-19.959802\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(20.636224i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-6.772564\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(15.493271i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-4.182787\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(-10.659395i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(4.376051i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-2.443805\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-4.215682\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-19.495775i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(7.645277i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(5.016893i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-1.767120\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-7.235246i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(19.426930\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-8.478635i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(3.512393\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-15.161648i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(11.748173i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(18.224529\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(-10.577489i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-10.615402i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-0.600303\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(3.714904i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-16.652687\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-0.501740i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-23.691434\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(1.117154i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(-3.957171i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}}\) & \(-9.639033\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-10.122166i\) \\ \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(16.585910i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-2.046391i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(-0.329139i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(17.717075\) \\ \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(-9.483614i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{2}}\) & \(-13.705680\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-2.759796\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(11.105241\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(1.562593i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-4.513054\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(9.465907i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.492542i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-6.551348i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-2.815295\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(8.563839\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(4.762712i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-12.100424i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(9.607532i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(9.614172\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(1.625592i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.072565i\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(-14.640168\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(10.721512\) & \(d_{\Gamma AA}^{T_{1u}B_{2}B_{2}}\) & \(-0.140951i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(-9.059690\) & \(d_{\Gamma XX}^{T_{2g}A_{1}B_{2}}\) & \(-17.277490\) & \(d_{\Gamma XX}^{T_{1u}A_{1g}A_{2}}\) & \(30.599811\) & \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(-25.595934\) \\ \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(26.140793\) & \(d_{\Gamma XX}^{T_{2g}E_{2}B_{1}}\) & \(19.617882\) & \(d_{\Gamma XX}^{T_{1u}E_{u}}\) & \(29.497541\) & \(d_{\Gamma XX}^{T_{1u}E_{u}}\) & \(-13.871535\) \\ \(d_{\Gamma XX}^{T_{1u}E_{u}}\) & \(-15.037630\) & \(d_{\Gamma XX}^{T_{1u}E_{u}}\) & \(14.558549\) & \(d_{\Gamma XX}^{T_{2g}B_{1}B_{2}}\) & \(25.953516\) & \(d_{\Gamma XX}^{T_{2g}A_{2u}E_{u}}\) & \(12.246199\) \\ \(d_{\Gamma XX}^{T_{2g}A_{2u}E_{u}}\) & \(-26.614293\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(-9.706279\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(11.723389\) & \(d_{\Gamma XX}^{T_{2g \begin{tabular}{l l|l l|l l|l l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value & \\ \hline \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(4.569618i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{2}}\) & \(6.381738i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(4.020446i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(0.423695i\) \\ \(d_{XAA}^{E_{a}B_{1}\cdotp_{1}}\) & \(3.578219i\) & \(d_{XAA}^{E_{a}B_{2}\cdotp_{2}}\) & \(1.217310i\) & \(d_{XAA}^{E_{a}B_{1}\cdotp_{2}}\) & \(0.830625i\) & \(d_{XAA}^{E_{a}B_{2}}\) & \(-9.372759i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-1.471103i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(1.796170i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-10.555032i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(7.061681i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-5.120292i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(10.161648i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(7.044401i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-9.657297i\) \\ \(d_{XAA}^{E_{a}A_{2}\cdotp_{1}}\) & \(-8.122170i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.706901i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-2.34841i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(25.747366i\) \\ \(d_{XAA}^{E_{a}A_{1}\cdotp_{1}}\) & \(-5.614036i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-10.991465i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-1.330761i\) & \(d_{XAA}^{E_{a}A_{2}}\) & \(-1.098705i\) \\ \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(0.132012i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-7.580069i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(-6.956706i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-20.901661i\) \\ \(d_{XAA}^{E_{a}\cdotp_{1}}\) & \(-1.040090i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(1.278417i\) & \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(2.881971i\) & \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(2.246645i\) \\ \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(0.978801i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(11.849311i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-4.418796i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(11.947013i\) \\ \(d_{XAA}^{E_{a}A_{1}\cdotp_{1}}\) & \(-8.989563i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-0.062929i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-2.543544i\) & \(d_{XAA}^{E_{a}A_{1}B_{1}}\) & \(-7.667315i\) \\ \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(12.938757i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-1.24799i\) & \(d_{A_{2}A_{1}A}^{A_{1}A_{1}}\) & \(-5.034354\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(2.205763\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}}\) & \(20.471205\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-1.32042i\) & \(d_{A_{2}A_{1}A}^{A_{1}A_{1}B_{1}}\) & \(-7.651076i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(11.451896i\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(1.338444\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-9.254780\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{2}}\) & \(1.653180i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(2.797897i\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(-8.891449i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{2}}\) & \(-11.347445i\) & \(d_{A_{2}A_{1}A_{1}B_{2}}\) & \(11.713202\) & \(d_{A_{2}A_{1}A_{1}A}^{A_{1}A_{1}B_{2}}\) & \(-9.402189\) \\ \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-9.402374\) & \(d_{A_{2}A_{1}A_{1}A_{2}}^{A_{1}A_{2}}\) & \(0.523130i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(-12.112346i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(11.001446i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}B_{1}}\) & \(1.167500i\) & \(d_{A_{2}A_{1}A_{1}A_{2}}^{A_{1}A_{1}B_{2}}\) & \(3.382867\) & \(d_{A_{2}A_{1}A_{1}A_{2}}^{A_{1}A_{1}B_{2}}\) & \(-0.074000\) & \(d_{A_{2}A_{1}A_{2}A_{1}A_{2}}^{A_{1}A_{2}A_{2}}\) & \(-3.047898\) \\ \(d_{A_{2}A_{1}A_{2}B_{1}}\) & \(1.547808\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{A_{2}B_{1}}\) & \(12.351110i\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{A_{2}B_{1}}\) & \(10.501810\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{ \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}}\) & \(4.033649i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{1}}\) & \(-9.273411i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-5.219007i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-7.510793\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(6.358849\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(3.988475i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(7.287276\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-4.854909\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-0.053665\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-0.809346\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(0.449806i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}}\) & \(6.632233i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.017813i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(-4.759622i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(-4.793228i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(3.900992i\) \\ \(d_{\tilde{B}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(1.548647\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-0.290196\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(0.219539i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(1.203566\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-1.843931\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-2.023290i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(0.693979i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-2.323632i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(4.888048i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(0.573982i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(5.055873i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{2}}\) & \(8.991114\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-5.429562\) & \(d_{\tilde{A}_{2}A_{1}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-6.279616i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(3.277455i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}}\) & \(-2.191704i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{1}B_{1}}\) & \(7.853808i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{B_{1}B_{1}B_{2}B_{2}}\) & \(6.950861i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-4.575895i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(3.447222i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(4.419018\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(10.037567\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(0.791159\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-23.852604i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(-9.556746\) & \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(4.399625\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(9.024976i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-10.166143i\) \\ \(d_{\tilde{A}_{2}A_{1}B_{1}B_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(9.665470i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(0.565037i\) & \(d_{\tilde{A}_{2}A_{1}B_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-0.011518\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}B_{1}A_{1}}\) & \(-0.749620\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(1.564935\) & \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(-3.633586i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(0.745300i\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-12.941819i\) \\ \(d_{\tilde{A}_{2}A_{1}B_{1}A_{1}}^{A_{1}B_{1}A_{1}}\) & \(-10.171685i\) & \(d_{\tilde{A}_{2}A_{2}A_{1}A_{1}}^{A_{1}B_{1}B_{1}A_{1}}\) & \(-19.009288\) & \(d_{\tilde{A}_{2}A_{2}A_{1}}^{A_{1}B_{2}A_{1}}\) & \(1.347223\) & \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A \begin{tabular}{l l|l l|l l|l l} \hline \hline \multicolumn{1}{c|}{Derivative} & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{A_{2}A_{3}}^{B_{2}B_{2}A_{2}}\) & \(7.976370i\) & \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{2}A_{2}}\) & \(-5.833784i\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-7.105243i\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-3.879850\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(2.205007\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(4.994341\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-3.719131\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(0.526101i\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-4.893032\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(3.048517\) & \(d_{A_{2}A_{3}}^{B_{2}B_{2}B_{1}}\) & \(-0.339190i\) & \(d_{A_{2}A_{3}}^{B_{2}B_{2}B_{1}}\) & \(0.389067i\) \\ \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-7.721075\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(6.055473\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-1.227411\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(24.620483\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(-19.175336\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-8.445087i\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-19.018197i\) & \(d_{A_{2}A_{3}}^{A_{1}A_{1}}\) & \(-5.847739i\) \\ \(d_{A_{2}A_{2}B_{2}A_{1}}^{A_{2}B_{2}A_{1}}\) & \(-1.920846i\) & \(d_{A_{1}A_{3}}^{A_{1}A_{2}}\) & \(-3.142402i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{2}A_{1}}\) & \(-18.232319\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(3.697846\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-4.803228\) & \(d_{A_{1}A_{3}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-2.267792\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(0.981441\) & \(d_{A_{3}A_{3}}^{B_{1}A_{1}}\) & \(-0.467756\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-22.724183\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(15.726241\) & \(d_{A_{1}A_{3}}^{B_{2}A_{1}}\) & \(-5.126779i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(1.899808i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(19.705951i\) & \(d_{A_{1}A_{3}}^{B_{2}B_{1}A_{1}}\) & \(13.010342i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}A_{1}}\) & \(-9.174003i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}A_{3}}^{B_{1}A_{1}A_{3}}\) & \(2.409145i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-5.034407i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(3.025518i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(15.433331\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(2.471579\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(0.103993\) & \(d_{A_{1}A_{3}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(2.425027\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(16.603158i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(1.616478i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(11.344910i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-2.575399i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-24.112085\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-3.389415\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-3.389415\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-0.753616\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-3.525751\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(12.427018\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(-14.960365\) & \(d_{A_{1}A_{3}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(10.874004\) & \(d_{A_{1}A_{3}}^{A_{2}A_{1}A_{1}A_{1}}\) & \(16.032427i\) & \(d_{A_{1}A_{3}}^{A_{2}A_{1}A_{1}}\) & \(-2.917239i\) \\ \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(0.471801i\) & \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(-1.729430i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(1.729430i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(7.818073\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-3.585639\) & \(d_{A_{1}A_{3}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(2.800249\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-11.952758\) & \(d_{A_{1}A_{3}}^{B_{2}A_{1}A_{1}}\) & \(9.2 \begin{table} \begin{tabular}{c c c c|c c|c} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{X_{1}A_{5}}^{2^{\mathrm{Z}_{2}}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-3.580173i\) & \(d_{X_{1}A_{5}}^{2^{\mathrm{Z}_{2}}\cdot\mathrm{Z}_{1}B_{1}}\) & \(0.549368i\) & \(d_{X_{1}A_{5}}^{B_{1}\cdot\mathrm{Z}_{2}B_{1}}\) & \(0.338739\) & \(d_{X_{1}A_{5}}^{B_{1}\cdot\mathrm{Z}_{2}B_{1}}\) & \(-3.227715\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-11.475596\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-6.716480\) & \(d_{X_{1}A_{5}}^{F_{2}B_{1}}\) & \(11.667347i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-7.831551i\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-0.758337i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-2.562753i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(0.293624\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(6.740778\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(2.603841i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(2.553624i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(-4.268675\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(0.843049\) \\ \(d_{X_{1}A_{5}}^{A_{1}\cdot\mathrm{Z}_{1}B_{1}}\) & \(23.203768\) & \(d_{X_{1}A_{5}}^{A_{1}\cdot\mathrm{Z}_{1}B_{1}}\) & \(7.959942\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-14.192847i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(12.547849i\) \\ \(d_{X_{1}A_{5}}^{F_{1}\cdot\mathrm{Z}_{1}B_{1}}\) & \(3.952506\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-1.843203\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(4.565804\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-8.688414i\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(8.218054i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-8.11101i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(1.760416\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-1.937615\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(14.744475i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-14.277698\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(3.108635\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(1.745804\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(18.301419i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-7.646549\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(5.996768\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-0.675105i\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(2.620767i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-7.184374\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(5.792089\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(9.460617\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(-6.264795\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(2.235498\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{2}}\) & \(0.759132\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{2}}\) & \(-3.630128i\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(-2.678706i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-0.582312\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{1}}\) & \(-0.469088i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{2}}\) & \(0.394198i\) \\ \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{2}}\) & \(-8.392318\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}B_{2}}\) & \(4.461697i\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}A_{5}}\) & \(-20.754326\) & \(d_{X_{2}A_{1}X_{5}}^{F_{2}\cdot\mathrm{Z}_{1}}\) & \(17.715218\) \\ \(d_{X_{2}X_{1}X_{5}}^{F_{2}\cdot\mathrm{Z}_{2}\cdot\mathrm{Z}_{1}}\) & \(18.0161111\) & \(d_{X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{2}\cdot\mathrm{Z}_{1}}\) & \(-22.974110\) & \(d_{X_{2}X_{1}A_{5}}^{F_{2}\cdot\mathrm{Z}_{1}B_{1}}\) & \(-15.385235\) & \(d_{X_{2}A_{1}X_{5}}^{F_{2}\cdot\mathrm{Z}_{1}}\) & \(-13.752770\) \\ \(d_{X_{2}X_{1}X_{5}}^{A_{1}\cdot\mathrm{Z}_{2}\cdot\mathrm{Z}_{1}}\) & \(19.630257\) & \(d \begin{tabular}{l l|l l l|l l|l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Value \\ \hline \(d_{\Gamma\chi XX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 109.431581 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 1.707447 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)13.861316 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)37.273370 \\ \(d_{\Gamma TXX}^{T_{1g}T_{2g}E_{u}E_{u}}\) & 37.510618 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 65.036465 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 5.199286 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 234.174262 \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)39.44350 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)21.021433 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)10.688053 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)41.292111 \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)108.729751 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)87.470367 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)111.693512 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)115.069314 \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}E_{u}}\) & 28.061243 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 16.959320 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 19.288004 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 193.500255 \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 60.641997 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 165.854048 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 152.386650 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 403.737890 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & \(-\)9.210105 & \(d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}A_{2u}}\) & \(-\)61.003446 & \(d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}}\) & 42.474431 & \({}^{T_{1g}A_{1g}A_{1g}A_{1g}}\) & \(-\)33.284132 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & 20.445108 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & \(-\)13.890203 & \({}^{1}\)\({}^{ \begin{tabular}{l l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \({}^{2}d^{F_{E_{E}}^{\prime}E_{E_{u}}E_{u}}_{XXXX}\) & \(-75.641093\) & \(d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-65.550040\) & \({}^{1}d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-22.949505\) & \({}^{2}d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(21.503431\) \\ \(d^{E_{E_{E}}^{\prime}E_{E_{u}}^{\prime}E_{u}}_{XXXX}\) & \(63.440950\) & \({}^{1}d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(9.163107\) & \({}^{2}d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-98.924500\) & \({}^{3}d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) \\ \(d^{B_{1},B_{1},B_{2}}_{2}{}^{2}{}_{2u}{}^{2}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u} {}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{} _{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u} {}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ }{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ 2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{ }_{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u}{2u}{}_{2u}{}_{2u}{}_{2u}{}_{2u ### The irreducible derivatives computed at \(a=5.5942\) A, corresponding to \(T=1000\) K \begin{table} \begin{tabular}{l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma\Gamma}^{{}^{2}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }}\) & \(-\)31.738854 & \({}^{1}d_{\Gamma\Gamma}^{{}^{2}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{ }_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{ }_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B} {}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{ }_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B} {}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{ }_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}{}^{F}{}_{B} {}^{F}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}{}^{F}{}_{B}^{F}{ }_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}{}_{B} {}^{F}_{B}{}^{F}{}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}{ }^{F}{}_{B}{}^{F}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}{}_{B}{}^{F}_{B}{}^{F} {}_{B}^{F}{}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}_{B}{}^{F}_{B}{ }^{F}{}_{B}^{F}{}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}{}_{B}^{F}{}_{B} {}^{F}_{B}{}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}_{B}{}^{F} {}_{B}^{F}{}_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}_{B}{}^{F}_{B}{ }^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{ }^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}{}^{F}{}_{B}^{F}{}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{B}^{F}{}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}{}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}_{B}{}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}^{F}{}_{B}{}^{F}_{B}{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}^{F}_{B}{B}^{F}{}_{B}^{F}_{B}{}^{F}_{B}{}^{F}{}_{B}^{F}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}_{B}^{F}{ }_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{F}_{B}^{F}{}_{B}^{F}{}_{B}^{F} {}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{F}_{B}^{F}{}_{B}^{F}{}_{B}^{F}_{B}{ }^{F}_{B}^{F}{F}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{F}_{B}^{F}{}_{B}^{F}{}_{B}^{F} {}_{B}^{F}{}_{B}^{F}{}_{B}^{F}_{B}{}^{F}_{B}{}^{F}_{B}^{F}{F}_{B}{}^{F}_{B}{}^{F}_{B} {}^{F}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F} {}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{}_{B}^{F}{F}_{B}^{F}{}_{B}^{F}{}_{B}^{F}_{B}^{ \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-3.032661i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-18.639198\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(19.323621i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-6.047773\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(14.985033i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-3.730084\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(-10.876785i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(3.940739i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-2.202430\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(-4.388093\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-18.478635i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(7.367873i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(4.688464i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-2.442757\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-6.413091i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(18.538562\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-7.597330i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(3.530727\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-14.168384i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(11.392909i\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}A_{1}}\) & \(17.564634\) & \(d_{\Gamma AA}^{T_{2g}A_{1}A_{2}}\) & \(-10.163873i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-10.524558i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(-0.789059\) \\ \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(3.009722i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-15.774285\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-3.071657i\) & \(d_{\Gamma AA}^{T_{2g}A_{1}B_{1}}\) & \(-22.816850\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(1.159677i\) & \(d_{\Gamma AA}^{T_{1u}A_{1}B_{2}}\) & \(-3.739193i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}}\) & \(-7.622601\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-9.396965i\) \\ \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(15.164400i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(-1.315245i\) & \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(-0.087712i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{1}}\) & \(16.755475\) \\ \(d_{\Gamma AA}^{T_{2g}A_{2}B_{2}}\) & \(-9.184368i\) & \(d_{\Gamma AA}^{T_{1u}A_{2}B_{2}}\) & \(-13.235787\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-1.678203\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(10.638891\) \\ \(d_{\Gamma AA}^{T_{1u}A_{1}B_{1}}\) & \(1.7319025i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-4.499338\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(8.803861i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.053934i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-6.015857i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(-3.097348\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(8.605666\) & \(d_{\Gamma AA}^{T_{1u}B_{1}B_{1}}\) & \(5.138043i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-11.562011i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.864867i\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{1}}\) & \(9.299192\) & \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(1.803894i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(8.260967i\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(-13.965495\) & \(d_{\Gamma AA}^{T_{2g}B_{2}B_{2}}\) & \(10.208569\) & \(d_{\Gamma AA}^{T_{1u}B_{2}B_{2}}\) & \(-0.250851i\) \\ \(d_{\Gamma AA}^{T_{2g}B_{1}B_{2}}\) & \(-7.988398\) & \(d_{\Gamma XXA}^{T_{2g}B_{1}B_{2}}\) & \(-16.929580\) & \(d_{\Gamma XX}^{T_{1u}A_{1}B_{2}}\) & \(30.058595\) & \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(-24.850003\) \\ \(d_{\Gamma XX}^{T_{1u}A_{1g}E_{u}}\) & \(25.518315\) & \(d_{\Gamma XX}^{T_{2g}E_{g}E_{g}}\) & \(15.838472\) & \(d_{\Gamma XX}^{T_{1u}E_{g}A_{2u}}\) & \(29.011362\) & \(d_{\Gamma XX}^{T_{1u}B_{1u}E_{g}}\) & \(-13.391128\) \\ \(d_{\Gamma XX}^{T_{1u}E_{g}E_{u}}\) & \(-13.523136\) & \(d_{\Gamma XX}^{T_{1u}E_{g}E_{u}}\) & \(13.699118\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(24.748066\) & \(d_{\Gamma XX}^{T_{2g}B_{2}A_{2u}}\) & \(11.989822\) \\ \(d_{\Gamma XX}^{T_{2g}E_{g}E_{u}E_{u}}\) & \(-24.860560\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(-8.813991\) & \(d_{\Gamma XX}^{T_{2g}B_{1u}E_{u}}\) & \(10.924336\) & \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{XAA}^{E_{a}B_{1}\cdot 19}\) & \(4.448974i\) & \(d_{XAA}^{E_{a}B_{1}\cdot 29}\) & \(5.788781i\) & \(d_{XAA}^{E_{a}B_{1}\cdot 19}\) & \(4.508732i\) & \(d_{XAA}^{E_{a}B_{1}\cdot 29}\) & \(0.271267i\) \\ \(d_{XAA}^{E_{a}B_{1}\cdot 29}\) & \(3.675345i\) & \(d_{XAA}^{E_{a}B_{2}}\) & \(1.021279i\) & \(d_{XAA}^{E_{a}B_{2}}\) & \(0.628193i\) & \(d_{XAA}^{E_{a}B_{2}}\) & \(-9.148796i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-1.973349i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(2.379833i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-9.656110i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(6.435238i\) \\ \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-4.868709i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(9.636382i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(5.566678i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-9.384495i\) \\ \(d_{XAA}^{E_{a}A_{2}\cdot 19}\) & \(-8.125545i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-11.081139i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(4.685956i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(24.934484i\) \\ \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-5.581672i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-10.885556i\) & \(d_{XAA}^{E_{a}A_{1}A_{1}}\) & \(-2.108468i\) & \(d_{XAA}^{E_{a}A_{2}}\) & \(-1.147890i\) \\ \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-0.032955i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-7.154815i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(-6.581011i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(-19.525654i\) \\ \(d_{XAA}^{E_{a}\cdot 19}\) & \(-9.096709i\) & \(d_{XAA}^{E_{a}B_{1}B_{1}}\) & \(1.277192i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(2.695996i\) & \(d_{XAA}^{E_{a}B_{1}B_{2}}\) & \(2.909191i\) \\ \(d_{XAA}^{E_{a}B_{2}B_{2}}\) & \(0.634006i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(11.638481i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-4.169391i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(11.109871i\) \\ \(d_{XAA}^{E_{a}\cdot 19}\) & \(-8.341822i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-0.097076i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-2.67372i\) & \(d_{XAA}^{E_{a}A_{1}B_{2}}\) & \(-7.460094i\) \\ \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(11.872509i\) & \(d_{XAA}^{E_{a}A_{2}B_{1}}\) & \(-1.41621i\) & \(d_{A_{a}A_{1}}^{A_{1}A_{1}}\) & \(-3.052065\) & \(d_{A_{a}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(2.003737\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}}\) & \(20.294914\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-1.538586i\) & \(d_{A_{a}A_{2}A_{1}}^{A_{1}A_{1}}\) & \(-7.869799i\) & \(d_{A_{2}A_{1}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(10.733719i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(1.461667\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-9.190938\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}}\) & \(0.827422i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}B_{1}}\) & \(3.188536i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(-8.651286i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{2}}\) & \(-10.912591i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(11.449491\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}B_{1}}\) & \(-9.645241\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-9.645241\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}}\) & \(0.446220i\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(-11.478756i\) & \(d_{A_{2}A_{1}A_{1}B_{1}}^{A_{1}B_{1}}\) & \(10.282807i\) \\ \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}B_{1}}\) & \(0.861999i\) & \(d_{A_{2}A_{1}A_{1}A_{2}}^{A_{1}A_{2}}\) & \(2.671860\) & \(d_{A_{2}A_{1}A_{1}B_{2}}^{A_{1}A_{1}B_{2}}\) & \(0.066090\) & \(d_{A_{2}A_{1}A_{2}A_{1}A_{2}}^{A_{1}A_{2}A_{2}}\) & \(-3.211639\) \\ \(d_{A_{2}A_{1}A_{2}B_{1}}\) & \(1.413501\) & \(d_{A_{2}A_{1}A_{2}A_{1}}^{A_{1}B_{1}}\) & \(11.998412\) & \(d_{A_{2}A_{1}A_{1}}^{A_{1}A_{2}B_{1}}\) & \(-11.700186\) & \(d_{A_{2}A_{1}A_{1}A_{2}A_{1}}^{A_{1}A_{2}B_{1}}\) & \(14.266631i\) \\ \(d_{A_{2}A_{1}A_{2}B_{2}}\) & \(-8.253840i\) & \(d_{A \begin{tabular}{l l|l l|l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}}\) & \(3.897832i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{1}}\) & \(-8.877686i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-4.910828i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-7.386387\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(5.758994\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(3.857188i\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(7.186933\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-4.508222\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-0.264337\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-1.156509\) & \(d_{\tilde{A}_{2}A_{1}A}^{A_{2}B_{1}B_{2}}\) & \(-0.090057i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(7.523385i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.373768i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(-3.838051i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(-4.860726i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(3.892874i\) \\ \(d_{\tilde{B}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(1.222752\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.090747\) & \(d_{\tilde{B}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.729952i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(0.908464\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-1.890844\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{2}B_{1}}\) & \(-1.641777i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{2}}\) & \(0.711170i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-2.551477i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(3.632331i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}}\) & \(0.223856i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(4.912430i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{2}}\) & \(8.704984\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{2}}\) & \(-5.237809\) & \(d_{\tilde{A}_{2}A_{1}A_{2}B_{2}}^{B_{1}B_{2}B_{2}}\) & \(-6.364104i\) & \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(3.506779i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{2}}^{B_{1}B_{1}B_{2}}\) & \(-2.337992i\) \\ \(d_{\tilde{A}_{2}A_{1}A}^{B_{1}B_{1}B_{1}B_{1}}\) & \(7.459430i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{1}A_{1}}^{B_{1}B_{2}B_{2}}\) & \(6.014023i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{1}}^{B_{1}B_{1}B_{2}}\) & \(-4.163307i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{2}}^{B_{1}B_{2}B_{2}}\) & \(3.275396i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}}^{A_{1}A_{1}}\) & \(4.14,A_{A_{2}A_{3}}\) & \(4.306409\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(9.885315\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(0.101348\) & \(d_{\tilde{A}_{2}A_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(-21.263387i\) \\ \(d_{\tilde{A}_{2}A_{1}A_{1}A_{1}}^{A_{1}A_{1}A_{1}}\) & \(-10.330803\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{1}A_{1}A_{1}}\) & \(4.297233\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}A_{2}A_{1}}\) & \(8.107154i\) & \(d_{\tilde{A}_{2}A_{1}A_{2}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(-10.506465i\) \\ \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(9.317083i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}A_{1}}\) & \(0.053715i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}B_{1}A_{1}}\) & \(0.506014\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}B_{1}A_{1}}\) & \(-0.489746\) \\ \(d_{\tilde{A}_{2}A_{3}}^{A_{1}A_{1}A_{1}}\) & \(1.861592\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}A_{2}A_{3}}\) & \(-3.326245i\) & \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(0.864546i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(-12.443388i\) \\ \(d_{\tilde{A}_{2}A_{3}}^{A_{1}B_{1}B_{1}A_{1}}\) & \(-9.631518i\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(-18.635260\) & \(d_{\tilde{A}_{2}A_{3}A_{3}}^{A_{1}B_{1}A_{1}}\) & \(12.434414\) \begin{tabular}{l l|l l l|l l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{A_{2}A_{3}}^{B_{2}B_{2}A_{2}}\) & \(7.261601i\) & \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{2}A_{2}}\) & \(-5.130537i\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-6.909420i\) & \(d_{A_{2}A_{3}}^{A_{1}B_{2}B_{1}}\) & \(-3.802844\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(2.584552\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(4.341198\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-3.113817\) & \(d_{A_{2}A_{3}}^{B_{2}B_{1}}\) & \(0.564934i\) \\ \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(-4.950710\) & \(d_{A_{2}A_{3}}^{B_{1}B_{2}B_{1}}\) & \(3.656334\) & \(d_{A_{2}A_{3}}^{B_{2}B_{2}B_{1}}\) & \(-6.618281i\) & \(d_{A_{2}A_{3}}^{B_{2}B_{2}B_{1}}\) & \(-0.028820i\) \\ \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-0.881323\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(0.846532\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-0.817198\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(23.383080\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(23.833080\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}A_{1}}\) & \(-5.289105i\) & \(d_{A_{1}A_{3}}^{A_{1}A_{1}}\) & \(-18.179006i\) & \(d_{A_{2}A_{3}}^{A_{2}A_{1}}\) & \(-5.327637i\) \\ \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(-1.31764i\) & \(d_{A_{2}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(-2.242674i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{2}A_{1}}\) & \(-17.935628\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(3.419587\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(4.602203\) & \(d_{A_{1}A_{3}A_{4}}^{B_{1}B_{2}A_{1}}\) & \(-2.29842\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(0.588572\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-1.025799\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-21.803791\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(15.679982\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-4.802049i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(2.303905i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{3}}\) & \(19.052320i\) & \(d_{A_{1}A_{3}A_{4}}^{B_{1}B_{2}A_{1}}\) & \(12.176683i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-9.273023i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(2.102605i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-4.553713i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(2.88881i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(14.990716\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(2.593699\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(0.128028\) & \(d_{A_{1}A_{3}A_{4}}^{B_{1}B_{2}A_{1}}\) & \(2.829782\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(15.425514i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(1.570523i\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(10.969030i\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-1.772768i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{2}A_{1}}\) & \(-22.934480\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-3.221988\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-3.22198831\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(-0.649885\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-3.705079\) & \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(12.179211\) \\ \(d_{A_{1}A_{3}}^{A_{1}B_{2}A_{1}}\) & \(-1.8336328\) & \(d_{A_{1}A_{3}A_{4}}^{A_{1}B_{2}A_{1}}\) & \(10.550660\) & \(d_{A_{2}A_{3}}^{A_{2}A_{1}A_{1}}\) & \(15.211758i\) & \(d_{A_{1}A_{3}}^{A_{2}A_{1}A_{1}}\) & \(15.211758i\) \\ \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(0.048905i\) & \(d_{A_{1}A_{3}}^{A_{2}B_{2}A_{1}}\) & \(-1.823062i\) & \(d_{A_{1}A_{3}}^{B_{1}A_{2}A_{1}}\) & \(18.214205\) & \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(7.468998\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{1}A_{1}}\) & \(-2.515314\) & \(d_{A_{1}A_{3}}^{B_{1}B_{2}A_{1}}\) & \(2.660768\) & \(d_{A_{1}A_{3}}^{B_{1}A_{1}}\) & \(-11.706646\) & \(d_{A_{1}A_{3}}^{B_{2}A_{1}A_{1}}\) & \(8.615577\) \\ \(d_{A_{1}A_{3}}^{B_{1}B_{2}A \begin{table} \begin{tabular}{c c c c|c c|c} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{X_{1}A_{5}}^{2^{a}_{2a}{}_{1}B_{1}}\) & \(-3.722620i\) & \(d_{X_{1}A_{5}}^{2^{a}_{2a}{}_{1}B_{1}}\) & \(0.617500i\) & \(d_{X_{1}A_{5}}^{B_{1}B_{2}B_{1}}\) & \(0.516653\) & \(d_{X_{1}A_{5}}^{B_{1}B_{2}B_{1}}\) & \(-3.019806\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-10.563140\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-5.990870\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(10.435853i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-7.276010i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-0.651768i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-2.674596i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.749651\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(6.287915\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.384047i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(2.625411i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-4.277840\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(0.536499\) \\ \(d_{X_{1}A_{5}}^{A_{1}F_{9}B_{1}B_{1}}\) & \(21.433260\) & \(d_{X_{1}A_{5}}^{A_{1}F_{9}B_{1}B_{1}}\) & \(7.850363\) & \(d_{X_{1}A_{5}}^{F_{4}A_{2}F_{9}B_{1}B_{1}}\) & \(-14.183510i\) & \(d_{X_{1}A_{5}}^{F_{4}A_{2}F_{9}B_{1}}\) & \(11.728439i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(3.837635\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-1.905681\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(4.743222\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-9.209701i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(6.480651i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-8.388469i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(1.577033\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-2.104387\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(14.522511i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-13.968541\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(3.121188\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(1.915315\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(17.723036i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{1}B_{1}}\) & \(-7.27820\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(5.101785\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-0.238667i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(2.709350i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-6.902903\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(5.020059\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(9.123193\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-5.955356\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(3.262885\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(0.427355\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-3.013817i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-2.424010i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-0.090354\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-0.429239i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(1.043874i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(1.043874i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-0.093054\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{1}}\) & \(-0.429293i\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(1.043874i\) \\ \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(-8.421872\) & \(d_{X_{1}A_{5}}^{F_{9}B_{2}B_{2}}\) & \(4.221717i\) & \(d_{X_{1}A_{5}}^{F_{9}A_{1}A_{5}}\) & \(-18.805444\) & \(d_{X_{2}A_{1}X_{5}}^{F_{9}B_{2}B_{2}}\) & \(16.937868\) \\ \(d_{X_{2}X_{1}X_{2}X_{1}X}^{F_{9}B_{2}B_{2}}\) & \(17.323507\) & \(d_{X_{1}A_{5}}^{F_{4}B_{2}B_{2}}\) & \(-22.009487\) & \(d_{X_{2}X_{1}X_{1}}^{F_{9}B_{1}B_{1}}\) & \(-14.927812\) & \(d_{X_{2}X_{1}X_{1}}^{A_{1}B_{1}B_{1}}\) & \(-12.195754\) \\ \(d_{X_{2}X_{1}X_{2}}^{A_{1}F_{9}B_{2}}\) & \(18.625777\) & \(d_{X_{2}X_{1}X_{2}}^{A_{1}F_{9}B_{2}}\) & \(-26.731892\) & \(d_{X_{2}X_{1}X_{1}}^{F_{9}B_{2}} \begin{tabular}{l l|l l l|l l|l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 99.206241 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)4.42396 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)38.836926 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)63.207151 \\ \(d_{\Gamma TXX}^{T_{1g}T_{2g}E_{u}E_{u}}\) & 48.296988 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 68.564904 & \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 2.482266 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 265.657900 \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)35.653872 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)23.118671 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)26.203961 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)7.736050 \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)92.646643 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)79.175409 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)84.582631 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & \(-\)148.29040 \\ \(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}E_{u}}\) & 23.216567 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 22.665277 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & \(-\)7.175182 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{2g}T_{2g}E_{u}E_{u}}\) & 103.692950 \\ \(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 61.482169 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 126.381807 & \({}^{2}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 149.537853 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}T_{1g}E_{u}E_{u}}\) & 475.631713 \\ \(d_{\Gamma TXX}^{T_{2g}A_{1g}A_{1g}E_{g}}\) & \(-\)23.050621 & \(d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}A_{2u}}\) & \(-\)71.635857 & \({}^{3}\)\(d_{\Gamma TXX}^{T_{1g}A_{1g}A_{1g}A_{2u}}^{T_{1g}A_{1g}A_{2u}}\) & \(-\)22.385103 \\ \(d_{\Gamma X_{2}X_{1X}}^{T_{2g}A_{1g}E_{g}}\) & 20.072740 & \({}^{1}\)\(d_{\Gamma TXX}^{T_{2g}A_{1g}E_{g}}\) & 20.097059 & \begin{tabular}{l l|l l l|l l|l} \hline \hline Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value & Derivative & Value \\ \hline \({}^{2}d^{F_{E_{E}}g_{E_{E}}E_{u}}_{XXXX}\) & \(-107.748049\) & \(d^{E_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-46.230724\) & \({}^{1}d^{F_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-17.671206\) & \({}^{2}d^{F_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(27.428538\) \\ \(d^{E_{E_{E}}^{\prime}E_{E}}_{XXXX}\) & \(67.791357\) & \({}^{1}d^{F_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-16.515757\) & \({}^{2}d^{F_{E_{E}}^{\prime}E_{u}}_{XXXX}\) & \(-163.498603\) & \(d^{A_{E_{E_{E}}^{\prime}}^{\prime}E_{u}}_{XXXX}\) & \(-163.498603\) & \(d^{A_{E_{E_{E}}^{\prime}}^{\prime}E_{u}}_{XXXX}\) \\ \(d^{B_{1}B_{1,2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXX}\) & \({}^{2a_{2}}_{XXXXXX}\) &
2308.13090
Inter-species spin-noise correlations in hot atomic vapors
We report an experimental and theoretical study of spin noise correlations in a $^{87}$Rb-$^{133}$Cs unpolarized alkali-metal vapor dominated by spin-exchange collisions. We observe strong unequal-time inter-species correlations and account for these with a first-principles theoretical model. Since the two atomic species have different spin precession frequencies, the dual-species vapor enables the use of an additional experimental handle, the applied magnetic field, for untangling various sub-types of spin correlations. In particular, the measured cross-correlation and auto-correlation spectra shed light on a number of spin-dynamic effects involving intra-atom, inter-atom, intra-species and inter-species correlations. Cross-correlation coefficients exceeding $60\%$ have been observed at low-magnetic fields, where the two spin species couple strongly via spin-exchange collisions. The understanding of such spontaneously generated correlations can motivate the design of quantum-enhanced measurements with single or multi-species spin-polarized alkali-metal vapors used in quantum sensing applications.
K. Mouloudakis, F. Vouzinas, A. Margaritakis, A. Koutsimpela, G. Mouloudakis, V. Koutrouli, M. Skotiniotis, G. P. Tsironis, M. Loulakis, M. W. Mitchell, G. Vasilakis, I. K. Kominis
2023-08-24T21:23:13Z
http://arxiv.org/abs/2308.13090v2
# Inter-species spin-noise correlations in hot atomic vapors ###### Abstract We report an experimental and theoretical study of spin noise correlations in a \({}^{87}\)Rb-\({}^{133}\)Cs unpolarized alkali-metal vapor dominated by spin-exchange collisions. We observe strong unequal-time inter-species correlations and account for these with a first-principles theoretical model. Since the two atomic species have different spin precession frequencies, the dual-species vapor enables the use of an additional experimental handle, the applied magnetic field, for untangling various subtypes of spin correlations. In particular, the measured cross-correlation and auto-correlation spectra shed light on a number of spin-dynamic effects involving intra-atom, inter-atom, intra-species and inter-species correlations. Cross-correlation coefficients exceeding 60% have been observed at low-magnetic fields, where the two spin species couple strongly via spin-exchange collisions. The understanding of such spontaneously generated correlations can motivate the design of quantum-enhanced measurements with single or multi-species spin-polarized alkali-metal vapors used in quantum sensing applications. ## I Introduction Quantum measurements are at the basis of quantum technologies, from atomic magnetometers [1], atomic clocks [2] and quantum optical measurements [3], to quantum computers [4], quantum simulators [5; 6], and even the detection of gravitational waves [7; 8]. Quantum uncertainty and its dynamic manifestation imposes limitations to the precision of quantum measurements [9]. Thus a major effort of modern quantum technology has been to engineer quantum noise, for example by generating squeezed states of light or atoms, in order to surpass what are broadly known as the standard quantum limits to measurement precision [10]. These are limits usually applied to many-body systems, following under the working assumption of the particles being in separable quantum states, i.e. sharing no correlations. Yet, correlations are at the heart of the second quantum revolution [11]. Clearly, engineering quantum noise towards advancing the capabilities of quantum technology requires a profound understanding of the physics of noise, taking correlations into account [12]. Here we unravel multifaceted spin correlations spontaneously generated in a dual-species hot alkali-metal vapor. Hot atomic vapors are instrumental in quantum sensing of magnetic fields [13], as well as in atomic vapor clocks, while noble gas ensembles have been recently used for implementing quantum information protocols [14; 15; 16], including quantum memories [17]. Moreover, numerous quantum systems can be mapped to spin, hence spin-noise studies in atomic vapors find direct analogies to other quantum technologies [18; 19]. We present a precision spin-noise measurement in an unpolarized \({}^{87}\)Rb-\({}^{133}\)Cs alkali-metal ensemble dominated by spin-exchange interactions. We demonstrate strong inter-species correlations at low magnetic fields, which fade away at increasing magnetic fields, as the two atomic species have different gyromagnetic ratios. The developed theoretical framework takes advantage of the long-standing physical description of spin-exchange collisions [20], and leads to a formal description of spin-noise correlations, showing excellent agreement with the measurements. In particular, the magnetic-field dependence of the correlations in conjunction with the theoretical framework allows to resolve intra-atom from inter-atom correlations, and for each type discern intra-hyperfine from inter-hyperfine correlations. This work has direct ramifications for understanding quantum limits to sensing technologies. This is because the observed correlations are created spontaneously by the ubiquitous spin-exchange collisions. Hence, what might have been understood as a vapor consisting of un correlated atoms _in the absence_ of external perturbations, is actually an atomic vapor rich in correlations. These directly influence spin-noise benchmarks against which any noise engineering protocols which _do involve_ external perturbations have to be compared. Moreover, while this measurement is performed with unpolarized atomic vapors, the developed theoretical framework allows us to extrapolate to polarized vapors pertinent to quantum sensing applications. In this regime we expect strong inter-species quantum correlations, which could further advance quantum metrology with multi-species hot atomic vapors. The structure of the paper is the following. In the next section we provide a detailed perspective of this work in the context of previous work on quantum sensing with hot atomic vapors and spin-noise spectroscopy. In Sec. III we describe the experimental setup and define the measured observables. In Sec. IV we present a comprehensive theoretical analysis of spin-noise correlations. We then analyze the data in Sec. V, present the main results in Sec. VI, and conclude with Sec. VII. Technical derivations are left for the appendices A-E. ## II This work in the context of previous work Composite quantum systems, like a collection of interacting atoms or molecules and their interface with light, have been widely used as a realization of quantum sensing technologies [21; 22; 23; 24; 25]. In particular, hot alkali-metal vapors form the core quantum system in optical magnetometry [26; 27; 28; 29; 30], comagnetometry [31; 32; 33; 34], magnetic field gradiometry [35; 36], frequency standards [37] and quantum communications [17]. The long coherence times in combination with technical advantages, like optical accessibility with commercially available resonant laser light, and the high reliability of accommodating experimental setups, render hot alkali vapors favorable in many quantum sensing applications, including magnetoencephalography [38; 39], time keeping [40], inertial sensing [41; 42] and imaging (THz imaging, biomagnetic imaging) [43; 44]. Spin-exchange (SE) collisions, deriving from the Pauli exchange interaction during binary atomic encounters, are central in the physics of hot alkali vapors. The early understanding of SE collisions [45; 46] accounted for several experimental observables [47]. Comprehension of more subtle aspects of SE [48; 49; 50] led to the development of SERF (spin-exchange-relaxation-free) magnetometers [51; 52; 53], SE optically-pumped hyperpolarized noble gases utilized in medical applications [54; 55; 56], and hydrogen magnetometry [57; 58; 59]. Rarely did the concept of correlations appear in these works, due to the intuitive expectation that, in hot vapors dominated by random binary spin-dependent collisions, correlations can be hardly sustained for meaningful time-scales [60]. However, several works have recently made such a counter-intuitive case on the non-trivial role of correlations, pointing to the possibility of performing quantum-enhanced measurements with hot vapors [61; 62; 63; 64; 65; 66; 67; 68; 69; 70]. ### Spin-noise spectroscopy In such studies, unpolarized alkali-metal vapors are rather useful, because unlike spin-polarized states, they are not sensitive to technical (e.g. magnetic) noise, hence intrinsic spin fluctuations can be readily measured. Since correlations and fluctuations are intimately related, as will be elaborated in detail in this work, unpolarized vapors are a natural testbed for studying atomic correlations. In more detail, unpolarized vapors provide easy access to the spontaneous fluctuations of atomic spin driven by atomic collisions. The field studying such fluctuations, spin-noise spectroscopy (SNS) [71; 72; 73; 74; 75; 76; 77; 78; 79], is interesting in its own right, as spin noise reveals spectroscopic information about the atomic [80] or even the solid-state system under consideration [81; 82; 83] in a non-perturbing way. Further, the study of spin noise in atomic vapors has elucidated quantum-non-demolition measurements [61; 67; 84; 85], effects of atomic diffusion [86; 87; 88], noise studies at low-fields [89; 90], optical spin-noise amplification [91], SNR enhancement by squeezed-light [92], non-equilibrium SNS [93] and spin-alignment noise in a \({}^{133}\)Cs vapor [94]. With respect to this work, however, the study of spontaneous spin noise in unpolarized vapors offers two additional advantages. First, it allows us to extrapolate the underlying physics to spin-projection noise in spin-polarized vapors [95], and thus helps understand and define benchmarks against which any quantum sensing enhancements are compared with. Second, by using unpolarized vapors, in particular, a dual-species vapor, one can untangle various sorts of spin correlations, since they leave a clear signature in measurable spectral distributions of spin-noise power [96]. Overlapping (dual-species) alkali-metal ensembles have been explored a while ago in the context of hybrid optical pumping [97; 98], specifically addressing the deterministic spin dynamics at non-zero spin-polarizations. More recently, two works [99; 100] studied spin-noise correlations and spin-noise transfer between two alkali species in an unpolarized vapor, arriving at conflicting results, while the authors in [101] studied the strong SE-coupling of two alkali-metal species in a polarized vapor. Before summarizing the results of this work, we introduce some basic notions regarding atomic spin correlations. ### Atomic Correlations When it comes to correlations, one needs to define (I) the correlated degrees of freedom, (II) their dynamic as pect, (III) the various sub-types, and (IV) their quantum/classical character [12]. #### ii.2.1 Correlated degrees of freedom As we will show in detail in Section IV, what is experimentally accessible is the collective spin correlator \[\langle\hat{\mathcal{F}}(t+\tau)\hat{\mathcal{F}}(t)\rangle \tag{1}\] where \(\hat{\mathcal{F}}(t)=\sum_{i=1}^{N_{\text{at}}}\hat{\mathfrak{f}}^{(i)}(t)\) is the collective spin of the ensemble along the direction of the laser beam, probing in total \(N_{\text{at}}\) atoms, with \(\hat{\mathfrak{f}}^{(i)}(t)\) being derived from the total spin of the \(i\)-th atom, \(\hat{\mathfrak{s}}+\hat{\mathfrak{I}}\), where \(\hat{\mathfrak{s}}\) is the electronic and \(\hat{\mathfrak{I}}\) the nuclear spin. #### ii.2.2 Dynamic aspect of correlations We distinguish equal-time (\(\tau=0\)) from unequal-time correlations (\(\tau\neq 0\)), where \(\tau\) is the time delay between the correlated observables. The former reflect total spin-noise power readily derived from the Wiener-Khinchin theorem, which relates the power spectral density of a signal with its correlation function. The total spin-noise power is obtained by the integral over frequencies of the power spectral density. #### ii.2.3 Sub-types of correlations The survival of correlations spontaneously building up by SE collisions in a hot rubidium vapor was studied in [68], where it was theoretically shown that equal-time inter-atom correlations in specific states can be sustained amidst sequential SE collisions of the correlated partners with other atoms. In [96] the authors studied unequal-time correlations generated by the dynamics of binary SE collisions at thermal equilibrium pertinent to spin-noise experiments with unpolarized vapors. The authors considered several kinds of correlations, schematically depicted in Fig. 1. In particular, the correlator (1) is composed of two terms \(\sum_{j}\left(\hat{\mathfrak{f}}^{(j)}(t+\tau)\hat{\mathfrak{f}}^{(j)}(t)\right)\) and \(\sum_{i\neq j}\left\langle\hat{\mathfrak{f}}^{(i)}(t+\tau)\hat{\mathfrak{f}}^{ (j)}(t)\right\rangle\), describing intra- and inter-atom correlations, respectively. For each such kind one can further distinguish intra-hyperfine from inter-hyperfine correlations. To describe those, the total single-atom spin operators are written as \(\hat{f}^{(i)}_{\alpha}(t)\), where \(\hat{f}_{\alpha}\) denotes the projection of the total atom's spin, \(\hat{\mathfrak{s}}+\hat{\mathfrak{I}}\), onto the hyperfine manifold \(\alpha\), with \(\alpha\in\{a,b\}\) denoting either the upper (\(a\equiv I+1/2\)) or the lower (\(b\equiv I-1/2\)) ground-state hyperfine manifold. Thus, as shown in Fig. 1, we resolve the original correlator in four kinds of correlations: (a) intra-atom and intra- or inter-hyperfine correlations, and (b) inter-atom and intra- or inter-hyperfine correlations. In this work we bring into the picture an additional type, the inter-species correlations. In a single-species vapor the inter-atom correlations always contribute to the spin-noise spectrum simultaneously with the intra-atom correlations, and the two are hardly distinguishable. _Here comes the usefulness of a dual-species vapor, in which the inter-species inter-atom correlations can be experimentally distinguished and thus shed light on inter-atom correlations even for the single-species case_. #### ii.2.4 Character of correlations The quantum/classical character of the correlations is a subtle issue which appears to have a parametric dependence, the parameter being the vapor's spin-polarization. An example, drawn from quantum information science, demonstrating a parametric dependence of correlations is the two-qubit Werner state \(\rho=\frac{1}{4}(\mathbbm{1}-\alpha\hat{\mathbf{\sigma}}\otimes\hat{\mathbf{\sigma}})\), where \(\hat{\mathbf{\sigma}}\otimes\hat{\mathbf{\sigma}}=\hat{\sigma}_{x}\otimes\hat{\sigma} _{x}+\hat{\sigma}_{y}\otimes\hat{\sigma}_{y}+\hat{\sigma}_{z}\otimes\hat{ \sigma}_{z}\), \(\mathbbm{1}\) is the identity and \(\hat{\sigma}_{i}\), \(i\in\{x,y,z\}\) are the Pauli operators [102; 103]. Since the term \(\hat{\mathbf{\sigma}}\otimes\hat{\mathbf{\sigma}}\) is traceless, the spin populations of both qubits in each of the two states along the quantization axis (e.g. the states with \(\sigma_{z}=\pm 1\)) are \(1/2\). Thus the average spin is zero. Yet the state \(\rho\) exhibits correlations for any \(\alpha\neq 0\), since \(\langle\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}\rangle=-\alpha\). In particular, the correlations are quantum and violate the CHSH inequality for \(\alpha>1/\sqrt{2}\). Thus, the character of the correlations of this 2-qubit state depends on the parameter \(\alpha\). In Sec. VI.3 we will further comment on this point in regard to this work. ### Summary of the results of this work In this work we undertake (A) an experimental and (B) a formal theoretical study of spin-noise correlations in a \({}^{87}\)Rb-\({}^{133}\)Cs hot vapor. (A1) We unambiguously demonstrate the existence of spontaneously generated unequal-time inter-species spin-noise correlations, driven by spin-exchange collisions. (A2) Under certain conditions, i.e. finite measurement bandwidth, we also demonstrate the existence of inter-species correlations reflected in a total spin-noise power different from what would be obtained were the atoms uncorrelated. (B1) Using the full density matrix description of spin-exchange and spin-relaxation dynamics, we develop a first-principles theoretical framework that captures the subtle physics of all four sub-types of correlations and resolves inconsistencies of previous works [99; 100]. (B2) We draw qualitative conclusions about the character of inter-species spin-noise correlations in spin-polarized dual-species vapors. ## III Experimental setup, raw data and observables In this section we describe the experimental setup, and define the experimental observables to be used in the theoretical treatment of the following section. ### Experimental setup The experimental setup is shown in Fig. 2. A cylindrical cell of diameter \(12.7\,\mathrm{mm}\) and length \(47\,\mathrm{mm}\), with anti-reflection coated windows, contains 330 Torr of N\({}_{2}\) buffer gas and a metallic droplet of \({}^{133}\)Cs and \({}^{87}\)Rb in molar ratio 28:72. The cell resides in a ceramic oven heated with \(200\,\mathrm{kHz}\) current to \(160\,\mathrm{\SIUnitSymbolCelsius}\), resulting in number densities \(n_{\mathrm{C_{f}}}\approx n_{\mathrm{Rb}}\approx 10^{14}\,\mathrm{cm}^{-3}\). In the presence of nitrogen the optical linewidths are pressure-broadened (FWHM) to \(7.34\) GHz for \({}^{133}\)Cs and \(7.08\) GHz for \({}^{87}\)Rb, therefore the two ground-state hyperfine levels are moderately resolved. The collective atomic spins of the alkali-metal species are probed by two single-mode external cavity diode lasers (Toptica DL-780 pro and DL-850 pro) with their wavelengths blue-detuned from the D\({}_{2}\) transition by several tens of GHz. Both linearly polarized laser beams enter in the same single-mode optical fiber producing a two-color Gaussian beam with diameter \(\sim 2\) mm (\(1/e^{2}\) intensity width) at the position of the cell (with a small difference in the two colors of less than 10% due to diffraction). The vapor cell is placed inside a 4-layer mu-metal shield, protecting the atomic spins from ambient magnetic fields. Within the shields, a coil system generates a DC magnetic field \(\mathbf{B}=(0,0,B)\) transverse to the beam propagation axis, ranging from 4 mG to 92 mG, with the corresponding spin precession frequencies reaching several tens of kHz. At the exit of the cell the two-color beam is incident on a dichroic mirror, which directs each color to a separate balanced polarimeter, the two polarimeters being otherwise identical. The use of 780 nm and 852 nm filters further reduces light leakage from one wavelength to the other, resulting in negligible cross-talk in the polarization detection of the two beams. Each balanced photodetector has detection efficiency \(\eta\). The cesium and rubidium polarimeter signals are acquired and analyzed in the frequency domain by feeding them into a spectrum analyzer (Stanford Research Systems SR780 Dynamic Signal Analyzer), which provides the cross-correlation power spectral density and the two single-channel auto-correlation power spectra. The frequency resolution of the FFT is 800 bins, with a sampling rate of 262 kHz. A Hann window is applied and an analog anti-aliasing filter applied before digitization eliminates all frequency components above 102.4 kHz. ### Auto-correlation and cross-correlation observables Each balanced photodetector receives a continuous optical signal and generates a corresponding photo-current, which we denote by \(i_{\beta}(t)\), with \(\beta\in\{\mathrm{Rb},\mathrm{Cs}\}\). Spin noise (SN) and photon-shot-noise render \(i_{\beta}(t)\) a classical and real stochastic process. We record the signals for long enough time such that both spin-noise and photon-shot-noise have stationary statistical moments. We are interested in the cross-correlation between the two balanced photodetector signals: \[C_{\mathrm{Rb},\mathrm{Cs}}(\tau)\equiv\langle\langle i_{\mathrm{Rb}}(t+\tau )i_{\mathrm{Cs}}(t)\rangle\rangle, \tag{2}\] where \(\langle\langle\cdot\rangle\rangle\) denotes the average over all possible realizations of the stochastic measurement outcomes. Stationarity implies that the correlation does not depend on the absolute time \(t\). The quantum-mechanical operator associated with the measured photocurrent is: \[\begin{split}\hat{I}_{\beta}(t)&=q_{e}\eta_{\beta} \left(\hat{a}^{\dagger}_{k,+45}\hat{a}_{k,+45}-\hat{a}^{\dagger}_{k,-45}\hat{a} _{k,-45}\right)_{\beta}\\ &=2q_{e}\eta_{\beta}\hat{\mathcal{S}}_{2,\beta}(t),\end{split} \tag{3}\] where \(\hat{a}_{k,\ell}\equiv\hat{a}_{k,\ell}(t)\) and \(\hat{a}^{\dagger}_{k,\ell}\equiv\hat{a}^{\dagger}_{k,\ell}(t)\) are respectively the travelling-wave annihilation and creation operators for the mode \(k\) and polarization \(\ell\), normalized so that \(\hat{a}^{\dagger}_{k,\ell}\hat{a}_{k,\ell}\) is the photon flux in the respective mode [104]. The index \(k\) labels the longitudinal as well as the spatial mode, the latter being defined by the spatial mode of the probe beam (Gaussian TEM\({}_{00}\) in the experiment). The \(\pm 45^{\circ}\) linear polarizations are characterized with respect to the polarization direction of the probe beam before the interaction with the atomic medium, chosen to be along the \(\mathbf{\hat{y}}\)-direction. Finally, \(\hat{\mathcal{S}}_{2}(t)\) is the optical Stokes operator given by: \[\hat{\mathcal{S}}_{2}(t) =\frac{\hat{a}^{\dagger}_{k,+45}\hat{a}_{k,+45}-\hat{a}^{\dagger }_{k,-45}\hat{a}_{k,-45}}{2}\] \[=\frac{\hat{a}^{\dagger}_{k,y}\hat{a}_{k,z}+\hat{a}^{\dagger}_{k,z}\hat{a}_{k,y}}{2}, \tag{4}\] Figure 2: Experimental setup for the measurement of \({}^{87}\)Rb-\({}^{133}\)Cs spin-noise correlations. Two single-mode external cavity diode lasers, blue-detuned from the corresponding atomic resonances, are combined in a single-mode optical fiber and directed towards the vapor cell, held inside a 4-layer mu-metal magnetic shield. A dichroic mirror after the cell is used to spatially separate the two overlapping wavelengths exiting the atomic medium, while optical filters further prevent leakage of the unwanted light to the detectors. Two identical balanced polarimeters comprised of a half-waveplate, a polarizing beam splitter cube, and a balanced photodetector provide the signals feeding the data acquisition system. Both wavelengths were monitored within 10 MHz resolution using a Fizeau wavelength-meter (High Finesse WS7). Single-mode operation was additionally monitored by a second scanning Fabry-Perot interferometer. where the coordinate system is defined in Fig. 2. The operators \(\hat{\mathcal{S}}_{2,\beta}\) refer to the Stokes operator \(\hat{\mathcal{S}}_{2}\) for each of the two wavelenghts probing atom species \(\beta\). The cross-correlation can be therefore expressed as [9; 105]: \[C_{\text{Rb,Cs}}(\tau) =\langle:\hat{f}_{\text{Rb}}(t+\tau)\hat{I}_{\text{Cs}}(t):\rangle \tag{5}\] \[=4q_{e}^{2}\eta_{\text{Rb}}\eta_{\text{Cs}}\langle:\hat{\mathcal{ S}}_{2,\text{Rb}}(t+\tau)\hat{\mathcal{S}}_{2,\text{Cs}}(t):\rangle,\] where \(::\) denotes normal ordering. In the above equation, the quantum operators are written in the Heisenberg picture and the single brackets \(\langle\cdot\rangle\) denote quantum mechanical expectation value. Taking into account the bosonic commutation relations [9; 105], and considering that the off-resonance interaction with the atomic vapor is to a very good approximation a linear optical process and thus does not mix annihilation and creation operators, we can make use of the unequal-time commutation relations: \[\Big{[}\hat{a}^{\dagger}_{k,\ell}(t),\hat{a}^{\dagger}_{k^{\prime},\ell^{ \prime}}(t^{\prime})\Big{]}=[\hat{a}_{k,\ell}(t),\hat{a}_{k^{\prime},\ell^{ \prime}}(t^{\prime})]=0, \tag{6}\] and show \(\langle:\hat{I}_{\text{Rb}}(t+\tau)\hat{I}_{\text{Cs}}(t):\rangle=\langle: \hat{I}_{\text{Cs}}(t)\hat{I}_{\text{Rb}}(t+\tau):\rangle\). In order to emphasize that the function is invariant under the preceding exchange of operators we express Eq.(5) in the symmetrized form [105]: \[C_{\text{Rb,Cs}}(\tau) =2q_{e}^{2}\eta_{\text{Rb}}\eta_{\text{Cs}}\Big{[}\langle\hat{ \mathcal{S}}_{2,\text{Rb}}(t+\tau)\hat{\mathcal{S}}_{2,\text{Cs}}(t)\rangle\] \[+\langle\hat{\mathcal{S}}_{2,\text{Cs}}(t)\hat{\mathcal{S}}_{2, \text{Rb}}(t+\tau)\rangle\Big{]}. \tag{7}\] The cross-spectral density function is the Fourier transform of the cross-correlation: \[\tilde{C}_{\text{Rb,Cs}}(\nu)=\int_{-\infty}^{\infty}C_{\text{Rb,Cs}}(\tau)e^ {-i2\pi\nu\tau}d\,\tau. \tag{8}\] The spectrum analyzer estimates the power spectral density (PSD) by performing a finite Fourier transform for a specified record time \(T\) and averages the product of the Fourier components over the different realizations of the stochastic processes [106]: \[\tilde{C}^{\text{SA}}_{\text{Rb,Cs}}(\nu)=\] \[\frac{1}{T}\langle\langle\left(\int_{0}^{T}i_{\text{Rb}}(t)e^{-i 2\pi\nu t}\,dt\right)^{*}\int_{0}^{T}i_{\text{Cs}}(s)e^{-i2\pi\nu s}\,ds \rangle\rangle\] \[=\int_{-T}^{T}C_{\text{Rb,Cs}}(\tau)e^{-i2\pi\nu\tau}\left(1- \frac{|\tau|}{T}\,d\tau\right). \tag{9}\] To derive the second line, the variables of integration were changed from \((t,s)\) to \((t,\tau=t-s)\) and the region of integration was modified accordingly [107]. Thus, the spectrum analyzer performs an exact evaluation of the spectral density (Eq. (8)) in the limit of infinite long sample realization \(T\rightarrow+\infty\). In practice, for an accurate approximation, it is sufficient to choose the sample length to be much larger than the time difference \(\tau\) at which the correlation function becomes negligible. Similarly, single-channel auto-correlation functions and their corresponding power spectral densities are defined as \(C_{\beta,\beta}(\tau)=\langle\langle i_{\beta}(t)i_{\beta}(t+\tau)\rangle\rangle\) and \(\tilde{C}_{\beta,\beta}(\nu)=\int_{-\infty}^{\infty}C_{\beta,\beta}(\tau)e^{-i 2\pi\nu\tau}\,d\tau\), respectively, with \(\beta\in\{\text{Rb,Cs}\}\). As before, the auto-correlation function can be expressed in the symmetrized form: \[C_{\beta,\beta}(\tau) =2q_{e}^{2}\eta_{\beta}\times\] \[\left(\langle\hat{\mathcal{S}}_{2,\beta}(t)\hat{\mathcal{S}}_{2, \beta}(t+\tau)\rangle+\langle\hat{\mathcal{S}}_{2,\beta}(t+\tau)\hat{\mathcal{ S}}_{2,\beta}(t)\rangle\right). \tag{10}\] ## IV Theoretical derivation of cross- and auto-correlation spectra The theoretical description of the cross correlations is based on treating the atom-light coupling, and the atomic state evolution due to the atom's hyperfine structure hamiltonian, atom-atom spin-exchange collisions, and a number of additional spin relaxation phenomena. Such topics have been described in detail elsewhere [108; 109; 110; 111], and we recapitulate the description for the sake of completeness in Appendix A. In the following two subsections we present the results of extending such treatments to the case of the two-species atomic vapor treated herein. ### Atom-light coupling Based on the light-atom interaction dynamics, it follows that for probe light linearly polarized in the \(y\)-direction, the solution of the Heisenberg equation of motion for the Stokes operator yields in the small-angle approximation[112; 113] \[\hat{\mathcal{S}}_{2}^{\text{(out)}}(t)\approx\hat{\mathcal{S}}_{2}^{\text{(in) }}(t)+\frac{1}{2}\Phi\bar{g}\hat{\mathcal{F}}(t), \tag{11}\] where \(\Phi\) is the photon-flux (photons per unit time). The out (in) superscripts denote the operator after (before) interacting with the atomic sample, \(\bar{g}=(|g_{a}|+|g_{b}|)/2\) is the mean coupling constant to the two ground-state hyperfine manifolds and \(\hat{\mathcal{F}}(t)=\sum_{i=1}^{N_{a}}\hat{t}_{x}^{(i)}(t)\) is the probed collective atomic spin with \[\hat{t}_{x}^{(i)}(t)=\frac{g_{a}}{\bar{g}}\hat{f}_{a,x}^{(i)}(t)-\frac{g_{b}}{ \bar{g}}\hat{f}_{b,x}^{(i)}(t) \tag{12}\] Here \(\hat{f}_{a,x}^{(i)}\) and \(\hat{f}_{b,x}^{(i)}\) are the projections of the \(x\)-component of the total spin of the \(i\)-th atom, \(\hat{s}_{x}+\hat{I}_{x}\), onto the upper (\(a=I+1/2\)) and lower (\(b=I-1/2\)) hyperfine multiplet, respectively, with the corresponding coupling constants being \(g_{a}\) and \(g_{b}\) (see Appendix A, where we also generalize Eq. (12) to include a non-uniform intensity distribution of the light). The in \(\rightarrow\) out change of \(\hat{\mathcal{S}}_{2}\) described by Eq. (11) reflects polarization rotation of light (Faraday rotation) at an angle \(\phi_{\text{PR}}=\frac{1}{2}\bar{g}\,\langle\hat{\mathcal{F}}\rangle\). Thus, Eq. (11) connects the Stokes operator \(\mathcal{S}_{2}(t)\) to the underlying spin operator \(\hat{\mathcal{F}}(t)\). Using Eq. (7), Eq. (10), and Eq. (12), we arrive at the following expressions connecting the measured correlations (cross-correlation and auto-correlations) to the underlying collective spin correlators: \[C_{\mathrm{Rb},\mathrm{Cs}}(\tau) =\frac{q_{e}^{2}\eta_{\mathrm{Rb}}\eta_{\mathrm{Cs}}\Phi_{\mathrm{ Rb}}\Phi_{\mathrm{Cs}}\bar{g}_{\mathrm{Rb}}\bar{g}_{\mathrm{Cs}}}{2}\times\] \[\langle\hat{\mathcal{F}}_{\mathrm{Rb}}(t+\tau)\hat{\mathcal{F}}_ {\mathrm{Cs}}(t)+\hat{\mathcal{F}}_{\mathrm{Cs}}(t)\hat{\mathcal{F}}_{ \mathrm{Rb}}(t+\tau)\rangle, \tag{13}\] \[C_{\beta,\beta}(\tau) =2q_{e}^{2}\eta_{\beta}\Phi_{\beta}\Big{[}\delta(\tau)\] \[+\frac{1}{2}\eta_{\beta}\Phi_{\beta}\bar{g}_{\beta}^{2}\langle \hat{\mathcal{F}}_{\beta}(t+\tau)\hat{\mathcal{F}}_{\beta}(t)\rangle\Big{]}, \tag{14}\] where \(\beta\in\{\mathrm{Rb},\mathrm{Cs}\}\). We note that we have taken into account that the polarization properties of the two beams before the interaction with the atoms are uncorrelated, i.e. \(\langle\hat{\mathcal{S}}_{2,\mathrm{Rb}}^{(\mathrm{in})}(t)\hat{\mathcal{S}}_ {2,\mathrm{Cs}}^{(\mathrm{in})}(t^{\prime})\rangle=0\). ### Dynamics of mean spin By considering the various processes that affect the atomic spin evolution (see Appendix B), we arrive at two coupled density matrix evolution equations for Rb and Cs: \[\frac{d}{dt}\rho_{\mathrm{Rb}} =A_{\mathrm{Rb}}\hat{\mathbf{I}}_{\mathrm{Rb}}\cdot\hat{\mathbf{ s}}_{\mathrm{Rb}}+g_{s}\mu_{B}\hat{\mathbf{s}}_{\mathrm{Rb}}\cdot\mathbf{B}+R \left(\phi_{\mathrm{Rb}}-\rho_{\mathrm{Rb}}\right)\] \[+R_{\mathrm{se}}^{\mathrm{Rb,Rb}}\left\{\phi_{\mathrm{Rb}}\left( 1+4\langle\hat{\mathbf{s}}_{\mathrm{Rb}}\rangle\cdot\hat{\mathbf{s}}_{ \mathrm{Rb}}\right)-\rho_{\mathrm{Rb}}\right\} \tag{15}\] \[\frac{d}{dt}\rho_{\mathrm{Cs}} =A_{\mathrm{Cs}}\hat{\mathbf{I}}_{\mathrm{Cs}}\cdot\hat{\mathbf{ s}}_{\mathrm{Cs}}+g_{s}\mu_{B}\hat{\mathbf{s}}_{\mathrm{Cs}}\cdot\mathbf{B}+R \left(\phi_{\mathrm{Cs}}-\rho_{\mathrm{Cs}}\right)\] \[+R_{\mathrm{se}}^{\mathrm{Cs,Cs}}\left\{\phi_{\mathrm{Cs}}\left( 1+4\langle\hat{\mathbf{s}}_{\mathrm{Cs}}\rangle\cdot\hat{\mathbf{s}}_{\mathrm{ Cs}}\right)-\rho_{\mathrm{Cs}}\right\}\] \[+R_{\mathrm{se}}^{\mathrm{Cs,Rb}}\left\{\phi_{\mathrm{Cs}}\left( 1+4\langle\hat{\mathbf{s}}_{\mathrm{Rb}}\rangle\cdot\hat{\mathbf{s}}_{ \mathrm{Cs}}\right)-\rho_{\mathrm{Cs}}\right\} \tag{16}\] where \(A_{\beta}\) is the hyperfine coupling, the rate \(R\) includes all the relaxation processes, other than the spin-exchange relaxation, that destroy electron polarization without affecting the nucleus, \(\phi_{\beta}\) is the atom's density matrix with zero electronic polarization, and \(R_{\mathrm{se}}^{\beta,\gamma}\) is the spin-exchange rate transferring spin-polarization from atomic species \(\gamma\) to species \(\beta\), with \(\beta,\gamma\in\{\mathrm{Rb},\mathrm{Cs}\}\). The diffusion of atoms out of the probe beam must also be included in these dynamics (see Appendix B). Multiplying both sides of Eqs. (15) and (16) by the spin operator \(\hat{f}_{i}\) (here \(i\) is a general index for identifying the atomic species, the hyperfine manifold and the Cartesian component) and taking the trace, the dynamics of \(\langle\hat{f}_{i}\rangle\) are determined and a closed system of equations can be derived for the time evolution \(\langle\hat{f}_{i}\rangle\). Like in [49; 101; 96], we define the vector \(\mathbf{\hat{X}}(t)\) for the transverse collective-spin components of each species and each hyperfine level: \[\mathbf{\hat{X}}(t)\equiv\left[\hat{f}_{a,x}^{\mathrm{Rb}},\hat{f}_{a,y}^{ \mathrm{Rb}},\hat{f}_{b,x}^{\mathrm{Rb}},\hat{f}_{b,y}^{\mathrm{Cs}},\hat{f}_{a,x}^{\mathrm{Cs}},\hat{f}_{a,y}^{\mathrm{Cs}},\hat{f}_{b,x}^{\mathrm{Cs}},\hat{ f}_{b,y}^{\mathrm{Cs}}\right]^{\intercal}, \tag{17}\] where \(\hat{f}_{a,q}^{\beta}\) refers to the total atomic spin of species \(\beta\in\{\mathrm{Rb},\mathrm{Cs}\}\) along the \(q\)-axis with \(q\in\{x,y\}\), and in the hyperfine state \(\alpha\in\{a,b\}\). The density matrix evolution Eqs. (15) and (16) contain nonlinear terms proportional to \(\langle\hat{\mathbf{s}}_{\beta}\rangle\cdot\hat{\mathbf{s}}_{\gamma}\) associated with the SE interaction; however, for noise measurements around zero mean spin-polarization we linearize such terms by keeping only first-order contributions from the fluctuations. This approximation leads to the linear evolution equation \[\frac{d}{dt}\langle\mathbf{\hat{X}}(t)\rangle=A\langle\mathbf{\hat{X}}(t)\rangle, \tag{18}\] where the drift matrix \(A\) is comprehensively derived in Appendices B and C. ### Spin correlations in the time domain The estimation of the spin-noise spectrum requires the evaluation of correlators \(\langle\hat{f}_{i}(t+\tau)\hat{f}_{j}(t)\rangle\). To find those, we use the quantum regression theorem [114; 9], which states that if the expectation values of a set of observables \(\hat{M}_{\mu}\), \(\mu=1,2,...\) follow a coupled set of linear equations: \[\frac{d}{dt}\langle\hat{M}_{\mu}(t)\rangle=\sum_{\lambda}A_{\mu\lambda}\langle \hat{M}_{\lambda}(t)\rangle, \tag{19}\] then the two-time correlation functions satisfy the equations (for \(\tau\geq 0\)): \[\frac{d}{d\tau}\langle\hat{M}_{\kappa}(t)\hat{M}_{\mu}(t+\tau)\rangle =\sum_{\lambda}A_{\mu\lambda}\langle\hat{M}_{\kappa}(t)\hat{M}_{ \lambda}(t+\tau)\rangle, \tag{20}\] \[\frac{d}{d\tau}\langle\hat{M}_{\mu}(t+\tau)\hat{M}_{\kappa}(t)\rangle =\sum_{\lambda}A_{\mu\lambda}\langle\hat{M}_{\lambda}(t+\tau)\hat{M}_ {\kappa}(t)\rangle. \tag{21}\] The regression theorem holds when the equations of motion for the expectation values are linear and the system-environment correlations can be neglected [115; 116]. The justification for linearity arises from considering small fluctuations as noted previously. For the second requirement, we argue that the environment, where spin-information is lost, is associated with the (abstract) space spanned with all the collisional parameters. As long as a large number of particles are probed, when the ensemble average is taken over all the different types of collisions, the correlations between the collective spin-system and the environment are lost, thus rendering the regression theorem applicable. We thus arrive at the symmetrized and real covariance matrix: \[\mathcal{R}(\tau)=\frac{1}{2}\Big{(}\langle\mathbf{\hat{X}}(t)\mathbf{\hat{X}}^{ \intercal}(t+\tau)\rangle+[\langle\mathbf{\hat{X}}(t+\tau)\mathbf{\hat{X}}^{ \intercal}(t)\rangle]^{T}\Big{)}. \tag{22}\] The quantum regression approach yields for \(\tau\geq 0\): \[\frac{d}{d\tau}\mathcal{R}(\tau)=A\mathcal{R}(\tau)\rightarrow\mathcal{R}(\tau) =e^{A\tau}\mathcal{R}(0). \tag{23}\] Given that \(\mathcal{R}(0)\) is symmetric, for \(\tau<0\) it is \(\mathcal{R}(\tau)=\mathcal{R}(-\tau)^{T}\). In the following we extend the analysis of [96] to the current case of a dual-species vapor. The matrix \(A\) is diagonalizable, so we can write \(e^{A\tau}=Ve^{\Lambda\tau}V^{-1}\), where \(V\) is the matrix whose \(i\)-th column is the \(i\)-th eigenvector of \(A\), and \(\Lambda\) is a diagonal matrix with its diagonal elements being the corresponding eigenvalues of \(A\). Therefore, according to Eq.(23), the time dependence for any spin correlation of the form \(\langle\hat{f}^{\beta}_{\alpha,q}(0)\hat{f}^{\beta^{\prime}}_{\alpha^{\prime},q^{\prime}}(0)\rangle\) manifests as a summation over all the eigenvalues of \(A\) of exponentials of the form \(e^{\lambda_{k}\tau}\), where \(\lambda_{k}\) is the \(k\)-th eigenvalue of \(A\). The eigenvalues of \(A\) are in general complex numbers, but since \(A\) has real elements, the eigenvalues appear in complex conjugate pairs. Since \(A\) is an \(8\times 8\) matrix, there are 4 pairs of eigenvalues, hence the elements of the covariance matrix can be written (for \(\tau\geq 0\)): \[\mathcal{R}^{\beta,\beta^{\prime}}_{\alpha q,\alpha^{\prime}q^{\prime}}(\tau )=\sum_{k=1}^{4}\Re\left[c_{k}(\beta\alpha q;\beta^{\prime}\alpha^{\prime}q^{ \prime})e^{-\Gamma_{k}\tau+i\Omega_{k}\tau}\right], \tag{24}\] where \(\Gamma_{k}=\Re\left[\lambda_{k}\right]+R_{D}\) and \(\Omega_{k}=\Im\left[\lambda_{k}\right]\) are the decoherence rate and the precession frequency, respectively, of mode \(k\). In the rates \(\Gamma_{k}\) we took into account the relaxation rate, \(R_{D}\), due to atoms diffusing out of the probe beam. The coefficients \(c_{k}\) are complex numbers that depend on the steady-state (\(\tau=0\)) covariance \(R(0)\). ### Spin correlations in the frequency domain The spectrum, obtained by the Fourier transform of the correlation function (see Appendix C), is then given as a sum of complex Lorentzians or equivalently as the sum of dispersive and Lorentzian functions: \[S^{\beta,\beta^{\prime}}_{\alpha q,\alpha^{\prime}q^{\prime}}( \nu)=\sum_{k=1}^{4}\zeta_{k}(\beta\alpha q;\beta^{\prime}\alpha^{\prime}q^{ \prime})\frac{\Gamma_{k}}{(\nu-\nu_{k})^{2}+(\Gamma_{k}/2\pi)^{2}}+\] \[\zeta^{\prime}_{k}(\beta\alpha q;\beta^{\prime}\alpha^{\prime}q^ {\prime})\frac{\nu-\nu_{k}}{(\nu-\nu_{k})^{2}+(\Gamma_{k}/2\pi)^{2}}, \tag{25}\] where \(\nu_{k}=\Omega_{k}/2\pi\). In contrast to the auto-correlation spectrum, the cross-correlation spectrum of Eq.(IV.2) is in general complex, i.e. the coefficients \(\zeta_{k}\), \(\zeta^{\prime}_{k}\) can be complex for \(\beta\neq\beta^{\prime}\). Therefore, in order to obtain all the information related to the spectrum, both the real and imaginary components should be recorded. Finally we note that the covariance matrix \(\mathcal{R}(0)\) entering Eq.(23) can be evaluated by integrating the spectrum over all frequencies (see Appendices A and E). In practice however, the measurement is performed over a limited frequency range either due to the finite-bandwidth of the electronics/detectors or because deliberately the experimental application requires a limited sampling rate. Depending on the experimental conditions, this may affect the apparent correlations as discussed in Sec. VI. ## V Experimental data with theoretical fits We acquire \({}^{87}\)Rb-\({}^{133}\)Cs cross-correlation spectra, as well as single-species \({}^{87}\)Rb and \({}^{133}\)Cs power spectra, for six different magnetic fields: 4 mG, 6 mG, 12 mG, 24 mG, 46 mG and 92 mG. We verify that the sign of the cross-correlation signal makes physical sense. Indeed, the experimental observable (balanced polarimeter output) is a product of the measured atomic spin with an atom-light coupling factor. The latter has a sign depending on the probe detuning from the atomic resonance (see Appendix A). The data are acquired with both probe beams being blue detuned from the corresponding atomic resonance. As a result, the atom-light scaling factors do not alter the overall sign of the cross-correlation. Nevertheless, we do verify that by flipping either one wavelength detuning from blue to red, the cross-spectra change sign. For the accurate interpretation of the positivity or negativity of the dual-species spin correlations, it is also necessary to measure the direction of optical rotation consistently for both beams. We do this by introducing an achromatic waveplate in the common path of the two beams, confirming that the two polarimeter outputs change in the same way with the rotation of the waveplate. To fit the theoretical model to the data we use as model spectrum the expression \[S^{\text{Rb,Cs}}_{\text{model}}(\nu)\propto g^{\text{Rb}}_{a}g^{\text{Cs}}_{a}s^{\text{Rb,Cs}}_{ax,ax}(\nu)-g^{ \text{Rb}}_{a}g^{\text{Cs}}_{b}s^{\text{Rb,Cs}}_{ax,bx}(\nu)\] \[-g^{\text{Rb}}_{b}g^{\text{Cs}}_{a}s^{\text{Rb,Cs}}_{bx,ax}(\nu)+g ^{\text{Rb}}_{b}g^{\text{Cs}}_{b}s^{\text{Rb,Cs}}_{bx,bx}(\nu), \tag{26}\] where the atom-light coupling factor \(g^{\beta}_{\alpha}\) depends on the wavelength of the laser probing the \(\alpha\) hyperfine spin of the \(\beta\) species. We remind the reader that \(S^{\beta,\beta^{\prime}}_{\alpha x,\alpha^{\prime}x}(\nu)\), given by Eq. (IV.2), corresponds to the cross-spectrum between the \(\alpha\) and \(\alpha^{\prime}\) hyperfine spins of the \(\beta\) and \(\beta^{\prime}\) species, respectively, measured along the \(x\)-axis. Moreover, the model also requires as input the steady-state covariance matrix \(R(0)\). We choose a diagonal \(R(0)\), because as discussed in Appendix C, if the imaginary part of the cross spectrum is zero and the spin-variances follow the scaling outlined in Eq. (C14), then \(R(0)\) must be diagonal. We have verified that the measured imaginary part of the cross spectrum is zero within the measurement resolution. We did so not only for the operating wavelengths of the two probe lasers, but for four different probe-wavelength pairs (combination of two different wavelengths for \({}^{133}\)Cs and \({}^{87}\)Rb probing). This way, and given the dependence of the coupling factors \(g^{\beta}_{\alpha}\) on the probe light wavelength, we alter the contribution of each of the four terms of Eq. (IV.2), showing that the zero imaginary part of the cross spectrum is not accidental, but reflects an underlying property of all cross-spectra appearing in Eq. (IV.2). Additionally, assuming the aforementioned scaling of the spin variances, we arrive at the result that \(R(0)\) is diagonal. Both the single-species power spectra and the dual-species cross-spectrum are used for the optimization of the fitted parameters. That is, the parameters are adjusted to achieve a minimum in the merit function: \[\sum_{j=1}^{800}\Bigg{\{}\left[S_{\text{meas}}^{\text{Rb,Cs}}(\nu_{j})-S_{\text{ model}}^{\text{Rb,Cs}}(\nu_{j};\mathcal{P})\right]^{2}+\left[S_{\text{meas}}^{ \text{Rb,Rb}}(\nu_{j})-S_{\text{model}}^{\text{Rb,Rb}}(\nu_{j};\mathcal{P}) \right]^{2}+\left[S_{\text{meas}}^{\text{Cs,Cs}}(\nu_{j})-S_{\text{model}}^{ \text{Cs,Cs}}(\nu_{j};\mathcal{P})\right]^{2}\Bigg{\}}, \tag{27}\] where \(S_{\text{meas}}^{\beta,\beta^{\prime}}(\nu_{j})\) and \(S_{\text{model}}^{\beta,\beta^{\prime}}(\nu_{j};\mathcal{P})\) is the experimental and the model's prediction for the noise spectrum between spin species \(\beta\) and \(\beta^{\prime}\), measured and calculated at the \(j\)-th frequency bin of the spectrum analyzer, respectively, where \(j=1,2,...,800\). The symbol \(\mathcal{P}\) represents the set of all fitted parameters, including the spin-exchange rates (\(R_{\text{se}}^{\text{Rb,Rb}}\), \(R_{\text{se}}^{\text{Cs,Cs}}\)), the S-damping rate (\(R\)), the spin relaxation rate due to diffusion (\(R_{\text{D}}\)), the magnetic field (\(B_{0}\)), the photon shot noise levels (\(\text{PSN}_{\text{Rb}}\), \(\text{PSN}_{\text{Cs}}\)), and the scaling factors (\(\mathcal{K}_{\text{Rb}}\), \(\mathcal{K}_{\text{Cs}}\)) for each of the Rb and Cs power spectrum. The scaling factor of the Rb-Cs cross-spectrum is \(\sqrt{\mathcal{K}_{\text{Rb}}\mathcal{K}_{\text{Cs}}}\) and does not appear as independent variable for the fitting. In Fig. 3 we present the data (Rb-Cs cross-correlation spectrum and single-species power spectra) with the result of the global fit, showing very good agreement with the theoretical model. In the table shown in Fig. 3m we summarize the fit parameters. The fitted value of \(B_{0}\) follows the expected value based on the applied current to the coil within the magnetic shields, with a deviation of only a few percent at the smallest field values. The fitted values of the spin-exchange rates also agree within 10% with the values derived from the spin-exchange cross-sections reported in literature [117]. Importantly, the fitted values for the spin-exchange rates, photon shot noise levels, and scaling factors were consistent across the six different magnetic field values, demonstrating the internal consistency of the theoretical model, which captures Figure 3: Measured cross correlation spectra (a)-(f) and auto-correlation power spectra (g)-(l) for the \({}^{87}\)Rb (green, higher frequencies) and \({}^{133}\)Cs (cyan, lower frequencies) spin-ensembles at six different magnetic fields, together with theoretical fits (solid lines). Each spectrum is the average of 5000 runs. With increasing magnetic field, the magnitude of the cross-correlation peak is seen to drop, since when the difference in precession frequencies of the two species is larger than the magnetic linewidth, the spin-exchange coupling of the two different states is averaged out. The narrowing of the auto-correlation spectra is also evident, as by reducing the magnetic field spin dynamics gradually enter the SERF regime. (m) Collection of fitting parameters for magnetic field, self spin-exchange rates, photon shot noise levels and scaling factors. The magnetic field aside, all parameters are consistent across the six magnetic field values. the global magnetic-field dependence of the data. Lastly, the S-damping and diffusion rates could not be accurately determined from the fit, likely because they are two orders of magnitude smaller than the spin-exchange rates. To address this issue, the fitting of those rates was constrained within a range spanning from a factor of 3 below to a factor of 3 above the expected rates based on the relevant experimental parameters. ## VI Results and discussion In the cross-correlation spectra (Figs. 3a-f), a prominent positive peak is observed at low magnetic fields in the frequency range near the average of the resonance frequencies of the two species, as determined by their power spectra (Figs. 3g-l). The peak height drops and becomes broader with increasing magnetic field. This behavior is primarily caused by the spin-exchange collisions between the alkali-metal atoms and reflects how the effect of these collisions changes with the magnetic field. The positive and negative swings of the cross-correlation spectrum indicate that band-limited measurements can exhibit either positive or negative correlations between the two spin species. In other words, the value and sign of the correlations depend on the bandwidth of the measurement and on the central measurement frequency. To illustrate this point, we consider the cosine quadratures \(\widetilde{\mathcal{F}_{\beta}}=(1/T_{\mathrm{BW}})\int_{0}^{T}\mathcal{F}_{ \beta}(t)e^{(T-t)/T_{\mathrm{BW}}}\cos(\Omega t)dt\) of the (collective, transverse) spin for each of species \(\beta\in\{\mathrm{Rb},\mathrm{Cs}\}\), and examine their correlation, \(\widetilde{\langle\widetilde{\mathcal{F}_{\mathrm{Rb}}\mathcal{F}_{\mathrm{ Cs}}}\rangle}\), where \(T\) is the measurement time, \(\Omega\) is the frequency of the harmonic quadrature, and \(1/(2\pi T_{\mathrm{BW}})\) is the integration bandwidth. We parenthetically note that such cosine (or sine) quadratures are typically measured in AC magnetometers with a lock-in amplifier. This correlation can be expressed as an integral of the inter-species cross-correlation spectrum, \(\widetilde{\langle\mathcal{F}_{\mathrm{Rb}}\mathcal{F}_{\mathrm{Cs}}}\rangle =\int_{0}^{\infty}S_{\mathrm{meas}}^{\mathrm{Rb},\mathrm{Cs}}(\nu)\phi(\nu)d\nu\), where \(\phi(\nu)\) is a kernel function that accounts for the effective measurement bandwidth and depends on the quadrature frequency \(\Omega\) and the integration bandwidth \(T_{\mathrm{BW}}\), with a negligible effect from the measurement time \(T\) when \(T\gg T_{\mathrm{BW}}\) and \(\Omega T_{\mathrm{BW}}\gg 1\) (see Appendix E for an explicit formula for the kernel function). We characterize the strength of this cross-correlation with the coefficient \[\mathcal{C}_{\mathrm{Rb},\mathrm{Cs}}=\frac{\int_{0}^{\infty}S_{\mathrm{meas} }^{\mathrm{Rb},\mathrm{Cs}}(\nu)\phi_{\mathrm{RbCs}}(\nu)d\nu}{\sqrt{\int_{0} ^{\infty}S_{\mathrm{meas}}^{\mathrm{Rb},\mathrm{Rb}}(\nu)\phi_{\mathrm{Rb}}( \nu)d\nu\int_{0}^{\infty}S_{\mathrm{meas}}^{\mathrm{Cs},\mathrm{Cs}}(\nu) \phi_{\mathrm{Cs}}(\nu)d\nu}},\] where the kernel functions \(\phi_{\mathrm{Rb}}(\nu)\), \(\phi_{\mathrm{Cs}}(\nu)\), and \(\phi_{\mathrm{Rb}\mathrm{Cs}}(\nu)\) are centered around \(\nu_{0}^{\mathrm{Rb}}\), \(\nu_{0}^{\mathrm{Cs}}\) and \(\nu_{0}^{\mathrm{Rb}\mathrm{Cs}}\), respectively, i.e. the frequency where the corresponding spectrum is maximum (additionally, all three kernel functions depend on the measurement time \(T\)). In Fig. 4 we plot \(\mathcal{C}_{\mathrm{Rb},\mathrm{Cs}}\) as a function of the magnetic field for a measurement bandwidth of 100 Hz, using the experimentally acquired spectra. It is seen that at low magnetic fields, the cross-correlation between the two spin-species can be a significant fraction of the measured spin-noise power, while it drops at larger magnetic fields. Alternatively, if the center frequencies of the harmonic quadratures are chosen in the region where the cross-spectrum is negative, the measured correlation (for appropriate integration time) will correspondingly appear to be negative. Overall, the sign and strength of the cross-correlation can be adjusted with readily controllable experimental parameters, like the magnetic field or the measurement bandwidth. ### Cross-correlation spin-noise power Of particular interest is the equal-time (\(\tau=0\)) cross-correlation, i.e. the total cross-correlation power. The power is related to the noise terms that enter into the stochastic differential equations [118] describing the time evolution of the observables. Previously, a debate about the value of cross-correlation power has emerged in the literature. In particular, Dellis and coworkers [99] used a \({}^{85}\)Rb-\({}^{87}\)Rb spin ensemble and measured a non-zero cross-correlation power, which increased at low magnetic fields. In constrast, Roy and coworkers [100] used a \({}^{85}\)Rb-\({}^{133}\)Cs spin ensemble and found the cross-correlation power to be zero, irrespective of the magnetic field. We will here resolve the aforementioned debate, noting that the measurement bandwidth has a subtle effect on the observed cross-correlation power. Firstly, as noted previously, a zero imaginary part of the cross-spectrum and a physically justifiable scaling of the single species spin-noise variance indeed imply a zero cross-correlation power. This, however, corresponds to the integration of the cross-correlation spectrum from frequency zero to infinity. In a realistic experiment, all measurements are conducted within a finite frequency range. Frequency components exceeding this range do not contribute to the Figure 4: Cross-correlation coefficient between \({}^{87}\)Rb and \({}^{133}\)Cs spins, both estimated at the frequency where the cross-correlation spectrum is maximum using a bandwidth of 100 Hz. measured cross-correlation power. Consequently, if the cross-correlation spectrum contains considerable power in the frequency range beyond the measurable bandwidth, the detected cross-correlation power may indeed appear to be non-zero. This is particularly the case when the high-frequency tail of the spectrum extends to frequencies larger than the spin-exchange rate. To demonstrate this, we calculate the \({}^{87}\)Rb-\({}^{133}\)Cs cross-correlation coefficient when acquiring the spin signals at a finite sampling rate. Each recorded data point is modelled as the average of the corresponding signal over the sampling duration: \(\tilde{y}_{\alpha}=\frac{1}{\Delta t}\int_{t}^{t+\Delta t}y_{\alpha}(t^{\prime })dt^{\prime}\), where \(\tilde{y}_{\alpha}\) and \(y_{\alpha}\) represent respectively the sampled and the underlying continuous-time signals of species \(\alpha\), and \(1/\Delta t\) denotes the sampling rate. We define the cross-correlation coefficient \(\tilde{\mathcal{C}}_{\text{Rb,Cs}}=\langle\tilde{y}_{\text{Rb}}\tilde{y}_{ \text{Cs}}\rangle/\sqrt{\langle\tilde{y}_{\text{Rb}}^{2}\rangle\langle\tilde{y }_{\text{Cs}}^{2}\rangle}\), and calculate it from the theoretical fits to the data employing Eq.(14). The coefficient \(\tilde{\mathcal{C}}_{\text{Rb,Cs}}\) is shown in Fig. 5a as a function of the magnetic field for two different sampling rates. It is seen that we obtain a significantly smaller correlation for a large sampling rate, as in this case a larger part of the negative high-frequency tail of the cross-correlation spectrum is included. These observations explain the positive correlations observed in [99], where the bandwidth of the measurement was 50 kHz. Regarding the experimental result in [100], we note that a crucial physical behavior is the dependence of the correlation coefficient on the spin-exchange relaxation rates \(R_{\text{se}}\), and all other relaxation rates. In Fig. 5b we plot the correlation coefficient as a function of the ratio \(R_{D}/R_{\text{se}}\), where \(R_{D}\) is the relaxation due to atoms diffusing out of the probe beam (see Appendix 2.A.f). It is seen that for a large \(R_{D}\) the correlation drops to zero. This is because for correlations to be observable, the magnetic linewidths should be dominated by spin-exchange broadening. If other broadening mechanisms prevail, the correlation effect will be suppressed. Diffusion out of the probe beam, or the so-called transit-time broadening, is indeed one such possibility, apparently responsible for the zero cross-correlation power measured in [100]. The authors in [100] focused the probe beam to a 50 \(\mu\)m waist. Moreover, they had 3 times less buffer gas pressure than in this experiment. This renders the transit time broadening about 300 times larger than our case, amounting to \(8.4\times 10^{5}\) s\({}^{-1}\). A subtle difference of this kind of broadening is that it equally affects the nucleus and the electron, since it is just a Fourier broadening of the signal's limited-time observation. Hence, there is no slowing down factor like in the other broadening mechanisms, and the aforementioned rate directly appears in the measured linewidth. In fact, it can be seen from Fig. 2 of [100] that the linewidth is about 100 kHz, even though the cesium and rubidium densities reported in [100] are similar to our experiment, hence the spin-exchange broadening should be about 4 kHz. ### Intra-species correlations in the context of mean field theory Using the measurements on two district species, it is clearly demonstrated that spin-exchange collisions result in unequal-time correlations among the atoms involved in collisions. Among those, there are also inter-atomic unequal-time correlations between atoms of the same species. It seems remarkable that a mean field theory employing essentially a single-atom description captures the intricate correlations emerging from spin-exchange collisions among atoms of identical species. To address this issue and underscore the internal consistency of the presented theory, we proceed to demonstrate the reduction of the equations of motion governing two colliding atoms engaging in spin-exchange interactions within the same species. This reduction elegantly transforms the dynamics of the interacting pair into the equation of motion characterizing a mean single atom, which is effectively measured in an experiment. This elucidates the connections between the microscopic behavior Figure 5: (a) Cross-correlation coefficient as a function of the magnetic field for two different sampling rates. (b) Cross-correlation coefficient for a specific magnetic field, \(B=10\) mG, as a function of the ratio \(R_{D}/R_{\text{se}}\), quantifying the strength of transit-time-broadening relative to spin-exchange relaxation. The inter-species cross-correlation is suppressed if the linewidth is not dominated by spin-exchange. of individual collisions and the macroscopic behavior described by mean field theory, thereby shedding light on the overarching consistency of the theoretical framework. Consider the observable vector: \[\hat{\mathcal{V}}(t)=\left(\hat{f}^{\mathsf{a}}_{a,x},\hat{f}^{\mathsf{a}}_{a,y}, \hat{f}^{\mathsf{a}}_{b,x},\hat{f}^{\mathsf{a}}_{b,y},\hat{f}^{\mathsf{b}}_{a,x },\hat{f}^{\mathsf{b}}_{a,y},\hat{f}^{\mathsf{b}}_{b,x},\hat{f}^{\mathsf{b}}_{b,y}\right)^{\top}, \tag{28}\] where now the indices \(\mathsf{a}\) and \(\mathsf{b}\) denote atoms of the same species. All spin operators depend on time \(t\). Using the methods presented in Appendix B, the mean spin dynamics can be formulated as follows: \[\frac{d}{dt}(\hat{\mathcal{V}}(t))=\tilde{\mathcal{A}}\langle\hat{\mathcal{V }}(t)\rangle. \tag{29}\] where the matrix \(\tilde{\mathcal{A}}\) encapsulates the linear dynamics akin to the representation by matrix \(A\) pertinent to the dual-species case. In an experiment, collective variables \(\hat{f}^{\mathsf{a}}+\hat{f}^{\mathsf{b}}\) are measured. We thus define the collective observable vector \[\hat{\mathcal{M}}(t) =\mathcal{R}\hat{\mathcal{V}}(t)\] \[=\left(\hat{f}^{\mathsf{a}}_{a,x}+\hat{f}^{\mathsf{b}}_{a,x}, \hat{f}^{\mathsf{a}}_{a,y}+\hat{f}^{\mathsf{b}}_{a,y},\hat{f}^{\mathsf{a}}_{b,x}+\hat{f}^{\mathsf{b}}_{b,x},\hat{f}^{\mathsf{a}}_{b,y}+\hat{f}^{\mathsf{b} }_{b,y}\right)^{\top}, \tag{30}\] where \(\mathcal{R}\) is the transformation matrix from the two-atom spin space to the reduced collective-spin space: \[\mathcal{R}=\left(\begin{array}{cccccccc}1&0&0&0&1&0&0&0\\ 0&1&0&0&0&1&0&0\\ 0&0&1&0&0&0&1&0\\ 0&0&0&1&0&0&0&1\end{array}\right). \tag{31}\] Simple matrix algebra verifies that the structure of matrix \(\tilde{\mathcal{A}}\) is such that it fulfills the equality: \[\mathcal{R}\tilde{\mathcal{A}}\hat{\mathcal{V}}=\mathcal{R}\tilde{\mathcal{A }}\mathcal{R}^{+}\hat{\mathcal{M}}, \tag{32}\] where \(\mathcal{R}^{+}\) is the Moore-Penrose inverse [119] of \(\mathcal{R}\). As a result: \[\frac{d}{dt}\langle\hat{\mathcal{M}}(t)\rangle=\mathcal{R}\frac{d}{dt} \langle\hat{\mathcal{V}}(t)\rangle=\mathcal{R}\tilde{\mathcal{A}}\mathcal{R} ^{+}\langle\hat{\mathcal{M}}(t)\rangle=\mathcal{A}\langle\hat{\mathcal{M}}(t )\rangle, \tag{33}\] where \(\mathcal{A}=\mathcal{R}\tilde{\mathcal{A}}\mathcal{R}^{+}\). The matrix \(\mathcal{A}\), formulated to describe the evolution of the collective ensemble spin, corresponds precisely to the matrix derived through the utilization of the equation governing the density matrix of a single atom (see for example Eq. 45 of [120]). This congruence implies that the single-atom equation implicitly encompasses the influence of dynamic (non-equal time) inter-atomic correlations emanating from collisions involving atoms of the identical species. ### Character of correlations: entanglement As mentioned in the introductory Sec. IIB, a pertinent question is whether the observed multi-time correlations are quantum or classical in nature. In other words, whether the observed correlations can be described using a classical probability model or are they non-classical in the sense that they cannot be prepared with classical operations. Quantifying quantum or quantum/classical correlations [12] can be rather challenging and goes beyond the scope of this work. We briefly note that quantum correlations comprise an entire family of relationships [12], such as entanglement and quantum discord, which could potentially serve as benchmarks for the observed correlations. We here make only some exploratory comments and leave a more detailed discussion for future work. In [68] it was shown that spin-exchange collisions of the kind encountered in our work can create entanglement between two (partially) polarized spin ensembles that can be sustained for meaningful timescales. Similarly, it was shown in [14; 15] that spin-exchange collisions under appropriate conditions create quantum correlations and can be harnessed to transfer the quantum state from one spin species to another spin species. On the other hand, as explained in the previous subsection, the correlations measured here are consistent with the theory describing spin dynamics from the single-atom perspective (mean-field), hence such correlations cannot be anything but classical. It is possible, however, that the character of correlations could have a parametric dependence on the spin-polarization of the vapor, like the Werner state discussed in the introduction. We here use a simple two-atom toy model to illustrate that this seems might indeed be the case. Considering atoms spin-polarized along \(\mathbf{\hat{x}}\), a magnetic field along \(\mathbf{\hat{z}}\) creates a spin component along \(\mathbf{\hat{y}}\). What we will show is that we expect the fluctuations of the total spin component \(\hat{f}_{y}\) for \({}^{133}\)Cs and \({}^{87}\)Rb to be quantum correlated. We take as initial states for the \({}^{133}\)Cs and \({}^{87}\)Rb atoms the spin-temperature states \(\rho_{1}=e^{\beta\hat{f}_{x}}/\text{Tr}\{\beta\hat{\mathrm{f}}_{\mathrm{x}}\}\) and \(\rho_{2}=e^{-\beta\hat{f}_{x}}/\text{Tr}\{-\beta\hat{\mathrm{f}}_{\mathrm{x}}\}\), i.e. having the same but opposite spin polarization \(\langle\hat{s}_{x}\rangle=1/2\tanh{(\beta/2)}\), with \(\beta\) being the spin temperature [120]. The combined initial state is \(\rho=\rho_{1}\otimes\rho_{2}\). Note that for each atom the operator \(\hat{f}_{x}\) and the respective density matrix has a matrix representation of different dimension. We then evolve \(\rho\) by the Hamiltonian \(\hat{H}\), i.e. we calculate \(\rho^{\prime}=e^{-i\hat{H}t}\rho e^{i\hat{H}t}\), where \(\hat{H}=\hat{h}_{\mathrm{Cs}}\otimes\mathbb{1}+\mathbb{1}\otimes\hat{h}_{ \mathrm{Rb}}\), with \(\hat{h}_{\mathrm{Cs}}\) and \(\hat{h}_{\mathrm{Rb}}\) being the \({}^{133}\)Cs and \({}^{87}\)Rb Breit-Rabi Hamiltonians, respectively. We use a small field of 10 \(\mu\)G and a precession time of 10 \(\mu\)s. We then apply on \(\rho^{\prime}\) the spin-exchange operator \(\hat{P}_{e}=(1/2)\mathbb{1}+2\mathbf{\hat{s}}_{1}\cdot\mathbf{\hat{s}}_{2}\), and calculate the resulting negativity [121] of the state \(\hat{P}_{e}\rho^{\prime}\hat{P}_{e}^{\dagger}\), like in [68]. The result is shown in Fig. 6. It is seen that for low polarizations pertinent to the noise measurement of this work the two atoms are not entangled. However, there appears to be a threshold spin-polarization over which the two atoms gradually become strongly entangled. ## VII Conclusions We have studied experimentally and theoretically spin correlations that spontaneously build up in a dual-species alkali-metal vapor, here composed of \({}^{87}\)Rb and \({}^{133}\)Cs. We have identified and categorized various types of spin-correlations that can be probed in a hot atomic vapor and we have elucidated their behavior. The combined action of interspecies and intraspecies spin-exchange collisions leads to positive equal-time spin-correlations which are enhanced at low magnetic fields and suppressed at high fields due to an interplay of their unique spectral distribution and the measurement bandwidth. The nature of these correlations has been discussed, anticipating that similar correlations in spin-polarized vapors are expected to be genuinely quantum, i.e. to reflect interspecies spin entanglement. The use of two species rather than one, and the study of both auto- and cross-correlations, helps to unravel the complexity of spin dynamics in alkali-metal vapors, so far treated mostly as consisting of uncorrelated atoms, and can have significant repercussions for the field of quantum metrology with hot atomic vapors. ###### Acknowledgements. K.M. acknowledges support from Grant FJC2021047840-I funded by MCIN/AEI/10.13039/501100011033; the European Union "NextGenerationEU/PRTR"; Greece and the European Union [European Social Fund (ESF)] through the Operational Programme "Human Resources Development, Education and Lifelong Learning" in the context of the project "Strengthening Human Resources Research Potential via Doctorate Research" (Grant No. MIS-5000432), implemented by the State Scholarships Foundation (IKY). GV acknowledges funding from EU QuantERA Project PACE-IN (GSRT Grant No. T11EPA4-00015) and from the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI agreement no. HFRI-00768 (project QCAT). F.V., A.M., G.M., M.S., G.P.T. and I.K.K. acknowledge the cofinancing of this research by the European Union and Greek national funds through the Operational Program Crete 2020-2024, under the call "Partnerships of Companies with Institutions for Research and Transfer of Knowledge in the Thematic Priorities of RIS3Crete", with project title "Analyzing urban dynamics through monitoring the city magnetic environment" (project KPHP1 - 0029067). M.W.M acknowledges the European Commission project OPMMEG (101099379), Spanish Ministry of Science MCIN with funding from European Union NextGenerationEU (PRTR-C17.11) and by Generalitat de Catalunya "Severo Ochoa" Center of Excellence CEX2019-000910-S; projects SAPONAIIA (PID2021-123813NB-I00) and MARICHAS (PID2021-126059OA-I00) funded by MCIN/ AEI /10.13039/501100011033/ FEDER, EU; Generalitat de Catalunya through the CERCA program; Agencia de Gestio d'Ajuts Universitaris i de Recerca Grant 2021-SGR-01453; Fundacio Privada Cellex; Fundacio Mir-Puig;. A.K. and M.L. were supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project ID HFRI-FM17-1034). M.S. acknowledges financial support from the Spanish Agencia Estatal de Investigacion, Grant No. RYC2021-032032-I, PID2019107609GB-I00, and the Catalan Government for the project QuantumCAT 001-P-001644, co-financed by the European Regional Development Fund (FEDER). ## Appendix A Detailed Theoretical Derivation of Cross- and Auto-Correlation Spectra We here recapitulate the basic physics of our theoretical description, namely atom-light coupling, and atomic evolution due to coherent Hamiltonian dynamics and relaxation effects dominated by atom-atom collisions. ### Atom-light coupling The interaction of light with atomic spin ensembles has been described in detail elsewhere [108; 109; 110]. For near-resonant monochromatic light of non-saturating intensity, the coherent atom-light interaction (i.e. the interaction that leads to forward scattered light in the spatial mode of the probe beam) is described with a polarizability Hamiltonian that couples the magnetic sublevels of the atomic ground state to the polarization modes of the light. The resulting atomic polarizability is a rank-2 tensor operator that can be decomposed into three Figure 6: Negativity quantifying the entanglement of the \({}^{87}\)Rb-\({}^{133}\)Cs combined spin state produced by (i) spin-polarizing the vapors along the \(x\)-axis with equal and opposite polarizations, (ii) spin precession in a transverse magnetic field, and (iii) cross-spin exchange collisions. irreducible components. Of those, the tensor (rank-2) polarizability is negligible for the conditions of our experiment (pressure broadening \(\sim\) 10 GHz and detuning \(\sim\) 100 GHz). Most relevant for our experiment is the vector polarizability (rank-1) describing paramagnetic Faraday rotation and reading [122]: \[\hat{H}_{\rm int}=\sum_{i=1}^{N_{\rm st}}\hat{\mathcal{S}}_{3}(\mathbf{r}_{i},t )\beta(\mathbf{r}_{i})\left[g_{a}\hat{f}_{a,x}^{(i)}(t)-g_{b}\hat{f}_{b,x}^{(i )}(t)\right], \tag{10}\] where the summation is performed over all atoms probed by the laser beam, \(\mathcal{S}_{3}(\mathbf{r},t)=i\left(\hat{a}_{\rm t}^{\dagger}\hat{a}_{z}-\hat {a}_{\rm t}^{\dagger}\hat{a}_{y}\right)/2\) is the Stokes light-operator quantifying the photon-flux imbalance of the left- and right- circular polarization modes (and in the most general case is a function of space and time), \(\mathbf{r}_{i}\) is the location of the \(i\)-th atom, \(\hat{f}_{a,x}^{(i)}(t)\) and \(\hat{f}_{b,x}^{(i)}(t)\) are dimensionless spin components of the \(i\)-th atom along the probe laser direction in the \(a=I+1/2\) and \(b=I-1/2\) hyperfine levels of the ground state, with \(I\) being the nuclear spin quantum number (\(I=7/2\) for \({}^{133}\)Cs and \(I=3/2\) for \({}^{87}\)Rb). The parameter \(\beta(\mathbf{r}_{i})\) characterizes the local field intensity associated with the spatial mode of the probe beam (see below). For a D2 optical transition linewidth dominated by pressure broadening the coupling constants entering Eq.(10) are approximated by [67]: \[g_{\alpha}\approx\frac{1}{2I+1}\frac{cr_{\rm e}f_{\rm osc}}{A_{\rm eff}}\frac{ \nu_{\ell}-\nu_{\alpha}}{(\nu_{\ell}-\nu_{\alpha})^{2}+(\Delta\nu/2)^{2}}, \tag{11}\] where \(r_{\rm e}\approx 2.82\times 10^{-15}\) m is the classical electron radius, \(f_{\rm osc}\) is the oscillator strength of the optical transition, \(c\) is the speed of light, \(\Delta\nu\) is the pressure-broadened optical linewidth (FWHM), \(\nu_{\ell}\) is the frequency of light, and \(\nu_{\alpha}\) with \(\alpha\in\{a,b\}\) is the resonance frequency of the corresponding ground-state hyperfine level. If \(A_{\rm eff}\) labels the effective area of the beam, it is \(\beta(\mathbf{r}_{i})/A_{\rm eff}=I(\mathbf{r}_{i})/\int I(\mathbf{r}_{i})dydz\), where \(I(\mathbf{r}_{i})\) is the light intensity at the coordinate \(\mathbf{r}_{i}\) and \(\int I(\mathbf{r}_{i})dydz\) is the total power of the beam. For a TEM\({}_{00}\) Gaussian beam it is \(\beta(\mathbf{r}_{i})/A_{\rm eff}=e^{-2|\mathbf{r}_{i}|^{2}/w(x)^{2}}/(\pi w(x)^ {2}/2)\), where \(w(x)\) is the \(x\)-dependent Gaussian beam width. For a linearly polarized probe in the \(y\)-direction, neglecting the time it takes light to propagate through the spin ensemble [112], the solution of the Heisenberg equation of motion for the light operator (in the case of small rotation angles) yields [113]: \[\hat{\mathcal{S}}_{2}^{(\rm out)}(t)\approx\hat{\mathcal{S}}_{2}^{(\rm in)}(t )+\frac{1}{2}\Phi\bar{g}\hat{\mathcal{F}}(t), \tag{12}\] where \(\Phi\) is the photon-flux (photons per time). The out (in) superscripts denote the operator after (before) interacting with the atomic sample, \(\bar{g}=(|g_{a}|+|g_{b}|)/2\) is the mean coupling constant to the two ground-state hyperfine manifolds and \(\hat{\mathcal{F}}(t)\) is the measured collective spin defined as: \[\hat{\mathcal{F}}(t) \equiv\sum_{i=1}^{N_{\rm st}}\beta(\mathbf{r}_{i})\left[\frac{g_{ a}}{\bar{g}}\hat{f}_{a,x}^{(i)}(t)-\frac{g_{b}}{\bar{g}}\hat{f}_{b,x}^{(i)}(t)\right] \tag{13}\] \[=\int d\mathbf{r}\beta(\mathbf{r})\left[\frac{g_{a}}{\bar{g}} \hat{f}_{a,x}(\mathbf{r},t)-\frac{g_{b}}{\bar{g}}\hat{f}_{b,x}(\mathbf{r},t) \right]. \tag{14}\] In the last equation the spin-density operator \(\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{ \hat{\hathathathathathathathathathathathathathathathathat }}}}}}}}}}}}}}\) \hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{ \hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat }}}}}}}}}}}}}}\hat{\hat{\hat{\hat{\hat{\hat{\hat{ \hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathat }}}}}}}}}}}}}\hat{\hat{\hat{\hat{\hat{\hat{\hat{ \hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hat{\hathathathathathathathathathathathathathathat{ \hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat \hathathat{\hathathathathathathathathathat{\hathathathathathathathathathathathathathathathathathathathathathathat \hat{\hathat{\hathathathathathathat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\hathathathathathathathathathathathathathathathat{\hathathathathathathathathathathathathathathat \hathathat{\hathathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\hat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\hathathathathat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\hathathathathathathathat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\hathathathathathathathathathathathathathat{\hathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\hathathathathathathathathathathathathathathathathathathathat{\hathathathathathathathathathathathathathathathathathathathathathat \hathathat{\hathathathathathathathathathathathathathathathathathathathathathathathathathathathathathathathat \hat{\ mean-field interaction between the \(\beta\) and \(\gamma\) species, and \(R_{\rm se}^{\beta,\gamma}\approx n_{\gamma}v_{\rm th}^{\beta,\gamma}\sigma_{\rm se }^{\beta,\gamma}\) is the spin-exchange rate, given in terms of the atomic density \(n_{\gamma}\) of species \(\gamma\), the relative thermal velocity between the colliding partners \(v_{\rm th}^{\beta,\gamma}\) and the spin-exchange cross-section \(\sigma_{\rm se}^{\beta,\gamma}\). In Eq.(100) the summation runs over all the different species present in the vapor, including the species \(\beta\). As we are dealing with unpolarized vapors, the effect of the frequency shift proportional to the parameter \(\kappa_{\beta\gamma}\) will be ignored in the following. We note that Eq. (100) is a mean field theory. The spin-exchange interactions that an atom can experience are replaced with an effective average interaction and the spin-degrees of freedom of the different atoms are traced out to get a reduced mean density matrix representing the evolution of the ensemble. Essentially, Eq. (100) provides for coarse-grained dynamics over a length scale much larger than the mean-free path. Even in this case, Eq. (100) can capture correlations that can be generated by the dynamics, at least at the level of classical correlations (see Sec. VI.3). #### a.3.3 S-damping collisions Binary collisions between alkali atoms or between alkali-metal atoms and buffer gas (without spin) lead to S-damping [120]: \[d\rho/dt=R\left(\phi-\rho\right), \tag{101}\] where the part of the density matrix with electron polarization is destroyed while the purely nuclear polarization remains unaffected. The S-damping rate, \(R\), is orders of magnitude smaller than the spin-exchange rate, \(R_{\rm se}\). #### a.3.4 Relaxation due to optical fields In [120], it is shown that for the experimentally relevant case of fast quenching (when excited atoms are much more likely to be quenched than to radiate a photon) and excited state \(J\)-damping rapid with respect to the hyperfine interaction, the net evolution of the single-atom density matrix due to optical (depopulation and repopulation) pumping by linearly polarized light is of the form (101), with \(R\) given by \(\Phi\sigma_{\rm op}\), the mean pumping rate per (unpolarized) alkali-metal atom given in terms of the absorption cross-section \(\sigma_{\rm op}\) and the photon flux \(\Phi\). In principle, the effect of light on the atomic-spin evolution can be made negligible, either by using low photon flux (low power or large beam area) or by choosing large detuning. However, as seen from Eq. (102), this results in a reduction of the measured spin-noise to photon shot noise ratio, leading to an increase in the uncertainty of estimation of the spin-noise spectrum [123] for a given number of repetitions. In practice, the laser power is chosen so as to optimize the measurement's signal-to-noise ratio. For our case in particular, the laser power should be such that spin relaxation by the optical fields is not dominant, i.e. spin dynamics should be dominated by spin-exchange collisions. #### a.3.5 Combined description Overall, the density matrix evolution for species \(\beta\) reads \[d\rho_{\beta}/dt =A_{\rm hf,\beta}\hat{\mathbf{I}}_{\beta}\cdot\hat{\mathbf{s}}_ {\beta}+g_{s}\mu_{B}\hat{\mathbf{s}}_{\beta}\cdot\mathbf{B}+R\left(\phi_{\beta }-\rho_{\beta}\right)\] \[+\sum_{\gamma}R_{\rm se}^{\beta,\gamma}\left\{\phi_{\beta}\left(1 +4\langle\hat{\mathbf{s}}_{\gamma}\rangle\cdot\hat{\mathbf{s}}_{\beta}\right) -\rho_{\beta}\right\}, \tag{102}\] where the relaxation rate \(R\) includes all the relaxation processes, other than the spin-exchange collisions, which destroy electron polarization without affecting the nucleus. #### a.3.6 Atomic diffusion In a cell with buffer gas, as in our experiment, the atomic thermal motion becomes diffusive via velocity-changing collisions. The effect of diffusion on spin-noise measurements has been studied in various works. In [86] the correlation function of diffusion-induced noise in unconfined systems is derived. A treatment based on the Bloch-Heisenberg-Langevin formalism is developed in [87], also considering boundary conditions on the cell walls. In our experiment the probe beam diameter is much smaller than the cell dimensions, thus the results of [86] are more relevant. There, it is shown that for diffusion through a TEM\({}_{00}\) Gaussian and well-collimated beam (Rayleigh range much larger than the cell length) the spin-noise spectrum of an atomic ensemble with transverse spin-relaxation rate \(1/T_{2}\) and spin precession frequency \(\nu_{\rm L}\) is given by \[S(\nu)\propto\frac{w^{2}}{2D}\Re\left[e^{s}\Gamma(s)\right],\ s=\frac{w^{2}}{4 D}\left[1/T_{2}+2\pi i(\nu-\nu_{\rm L})\right], \tag{103}\] where \(D\) is the atomic diffusion constant, \(w\) is the waist of the beam and \(\Gamma(s)=\int_{s}^{\infty}(e^{-x}/x)\,dx\) is the incomplete gamma function. When \(w^{2}/4DT_{2}\gg 1\) it is \(e^{s}\Gamma(s)\approx(1-1/s)/s\), and the spectrum takes the form: \[S(\nu)\sim 2\frac{1/T_{2}^{\prime}}{(1/T_{2}^{\prime})^{2}+4\pi^{2}(\nu-\nu_{ \rm L})^{2}}+\mathcal{O}\left(\frac{4DT_{2}}{w^{2}}\right), \tag{104}\] where \(1/T_{2}^{\prime}=1/T_{2}+R_{D}\), with \(R_{D}\sim 4D/w^{2}\). Therefore, when the diffusion time across the beam is much longer than the spin coherence time, the effect of diffusion on the spin-noise spectrum can be captured to a good approximation by introducing an additional relaxation term with rate \(R_{D}\). ### Correlations From Eqs. (7) and (21) it follows that the cross-correlation measured in the experiment is: \[C_{\mathrm{Rb,Cs}}(\tau) =\frac{q_{e}^{2}\eta_{\mathrm{Rb}}\eta_{\mathrm{Cs}}\Phi_{\mathrm{Rb }}\Phi_{\mathrm{Cs}}\bar{g}_{\mathrm{Rb}}\bar{g}_{\mathrm{Cs}}}{2}\] \[\times\langle\hat{\mathcal{F}}_{\mathrm{Rb}}(t+\tau)\hat{ \mathcal{F}}_{\mathrm{Cs}}(t)+\hat{\mathcal{F}}_{\mathrm{Cs}}(t)\hat{\mathcal{ F}}_{\mathrm{Rb}}(t+\tau)\rangle, \tag{22}\] where we have taken into account that the polarization properties of the two beams before the interaction with the atoms are uncorrelated e.g. \(\langle\mathcal{S}_{2,\mathrm{Rb}}^{(\mathrm{in})}(t)\mathcal{S}_{2,\mathrm{ Cs}}^{(\mathrm{in})}(t^{\prime})\rangle=0\). The single-channel auto-correlation function is: \[C_{\beta,\beta}(\tau)=2q_{e}^{2}\eta_{\beta}\Phi_{\beta}\Big{[}\delta(\tau)+ \frac{1}{2}\eta_{\beta}\Phi_{\beta}\beta_{\beta}^{2}\langle\hat{\mathcal{F}}_{ \beta}(t+\tau)\hat{\mathcal{F}}_{\beta}(t)\rangle\Big{]}, \tag{23}\] where \(\beta\in\{\mathrm{Rb,Cs}\}\). In deriving Eq. (23) we assumed perfectly coherent light before the interaction with the atoms, i.e. \(\langle\mathcal{S}_{2,\beta}^{(\mathrm{in})}(t)\mathcal{S}_{2,\beta}^{( \mathrm{in})}(t^{\prime})\rangle=(\Phi_{\beta}/2)\delta(t-t^{\prime})\). The detector inefficiency was modelled by a perfect detector preceded by a beam splitter of power transmissivity \(\eta_{\beta}\)[9]. The first term in Eq.(23) describes photon shot-noise, while the second contains information about the collective spin correlations. We remark that although the photon shot-noise does not explicitly appear in the inter-species correlator of Eq.(22), it does contribute to the uncertainty of its estimation. In Eqs.(22) and (23) it was assumed that there is no correlation between the light and spin operators, implying that the spin variables remain unaffected by the polarization fluctuations of the input field. This is true for our experiment probing unpolarized spin ensembles with light (refer to section D). However, this assumption breaks down when dealing with non-zero spin polarization. As long as the optical rotation remains small, Eqs. (21)-(23) remain valid even for optically thick ensembles with significant light absorption. In such cases, the variable \(\Phi\) represents the photon flux at the ensemble output, which can be significantly reduced compared to the photon flux at the ensemble input [124]. Combing Eqs. 20 and 22, the measured correlation can be written as: \[C_{\mathrm{Rb,Cs}}(\tau)=\mathcal{K}\int\int d\mathbf{r}d\mathbf{r}^{\prime} \beta^{\mathrm{Rb}}(\mathbf{r})\beta^{\mathrm{Cs}}(\mathbf{r}^{\prime})\times \sum_{\alpha,\alpha^{\prime}}g_{\alpha}^{\mathrm{Rb}}g_{\alpha^{\prime}}^{ \mathrm{Cs}}\left[\langle\hat{f}_{\alpha,x}^{\mathrm{Rb}}(\mathbf{r},t+\tau) \hat{f}_{\alpha^{\prime},x}^{\mathrm{Cs}}(\mathbf{r}^{\prime},t)\rangle+ \langle\hat{f}_{\alpha^{\prime},x}^{\mathrm{Cs}}(\mathbf{r}^{\prime},t)\hat{f }_{\alpha,x}^{\mathrm{Rb}}(\mathbf{r},t+\tau)\rangle\right], \tag{24}\] where \(\mathcal{K}\) is an overall scaling factor. A similar equation can be written for the autocorrelation function. ### Collective spin operators The dynamical evolution of spin correlations can be found from the quantum regression theorem (QRT) [9], which states that the spin-correlation functions follow the same equations of motion as the mean spins. Consequently, to explore the dynamic evolution of the collective spin correlations, it suffices to determine the mean dynamics of the corresponding spins. In this regard, the mean field equation (Eq. (20)) can be utilized. We note that the validity of Eq. (20) to describe the dynamics of the collective spin has been confirmed in numerous experiments. Remarkably, the QRT allows us to move deeper into the understanding of spin dynamics and address inter-atomic correlations. This is because the QRT connects the mean dynamics of the collective spin to the collective correlations, and the latter include a multitude of inter-atomic correlation terms. In the following, we derive a simplified version of Eq. 24 for the measured cross-correlation. We adopt the approach presented in [14], involving the definition of coarse-grained continuous local-symmetric spin-density operators. These coarse-grained operators are obtained through the spatial convolution of the spin-density operators with a window function, which remains non-zero over a specific volume (e.g. a Heaviside function). This volume must be adequately large to encompass a significant number of atoms while being sufficiently small to ensure a uniform interaction of all atoms with the probe beam. As discussed in [14], the inherent stochastic nature of collision parameters and random pairings of colliding atoms introduces noise. However, remarkably, these noise terms do not significantly influence the temporal unfolding of correlations, affecting only the zero-time correlation. The coarse-grained spin-density operators evolve as a single entity within the chosen coarse-grained volume, and their mean values follow the mean-field equation over time. In our experimental setup, characterized by a buffer gas pressure of 330 Torr and a large beam diameter, the coarse-grain can be selected to be adequately large; this ensures that atomic motion does not result in significant correlations between different coarse-grains. Correlations resulting from collisions are confined solely within the same coarse-grained region. As explained in [14], this leads to correlations in the coarse-grained spin-density operators that are proportional to the coarse-grained delta function. Overall, disregarding the minor variation in the effect of spin-relaxation caused by the probe beam along the atomic ensemble, the measured correlation in the exper iment can be expressed as (see Eq. 115): \[C_{\text{Rb,Cs}}(\tau)\propto\sum_{\alpha,\alpha^{\prime}}g_{ \alpha}^{\text{Rb}}g_{\alpha^{\prime}}^{\text{Cs}}\bigg{[}\langle\hat{f}_{ \alpha,x}^{\text{Rb}}(t+\tau)\hat{f}_{\alpha^{\prime},x}^{\text{Cs}}(t)\rangle+ \\ \langle\hat{\tilde{f}}_{\alpha^{\prime},x}^{\text{Cs}}(t)\hat{f}_{ \alpha,x}^{\text{Rb}}(t+\tau)\rangle\bigg{]}. \tag{116}\] In the above equation \(\hat{\tilde{f}}_{\alpha,x}^{\beta}\) denotes the coarse-grained, local-symmetric spin-density operator and is related to the single-atom spin \(\hat{f}\) by \[\langle\hat{\tilde{f}}_{\alpha,x}^{\beta}\rangle=n_{\beta}\langle\hat{f}_{ \alpha,x}^{\beta}\rangle \tag{117}\] where \(n_{\beta}\) is the atomic density of the species \(\beta\). The equations of motion for the spin-density operators can be accordingly found from Eqs. (117) and (110). ## Appendix B Mean spin dynamics The derivation of the linearized dynamics for the mean spin components (\(\langle\hat{f}_{\alpha,x}^{\beta}\rangle\) or \(\langle\hat{\tilde{f}}_{\alpha,x}^{\beta}\rangle\)) is best approached by adopting the method introduced in [49] (see also the Supplementary material in [101]), where spin operators are expressed as spherical tensor operators in the coupled \((F,m_{F})\) basis. For transverse spin, it is sufficient to analyze the dynamics of the \(\pm 1\) components of the rank-1 spherical tensor (with the zero-component defined by the direction of the applied DC magnetic field). We focus on studying the noise correlation occurring at Zeeman frequencies. Hyperfine coherences are not measured, and their contribution to Zeeman dynamics can be neglected in the zero polarization limit. Here, for the reader's convenience, we provide a concise overview of the contribution of spin-exchange interactions between different alkali species in the dynamics of the mean spins. We refer to [49] and [101] for more details. The equation of motion for the density matrix of species \(\beta\) due to spin-exchange collision with species \(\gamma\) can be written in the form: \[\frac{1}{R_{\text{se}}^{\beta,\gamma}}\frac{d\rho_{\beta}}{dt} =\sum_{m}\left\{\sqrt{\frac{[I_{\gamma}]}{[I_{\beta}]}}\langle{T _{001m}^{\gamma\dagger}}\rangle-\langle{T_{001m}^{\beta\dagger}}\rangle\right\} {T_{001m}^{\beta}}\] \[+\sum_{\begin{subarray}{c}\Lambda\mu m\\ \Lambda\neq 0\end{subarray}}\sqrt{2[I_{\gamma}]}\langle{T_{\Lambda\mu 00}^{ \beta\dagger}}\rangle\langle{T_{001m}^{\gamma\dagger}}\rangle{T_{\Lambda\mu 1m}^{ \beta}}\] \[-\rho_{\beta}+\sum_{\Lambda\mu}{T_{\Lambda\mu 00}^{\beta}} \langle{T_{\Lambda\mu 00}^{\beta\dagger}}\rangle+\sum_{m}\langle{T_{001m}^{ \beta\dagger}}\rangle{T_{001m}}, \tag{118}\] where \([I]=2I+1\), and \(T_{\Lambda\mu lm}\equiv T_{\Lambda\mu}(II)\otimes T_{lm}(SS)\) is the spherical tensor spin operator in the uncoupled basis, with \(T_{LM}(KK^{\prime})\) being the spherical tensor in the angular momentum basis with quantum numbers \(K\) and \(K^{\prime}\). The second line in Eq. 11 has only terms that are proportional to the product of the electron polarization of one species times the nuclear polarization (\(\Lambda\neq 0\)) of the other species. These terms introduce non-linearity in the atomic evolution since each polarization (electron or nuclear) depends on the density matrix. However, in the case of thermal-unpolarized atomic ensembles, these terms are second order with respect to the small quantity of polarization and can be neglected; consequently, linearized dynamics can be considered for such systems. Multiplying both sides of Eq. 11 with \(T_{1M}^{\beta\dagger}(FF)\) and taking the trace, the equation of motion for \(\langle{T_{1M}^{\beta\dagger}}(FF)\rangle\) can be obtained. The linearized equation contains solely terms of the form \(\langle{T_{1M}^{\beta\dagger}}(FF){T_{001m}}\rangle\) or \(\langle{T_{1M}^{\beta\dagger}}(FF){T_{\Lambda\mu 00}}\rangle\), which can be found by expressing \(T_{001m}\) and \(T_{\Lambda\mu 00}\) in the coupled basis (see Eqs. 63 and 64 in [49]), and using the orthogonality property for spherical operators, \(\langle{T_{LM}^{\dagger}}(FF^{\prime}){T_{lm}}(ff^{\prime})\rangle=\delta_{L \delta}\delta_{Mm}\delta_{Ff}\delta_{F^{\prime}f^{\prime}}\). By also employing the definition for the Hermitian conjugate, \(T_{LM}^{\dagger}(FF^{\prime})=(-1)^{F-F^{\prime}+MT}T_{L-M}(FF^{\prime})\), all equations can be readily transformed into equations of motion for the operators \(T_{LM}(FF)\). The contribution of other relaxation mechanisms to the dynamics of mean spins can be determined using similar calculations. Finally, the dynamics due to magnetic field are most easily determined from the Heisenberg equation of motion, while the hyperfine interaction does not affect the Zeeman spherical operators \(T_{LM}(FF)\). Equations of motion for the spin density operators \(\tilde{T}_{LM}(FF)\) can be obtained straightforwardly by utilizing Eqs. (117) and (118). This amounts to modifying the rate in the first term of the sum appearing in the first line of the RHS in Eq. 11, replacing \(R_{\text{se}}^{\beta,\beta^{\prime}}\propto n_{\beta^{\prime}}\) with \(R_{\text{se}}^{\beta^{\prime},\beta}\propto n_{\beta}\), i.e. interchanging the spin-exchange rates in the terms of the equations of motion that are responsible for the mixing between the spin operators of the two species. Overall, the linearized equations of motion for the spin density operators of the two species can be compactly written as: \[\frac{d\mathbf{T}}{dt}=\begin{pmatrix}\tilde{A}&\mathbf{0}_{4}\\ \mathbf{0}_{4}&\tilde{A}^{*}\end{pmatrix}\mathbf{T}, \tag{119}\] where \(\mathbf{0}_{4}\) is the \(4\times 4\) zero matrix and \(\mathbf{T}=[\mathbf{T}_{11}\;\;\mathbf{T}_{1-1}]^{\top}\), with \(\mathbf{T}_{11}=[T_{11}^{\text{Rb}}(aa)\;\;T_{11}^{\text{Rb}}(bb)\;\;T_{11}^{ \text{Cs}}(a^{\prime}a^{\prime})\;\;T_{11}^{\text{Cs}}(b^{\prime}b^{\prime})]\) and similarly for \(\mathbf{T}_{1-1}\). We remind the reader that \(a=I_{\text{Rb}}+1/2\) and \(b=I_{\text{Rb}}-1/2\), while \(I_{\rm Cs}-1/2\). The \(4\times 4\) matrix \(\tilde{A}\) is: \[\tilde{A} =\begin{pmatrix}R_{\rm se}^{\rm Rb,Rb}\Phi_{\rm se}(I_{\rm Rb})& \mathbf{0}_{2}\\ \mathbf{0}_{2}&R_{\rm se}^{\rm Cs,Cs}\Phi_{\rm se}(I_{\rm Cs})\end{pmatrix}\] \[+\begin{pmatrix}R_{\rm se}^{\rm Rb,Cs}\tilde{\Phi}(I_{\rm Rb})&R_{ \rm se}^{\rm Cs,Rb}\tilde{\Phi}_{\rm se}^{\top}/[I_{\rm Rb}]\\ R_{\rm se}^{\rm Rb,Cs}\tilde{\Phi}_{\rm se}/[I_{\rm Cs}]&R_{\rm se}^{\rm Cs,Rb} \tilde{\Phi}(I_{\rm Cs})\end{pmatrix}\] \[+R\begin{pmatrix}\tilde{\Phi}(I_{\rm Rb})&\mathbf{0}_{2}\\ \mathbf{0}_{2}&\tilde{\Phi}(I_{\rm Cs})\end{pmatrix}\] \[-R_{D}\begin{pmatrix}\mathbf{I}_{2}&\mathbf{0}_{2}\\ \mathbf{0}_{2}&\mathbf{I}_{2}\end{pmatrix}\] \[+i\omega_{0}\begin{pmatrix}\Phi_{B}(I_{\rm Rb})&\mathbf{0}_{2}\\ \mathbf{0}_{2}&\Phi_{B}(I_{\rm Cs})\end{pmatrix} \tag{10}\] where \(\mathbf{0}_{2}\) and \(\mathbf{I}_{2}\) are respectively the \(2\times 2\) zero and identity matrices, and \[\Phi_{\rm se}(I) =\frac{1}{3[I]^{2}}\begin{pmatrix}-2I(2I-1)&2\sqrt{I(I+1)(2I-1)(2I +3)}\\ 2\sqrt{I(I+1)(2I-1)(2I+3)}&-2(I+1)(2I+3)\end{pmatrix}\] \[\tilde{\Phi}(I) =\frac{1}{[I]^{2}}\begin{pmatrix}-(2I^{2}+I+1)&\sqrt{I(I+1)(2I-1) (2I+3)}\\ \sqrt{I(I+1)(2I-1)(2I+3)}&-(2I^{2}+3I+2)\end{pmatrix}\] \[\Phi_{\rm se} =\frac{1}{3\sqrt{[I_{\rm Rb}][I_{\rm Cs}]}}\begin{pmatrix}\sqrt{ (I_{\rm Rb}+1)(2I_{\rm Rb}+3)(I_{\rm Cs}+1)(2I_{\rm Cs}+3)}&-\sqrt{I_{\rm Rb} (2I_{\rm Rb}-1)(I_{\rm Cs}+1)(2I_{\rm Cs}+3)}\\ -\sqrt{(I_{\rm Rb}+1)(2I_{\rm Rb}+3)I_{\rm Cs}(2I_{\rm Cs}-1)}&\sqrt{I_{\rm Rb }(2I_{\rm Rb}-1)I_{\rm Cs}(2I_{\rm Cs}-1)}\end{pmatrix}\] \[\Phi_{B}(I) =\frac{1}{[I]}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix} \tag{11}\] In Eq. 10, we took into account that the S-damping relaxation rate, \(R\), and the relaxation rate due to diffusion, \(R_{D}\), is approximately the same for both species. Converting Eq. 10 to an equation of motion for the mean transverse Cartesian components of the spin-density operators simply involves a change of basis by a similarity transformation. Defining the vector \(\mathbf{X}=[\int_{a,x}^{\rm Rb}\tilde{f}_{b,x}^{\rm Rb},f_{a,y}^{\rm Rb},f_{b,y }^{\rm Rb},\tilde{f}_{a^{\prime},x}^{\rm Cs},\tilde{f}_{b^{\prime},x}^{\rm Cs},\tilde{f}_{a^{\prime},y}^{\rm Cs},\tilde{f}_{b^{\prime},y}^{\rm Cs}]^{\top}\), the dynamical evolution of the mean is given by: \[\frac{d\langle\mathbf{X}\rangle}{dt}=A\langle\mathbf{X}\rangle, \tag{12}\] where \(A=\mathcal{M}\tilde{A}\mathcal{M}^{-1}\). The matrix \(\mathcal{M}\) represents the change of basis: \(\mathbf{X}=\mathcal{M}\mathbf{T}\) and has the form: \[\mathcal{M}=\begin{pmatrix}-\text{t}&\text{t}\\ it&it\end{pmatrix}, \tag{13}\] \[\text{t}=\begin{pmatrix}\text{t}_{1}(I_{\rm Rb})&0&0&0\\ 0&\text{t}_{2}(I_{\rm Rb})&0&0\\ 0&0&\text{t}_{1}(I_{\rm Cs})&0\\ 0&0&0&\text{t}_{2}(I_{\rm Cs})\end{pmatrix}, \tag{14}\] where \(\text{t}_{1}(I)=\sqrt{(I+1)(2I+1)(2I+3)}/2\sqrt{3}\) and \(\text{t}_{2}(I)=\sqrt{I(2I-1)(2I+1)}/2\sqrt{3}\). We note that \(A\) is a diagonalizable matrix with only real elements and can be written in the block form: \[A=\begin{pmatrix}\mathcal{B}&\mathcal{D}\\ -\mathcal{D}&\mathcal{B}\end{pmatrix}, \tag{15}\] where \(\mathcal{B}\) and \(\mathcal{D}\) are \(4\times 4\) square matrices, with the diagonal matrix \(\mathcal{D}\) describing the coupling to the DC magnetic field. The physical meaning underlying this structure of \(A\) is that the two transverse spins share identical dynamics. ## Appendix C Properties of the covariance matrix ### General properties Here, we derive some useful properties of the covariance matrix. We define the vector \(\mathbf{\hat{X}}(t)=[\hat{x}_{1}(t),\hat{x}_{2}(t),...]^{T}\), where \(\hat{x}_{i}(t)\) are Hermitian operators (in our experiment these are Cartesian hyperfine spin operators) whose mean evolves in time according to Eq. 12. The symmetrized covariance matrix has components \(R_{ij}(\tau)=\langle\hat{x}_{i}(t+\tau)\hat{x}_{j}(t)+\hat{x}_{j}(t)\hat{x}_{i }(t+\tau)\rangle/2\). The dependence on \(\tau\) can be found from the regression theorem (see section IV.C) to be [118]: \[R(\tau)=\begin{cases}e^{A\tau}R(0)&,\tau\geq 0\\ R(0)e^{-A^{\top}\tau}&,\tau<0,\end{cases}. \tag{16}\] First, we notice that the stationary (\(\tau=0\)) covariance matrix is symmetric: \(R^{\top}(0)=R(0)\). For the transpose of the covariance matrix we find (\(\tau\geq 0\)): \[R^{\top}(\tau)=R^{\top}(0)e^{A^{\top}\tau}\Rightarrow R^{\top}(\tau)=R(0)e^{A^{ \top}|\tau|}=R(-\tau), \tag{10}\] i.e., \(R_{ij}(\tau)=R_{ji}(-\tau)\) as should be expected for any stationary process. The spectrum matrix is (assuming the measurement time to be infinite in Eq.(9)): \[S(\omega)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}R(\tau)e^{-i\omega\tau}d\tau. \tag{11}\] The reality of \(R(\tau)\) yields \(S^{*}(\omega)=S(-\omega)\). Taking into account that \(R_{ij}(\tau)=R_{ji}(-\tau)\) we obtain: \[S_{ij}^{*}(\omega) =\frac{1}{2\pi}\int_{-\infty}^{+\infty}R_{ij}(\tau)e^{i\omega\tau }d\tau \tag{12}\] \[\overset{\tau\rightarrow\tau}{=}\frac{1}{2\pi}\int_{-\infty}^{+ \infty}R_{ij}(-\tau)e^{-i\omega\tau}d\tau\] (13) \[=\frac{1}{2\pi}\int_{-\infty}^{+\infty}R_{ji}(\tau)e^{-i\omega \tau}d\tau=S_{ji}(\omega). \tag{14}\] The spectrum can also be written in the form: \[S_{ij}(\omega) =\int_{-\infty}^{+\infty}d\tau R_{ij}(\tau)e^{-i\omega\tau} \tag{15}\] \[=\int_{-\infty}^{0}d\tau R_{ij}(\tau)e^{-i\omega\tau}+\int_{0}^{ +\infty}d\tau R_{ij}(\tau)e^{-i\omega\tau}\] (16) \[=\int_{0}^{+\infty}d\tau R_{ij}(-\tau)e^{i\omega\tau}+\int_{0}^{ +\infty}d\tau R_{ij}(\tau)e^{-i\omega\tau}\] (17) \[=\int_{0}^{+\infty}d\tau R_{ji}(\tau)e^{i\omega\tau}+\int_{0}^{+ \infty}d\tau R_{ij}(\tau)e^{-i\omega\tau}. \tag{18}\] In general, \(S_{ij}(\omega)\) is complex, with real and imaginary parts given by \[\operatorname{Re}\left[S_{ij}(\omega)\right] =\int_{0}^{+\infty}d\tau\left[R_{ij}(\tau)+R_{ji}(\tau)\right] \cos(\omega\tau)\] \[\operatorname{Im}\left[S_{ij}(\omega)\right] =\int_{0}^{+\infty}d\tau\left[R_{ij}(\tau)-R_{ji}(\tau)\right] \sin(\omega\tau).\] If the spectrum is real we find: \[S_{ij}(\omega)=S_{ij}^{*}(\omega)\Rightarrow R_{ij}(\tau)=R_{ji}(\tau)=R_{ij} (-\tau). \tag{19}\] The reverse is also true, \(R_{ij}(\tau)=R_{ji}(\tau)\) implies \(S_{ij}(\omega)\) is real. An interesting case, relevant for spin-noise measurements, arises when the stationary covariance matrix is diagonal (\(R_{ij}(0)=\Delta x_{i}^{2}\delta_{ij}\), \(\Delta x_{i}^{2}\) being the variance of \(\hat{x}_{i}\)), and the spectrum is real, i.e. the covariance matrix is symmetric (see Eq. (19)). In this case, a condition is imposed between the noise variance terms and the matrix \(A\): \[R_{ij}(\tau)=R_{ji}(\tau)\Rightarrow\sum_{k}\left[e^{A\tau}\right]_ {ik}R_{kj}(0)=\sum_{k}\left[e^{A\tau}\right]_{jk}R_{ki}(0), \tag{20}\] \[\overset{\frac{R_{ij}(0)=\Delta x_{i}^{2}\delta_{ij}}{\leq}}{ \leq}\left[e^{A\tau}\right]_{ij}\Delta x_{j}^{2}=\left[e^{A\tau}\right]_{ji} \Delta x_{i}^{2}\] \[\Rightarrow\frac{\left[e^{A\tau}\right]_{ij}}{\left[e^{A\tau} \right]_{ji}}=\frac{\Delta x_{j}^{2}}{\Delta x_{i}^{2}}. \tag{21}\] Here, \([.]_{ij}\) denotes the \(ij\) element of the matrix in the square brackets. We note that if the steady-state spin-covariance matrix is diagonal, symmetry arguments dictate that the spin-noise in each hyperfine level scales as: \[\Delta x^{2}\propto nf(f+1)\times(2f+1)/(2I+1), \tag{22}\] where \(n\) is the atomic density, \(f\) is quantum number of total atomic spin in the given hyperfine state and \(I\) is the nuclear spin. As shown below, for this type of noise, the spin dynamics satisfy the constraint expressed in 21. ### Properties for spin dynamics In particular for the spin dynamics presented in B) and captured in matrix \(A\), the rules of matrix multiplication can be used to show that all the powers of \(A\) and consequently the matrix exponential \(e^{A\tau}\) has a similar to Eq. 16 pattern, that is: \[e^{A\tau}=\sum_{n=0}^{\infty}\frac{A^{n}\tau^{n}}{n!}=\begin{pmatrix}\mathfrak{ B}&\mathfrak{D}\\ -\mathfrak{D}&\mathfrak{B}\end{pmatrix}, \tag{23}\] though in this case \(\mathfrak{D}\) is not diagonal. This structure expresses the fact that the two transverse spin components (\(x\) and \(y\) here) are physically equivalent. The matrix elements of \(\mathfrak{B}\) satisfy the Equation: \[\frac{\left[\mathfrak{B}\right]_{ij}}{\left[\mathfrak{B}\right]_{ji}}=[r]_{ij}, \tag{24}\] where: \[r=\begin{pmatrix}1&\frac{(I_{1}+1)(2I_{1}+3)}{I_{1}(2I_{1}-1)}&\frac{(I_{1}+1)(2I_{ 1}+3)R_{1}^{21}}{(I_{2}+1)(2I_{2}+3)R_{1}^{22}}&\frac{(I_{1}+1)(2I_{1}+3)R_{1}^{ 21}}{I_{2}(2I_{2}-1)R_{1}^{22}}\\ \frac{I_{1}(2I_{1}-1)}{(I_{1}+1)(2I_{1}+3)}&1&\frac{I_{1}(2I_{1}-1)R_{1}^{21}}{ (I_{2}+1)(2I_{2}+3)R_{1}^{22}}&\frac{I_{1}(2I_{1}-1)R_{1}^{20}}{I_{2}(2I_{2}-1) R_{1}^{22}}\\ \frac{(I_{2}+1)(2I_{2}+3)R_{1}^{22}}{(I_{1}+1)(2I_{1}+3)R_{2}^{22}}&\frac{(I_{2 }+1)(2I_{2}+3)R_{1}^{22}}{I_{1}(2I_{1}-1)R_{1}^{22}}&1&\frac{(I_{2}+1)(2I_{2}+3) }{I_{2}(2I_{2}-1)}\\ \frac{I_{2}(2I_{2}-1)R_{1}^{22}}{(I_{1}+1)(2I_{1}+3)R_{2}^{22}}&\frac{I_{2}(2I_ {2}-1)R_{1}^{22}}{I_{2}(2I_{1}-1)R_{2}^{22}}&\frac{I_{2}(2I_{2}-1)}{(I_{2}+1)(2I _{2}+3)}&1\end{pmatrix}, \tag{101}\] where for a lighter notation we made the substitution \(\text{Rb}\to 1\) and \(\text{Cs}\to 2\). To prove this, it is shown through mathematical induction that the block diagonal matrices corresponding to the various integer powers of \(A\) adhere to the condition outlined in Equation 100. The covariance matrix takes the form: \[R(\tau)=\begin{pmatrix}\mathfrak{B}\Sigma_{xx}+\mathfrak{D}\Sigma_{yx}& \mathfrak{B}\Sigma_{xy}+\mathfrak{D}\Sigma_{yy}\\ \mathfrak{B}\Sigma_{yx}-\mathfrak{D}\Sigma_{xx}&\mathfrak{B}\Sigma_{yy}- \mathfrak{D}\Sigma_{xy}\end{pmatrix}, \tag{102}\] where the \(4\times 4\) matrices \(\Sigma_{xx}\), \(\Sigma_{yy}\), \(\Sigma_{xy}\), \(\Sigma_{yx}\) (with the \(x\), \(y\) subscripts indicating Cartesian components) are defined from the following equation: \[R(0)=\begin{pmatrix}\Sigma_{xx}&\Sigma_{xy}\\ \Sigma_{yx}&\Sigma_{yy}\end{pmatrix}. \tag{103}\] The symmetry of \(R(0)\) dictates that \(\Sigma_{xy}=\Sigma_{yx}^{\top}\). Furthermore, the physical condition of identical noise behavior _at all times_ for the transverse Cartesian components gives \(\Sigma_{xx}=\Sigma_{yy}\) and (see Eq. 102): \[\mathfrak{B}\Sigma_{xx}+\mathfrak{D}\Sigma_{yx}=\mathfrak{B}\Sigma_{yy}- \mathfrak{D}\Sigma_{xy}\Rightarrow\Sigma_{yx}+\Sigma_{xy}=0. \tag{104}\] For unpolarized thermal atoms it is natural to assume that the equal time covariance between the different transverse spins is symmetric with respect to the interchange of the transverse components, i.e. \(\Sigma_{xy}\) (and similarly \(\Sigma_{yx}=\Sigma_{xy}^{\top}\)) is symmetric: \(\Sigma_{xy}=\Sigma_{yx}^{\top}\). Augmenting this with Eq. (104) we find that \(\Sigma_{xy}=\Sigma_{yx}=0\), and the covariance matrix (\(\tau\geq 0\)) takes the simpler form \[R(\tau)=\begin{pmatrix}\mathfrak{B}\Sigma&\mathfrak{D}\Sigma\\ -\mathfrak{D}\Sigma&\mathfrak{B}\Sigma\end{pmatrix}, \tag{105}\] where \(\Sigma=\Sigma_{xx}=\Sigma_{yy}\). We now consider the case where there are no equal-time correlations between the different spin components, i.e. \(\Sigma\) is diagonal. As discussed above (see Eq. 101), in this case the noise variance of the collective spin in the thermal state scales according to the expected number of atoms in the corresponding hyperfine state and the magnitude of the spin in this state. Combining this with Eq. 100, we find that the covariance block matrix \(R_{xx}(\tau)=R_{yy}(\tau)=\mathfrak{B}\Sigma\) is symmetric: \[\begin{split}\left[\mathfrak{B}\Sigma\right]_{ij}& =\sum_{k}\left[\mathfrak{B}\right]_{ik}\delta_{kj}\left[\Sigma \right]_{jj}=\left[\mathfrak{B}\right]_{ij}\left[\Sigma\right]_{jj}\\ &\overset{Eq.\eqref{eq:R_xy}}{=}\left[\mathfrak{B}\right]_{ji}\left[ \Sigma\right]_{ii}=\left[\mathfrak{B}\Sigma\right]_{ji}.\end{split} \tag{106}\] This symmetry implies that the cross-spectrum between spin components in the same transverse (Cartesian) axis is strictly real, i.e. the imaginary part is zero (see discussion in 1). Inversely, if all the cross-spectra between the 4 spins (2 hyperfine spins for each of the 2 species) are strictly real, then it can be proven that the equal time covariance matrix \(\Sigma\) is symmetric, under the physically justifiable assumption that the variances for the various spin components scale according to Eq. (101). In the following we sketch the proof of the above statement. The reality of the spectra implies that the block matrix \(\mathfrak{B}\Sigma\) is symmetric: \[\mathfrak{B}\Sigma=\Sigma^{\top}\mathfrak{B}^{\top} \tag{107}\] Considering that \(\Sigma=\Sigma^{\top}\) and the assumption for the scaling of spin variances (Eq. (101)), there remain 6 elements (i.e. the elements located above -or lower- the main diagonal) to be determined from the system of equations introduced in Eq. 107. Under general conditions, given that \(\mathfrak{B}\) satisfies the condition in Eq. 100, the system of equations becomes non-singular for these 6 elements. Consequently, solving the system reveals that these elements are all zero. ## Appendix D Light-shift noise Here, on qualitative grounds we argue that the light-shift noise induced onto the equilibrium atomic ensemble from the probe beam (also termed back-action noise [13]) is significantly smaller than the spin-noise and can therefore be ignored. The Hamiltonian describing the probe light-shift experienced by an atom in a particular hyperfine state is: \(H_{\text{LS}}\propto\hat{S}_{3}(t)\hat{F}_{x}(t)\), where \(\hat{F}_{x}(t)\) is the hyperfine angular momentum in the direction of probe propagation and \(\hat{S}_{3}(t)\) is the Stokes element measuring the flux difference between the right- and left- circular components, which for linearly polarized probe only describes the quantum polarization fluctuations. This Hamiltonian is formally equivalent to the coupling of a (white-noise) transverse magnetic field to the atoms. The rms amplitude of the effective magnetic field is at the fT level for typical experimental conditions [125]. Clearly, the insensitivity of unpolarized atomic ensembles to magnetic-fields also applies to the light-shift noise. ## Appendix E Bandwidth effect on the observed noise We here provide a brief explanation of the bandwidth effect on the recorded noise spectrum. First, we con sider the power when digitizing a signal with a finite sampling rate. While the precise relationship between the recorded data point (\(\tilde{y}\)) and the underlying actual analog signal \(y\) may slightly vary depending on the data acquisition system, we model this relationship as: \(\tilde{y}(t)=\frac{1}{\Delta T}\int_{t}^{t+\Delta T}y(t^{\prime})dt^{\prime}\), where \(1/\Delta T\) represents the sampling rate. In this case, the zero-time cross-correlation power can be expressed as \[\langle\tilde{y}_{a}\tilde{y}_{b}\rangle =\frac{1}{\Delta T^{2}}\int_{t}^{t+\Delta T}dt^{\prime}\int_{t}^{t +\Delta T}dt^{\prime\prime}\langle y_{a}(t^{\prime})y_{b}(t^{\prime\prime})\rangle \tag{100}\] \[=\frac{1}{\Delta T^{2}}\int_{t}^{t+\Delta T}dt^{\prime}\int_{t^{ \prime}-t-\Delta T}^{t^{\prime}-t}d\tau R_{ab}(\tau)\] (101) \[=\frac{1}{\Delta T^{2}}\int_{t}^{t+\Delta T}dt^{\prime}\int_{t^{ \prime}-t-\Delta T}^{t^{\prime}-t}d\tau\int_{-\infty}^{\infty}d\omega S_{ab}( \omega)e^{\imath\omega\tau}\] (102) \[=\int_{-\infty}^{\infty}S_{ab}(\omega)\left[\frac{\sin\left( \frac{\omega\Delta T}{2}\right)}{\frac{\omega\Delta T}{2}}\right]^{2}d\omega, \tag{103}\] where \(S_{ab}(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}R_{ab}(\tau)e^{-\imath \omega\tau}\) is the cross-correlation spectrum, and \(R_{ab}\) is the correlation between \(y_{a}\) and \(y_{b}\). In passing from Eq.(100) to Eq.(101) we took into account that for stationary processes the correlation \(\langle y_{a}(t^{\prime})y_{b}(t^{\prime\prime})\rangle\) only depends on the time difference \(t^{\prime}-t^{\prime\prime}\) and performed a change of variables: \((t^{\prime},t^{\prime\prime})\rightarrow(t^{\prime},\tau=t^{\prime}-t^{\prime \prime})\). If the two signals are filtered with the same filter (e.g. an anti-aliasing filter) then Eq.(103) is modified to: \[\langle\tilde{y}_{a}\tilde{y}_{b}\rangle=\int_{-\infty}^{\infty}S_{ab}(\omega )\left[\frac{\sin\left(\frac{\omega\Delta T}{2}\right)}{\frac{\omega\Delta T} {2}}\right]^{2}\Phi(\omega)d\omega, \tag{104}\] where \(\Phi(\omega)\) is the filter (in power). We now examine the noise-power of lock-in amplifier signals. For concreteness, we consider a simplified version of a lock-in amplifier signal \(\tilde{y}\), given from: \(\tilde{y}=(1/T_{\rm BW})\int_{0}^{T}y(t^{\prime})\ \exp[-(T-t^{\prime})/T_{\rm BW}]\cos( \omega_{0}t^{\prime})dt^{\prime}\), where \(\omega_{0}\) is the demodulation frequency, \(T_{\rm BW}\) is the lock-in time-constant being effectively the inverse of the measurement bandwidth (assuming 3 dB roll-off), \(y(t)\) is the actual signal, and \(T\) is the measurement time. The zero-time correlation power between two such lock-in signals is: \[\langle\tilde{y}_{a}\tilde{y}_{b}\rangle =\frac{1}{T_{\rm BW}^{2}}\int_{0}^{T}dt^{\prime}\int_{0}^{T}dt^{ \prime\prime}\langle y_{a}(t^{\prime})y_{b}(t^{\prime\prime})\rangle e^{-\frac {T-t^{\prime}}{T_{\rm BW}}}e^{-\frac{T-t^{\prime\prime}}{T_{\rm BW}}}\cos( \omega_{0}t^{\prime})\cos(\omega t^{\prime\prime}) \tag{105}\] \[=\frac{1}{T_{\rm BW}^{2}}\int_{0}^{T}dt^{\prime}\int_{t^{\prime}-T }^{t^{\prime}}d\tau R_{ab}(\tau)e^{-\frac{T-t^{\prime}}{T_{\rm BW}}}e^{-\frac {T-(t^{\prime}-\tau)}{T_{\rm BW}}}\cos(\omega_{0}t^{\prime})\cos\left[\omega_{ 0}(t^{\prime}-\tau)\right]\] (106) \[=\frac{1}{T_{\rm BW}^{2}}\int_{-\infty}^{\infty}d\omega S_{ab}( \omega)\int_{0}^{T}dt^{\prime}e^{-2\frac{T-t^{\prime}}{T_{\rm BW}}}\cos( \omega_{0}t^{\prime})\int_{t^{\prime}-T}^{t^{\prime}}d\tau e^{\frac{\tau}{T_{ \rm BW}}}\cos\left[\omega_{0}(t^{\prime}-\tau)\right]e^{\imath\omega\tau}= \int_{-\infty}^{\infty}d\omega S_{ab}(\omega)\phi(\omega),\] (107) \[\phi(\omega)=\frac{\frac{1}{T_{\rm BW}}^{2}\left[\frac{1}{T_{\rm BW }}^{2}+\left(\frac{1}{T_{\rm BW}}^{2}+\omega^{2}-\omega_{0}^{2}\right)\cos(2T \omega_{0})+2\frac{1}{T_{\rm BW}}\omega_{0}\sin(2T\omega_{0})+\omega^{2}+ \omega_{0}^{2}\right]}{2\left[\left(\frac{1}{T_{\rm BW}}^{2}+\omega^{2}\right)^ {2}+2\omega_{0}^{2}(\frac{1}{T_{\rm BW}}-\omega)(\frac{1}{T_{\rm BW}}+\omega)+ \omega_{0}^{4}\right]}\] (108) \[\approx\frac{1}{4}\frac{(1/T_{\rm BW})^{2}}{(1/T_{\rm BW})^{2}+( \omega-\omega_{0})^{2}}. \tag{109}\] The filter function \(\phi(\omega)\) was calculated in the (stationary) limit \(T\gg T_{\rm BW}\). The approximation in Eq. (109) holds for \(\omega_{0}T_{\rm BW}\gg 1\) and \(|\omega_{0}-\omega|\ll\omega_{0}\).
2306.14253
Network Optimization -- Using Relays as Neurons
We consider the optimization of a network with amplify-and-forward relays. Observing that each relay has a power limit, and hence a non-linear transfer function, we focus on the similarity between relay networks and neural networks. This similarity allows us to treat relays as neurons, and use deep learning tools to achieve better optimization of the network. Deep learning optimization allows relays to work in their non-linear regime (and hence increase their transmission power) while still avoiding harmful distortion. Moreover, like neural networks, which can implement almost any functionality, we can take advantage of the non-linearities and implement parts of the received functionalities over the relay network. By treating each relay element as a node in a deep neural network, our optimization results in huge gains over traditional relay optimization, and also allows the use of simpler receivers.
Itsik Bergel
2023-06-25T14:12:05Z
http://arxiv.org/abs/2306.14253v1
# Network Optimization - ###### Abstract We consider the optimization of a network with amplify-and-forward relays. Observing that each relay has a power limit, and hence a non-linear transfer function, we focus on the similarity between relay networks and neural networks. This similarity allows us to treat relays as neurons, and use deep learning tools to achieve better optimization of the network. Deep learning optimization allows relays to work in their non-linear regime (and hence increase their transmission power) while still avoiding harmful distortion. Moreover, like neural networks, which can implement almost any functionality, we can take advantage of the non-linearities and implement parts of the received functionalities over the relay network. By treating each relay element as a node in a deep neural network, our optimization results in huge gains over traditional relay optimization, and also allows the use of simpler receivers. ## I Introduction Relays have been essential for improving communication performance since the early days of wireless communication (e.g., [1, 2, 3]). Deploying more relays is a key to meeting the exponentially growing demand for wireless communication. However, the optimal operation of large relay networks is still infeasible. This research will focus on amplify-and-forward relays (e.g., [4, 5]), which are simple to design and optimize. In particular, these relays became more accessible by recent progress in two technologies: Energy harvesting (e.g., [6, 7]) and Full duplex communication (e.g., [8, 9]). Optimization of amplify-and-forward relay networks is far from trivial, as their performance usually presents a non convex behavior. Nevertheless, modern optimization methods know to converge to at least a local maximum for many relay networks [10, 11]. However, all traditional optimization of relay networks consider the relays as linear amplifiers with a power constraint. Thus, the use of such methods must limit the relay operation to the regime where it can be approximated as linear. As a result, these methods are forced to set a power constraint that is lower than the actual power achievable by the relay. In this work, we take a completely different approach. We observe that each relay naturally has a power limit and hence has a non-linear transfer function. Thus, we focus on the similarity between a relay network and a neural network. Neural networks have gained much popularity in recent years due to their ability to solve tough computational challenges, and in particular were good modeling of the problem is not available. The use of neural networks was suggested in many communication applications (e.g., [12, 13, 14, 15]) and even in relay applications (e.g., [16, 17, 18]). In particular [16] used neural networks for relay selection, [18] combined it also with power allocation, and [17] used a neural network for the prediction of outage probabilities. In this work, we use neural network technology, but in a completely different way. The resemblance between a non linear relay and a neuron allows us to treat relays as neurons, and use deep learning tools to achieve better optimization of the relay network. Moreover, neural networks can implement almost any functionality. Thus, through proper training, we can take advantage of the non-linearities and implement parts of the receiver functionalities over the relay network thus reducing the receiver complexity. Preliminary results show a huge \(25\)dB improvement over the state of the art, in a cellular network with 100 relays and 2 users. These results also demonstrate the ability of the relay network to non-linearly separate the signals for the two receivers. The main advantages of the proposed scheme are: * Better communication over relay networks. * New computational capabilities "over the air". * Support for distributed optimization. The main differences of the proposed scheme from other implementation of neural networks: * Relays are treated as neurons but are an actual part of the network. * Most of the network topology is determined by the channel
2303.06089
Real Option Pricing using Quantum Computers
In this work we present an alternative methodology to the standard Quantum Accelerated Monte Carlo (QAMC) applied to derivatives pricing. Our pipeline benefits from the combination of a new encoding protocol, referred to as the direct encoding, and a amplitude estimation algorithm, the modified Real Quantum Amplitude Estimation (mRQAE) algorithm. On the one hand, the direct encoding prepares a quantum state which contains the information about the sign of the expected payoff. On the other hand, the mRQAE is able to read all the information contained in the quantum state. Although the procedure we describe is different from the standard one, the main building blocks are almost the same. Thus, all the extensive research that has been performed is still applicable. Moreover, we experimentally compare the performance of the proposed methodology against the standard QAMC employing a quantum emulator and show that we retain the speedups.
Alberto Manzano, Gonzalo Ferro, Álvaro Leitao, Carlos Vázquez, Andrés Gómez
2023-03-10T17:09:59Z
http://arxiv.org/abs/2303.06089v2
# Real Option Pricing using Quantum Computers ###### Abstract We present a novel methodology to price derivative contracts using quantum computers by means of Quantum Accelerated Monte Carlo. Our contribution is an algorithm that permits pricing derivative contracts with negative payoffs. Note that the presence of negative payoffs can give rise to negative prices. This behaviour cannot be captured by existing quantum algorithms. Although the procedure we describe is different from the standard one, the main building blocks are the same. Thus, all the extensive research that has been performed is still applicable. Moreover, we experimentally compare the performance of the proposed methodology against other proposals employing a quantum emulator and show that we retain the speedups. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Classical Monte Carlo for derivatives pricing * 2.1.1 Simulation of sample paths of the underlying price * 2.1.2 Integration by Monte Carlo * 2.2 Quantum Accelerated Monte Carlo for derivatives pricing * 2.2.1 Quantum simulation * 2.2.2 Amplitude estimation * 2.2.3 QAMC simplifications and practical implementation * 3 Main contributions * 3.1 Direct encoding * 3.2 Amplitude estimation * 4 Conclusions * 5 Declarations * 5.1 Ethical Approval and Consent to participate * 5.2 Consent for publication * 5.3 Availability of supporting data * 5.4 Competing interests * 5.5 Funding * 5.6 Authors' contributions * 5.7 Acknowledgements * A Configuration of experiments * A.1 Price Estimation problems * A.2 Square root encoding Evaluation. * A.3 Direct encoding evaluation Introduction Over the last few years there has been an increasing interest in the application of quantum computing to quantitative finance. One of the reasons for such boost of popularity is because many algorithms used by financial institutions demand a high computing capacity and quantum computing promises relevant speedups in some significant relevant cases. Among the different financial applications that could benefit from the use of quantum computing, in this work we focus on the task of pricing financial derivatives. As it is well known in the literature (see for instance [14, 1]), the pricing of financial derivatives can be formulated in terms of the computation of the expectation of the derivatives payoff with respect to a given probability measure. This computation can be very consuming in terms of computational resources and is typically performed by means of Monte Carlo (MC) methods. In the context of general MC methods, the quantum computing community has proposed a quantum version which can obtain quadratic speedups for very general settings as indicated in [13]. We will refer to such techniques as Quantum Accelerated Monte Carlo (QAMC). They have been successfully applied to the problem of pricing financial derivatives (see [12, 10]). However, to the best of our knowledge, in the setups treated in the literature, it is implicitly assumed that the price and the payoff of the derivative are strictly positive. Although this is usual for a large number of financial derivatives (as is the case with most of the options traded in the markets) this is not always true. In this manuscript, we propose new quantum algorithms for solving these issues, while retaining the same speedup of previous proposals. There are three main contributions in this work. First, we propose a new encoding technique called direct encoding. Second, we employ the Real Quantum Amplitude Estimation (RQAE) algorithm[15] which is sensitive to the sign of the quantity of interest. The combination of the aforementioned components allows pricing derivative contracts with negative payoffs. Note that negative payoffs potentially entails negative prices. Finally, we perform an experimental assessment of the different encoding and amplitude estimation algorithms for some relevant financial products. The manuscript is organised as follows. Section 2 is devoted to revise the preliminaries which are at the basis of the article. More precisely, Sections 2.1 and 2.2 contain a brief summary of the classical and quantum techniques to tackle the derivative pricing problem. Section 3 is devoted to the main contributions of the present article. The first part, in Section 3.1, presents the direct encoding for negative payoffs along with its empirical evaluation. The second part, in Section 3.2, leverages the RQAE algorithm for the pricing of derivative contracts with negative prices and again performs an empirical assessment. In Section 4 we summarize the main conclusions. Appendix A contains the configuration of the numerical experiments. ## 2 Preliminaries A financial derivative is a financial contract whose price depends on the future prices of another asset or set of assets (usually referred as underlying assets). The derivative price is determined by the future cashflows that will be received which depend on the underlying asset prices. In the present work, we focus on the case where a future payoff is expected at a certain future date (maturity date) and we assume just one underlying asset. Inside this framework, we can find one of the most classical derivatives: European vanilla options. In these derivatives, the owner has the right (but not the obligation) to trade the underlying asset at a given price (strike price) at maturity. Call options provide the right to buy the underlying, while put options provide the right to sell. Therefore, the payoff at maturity is a nonlinear positive function of the asset price. If the derivatives contract involves the obligation to buy or sell (instead of the right), then the payoff function is linear and maybe negative or positive. Derivatives pricing consists of obtaining the price of the derivative at any previous date to maturity date. For this purpose, the uncertain future dynamics of the price of the underlying asset must be taken into account, which is usually modelled in terms of stochastic differential equations. More precisely, if we denote by \(S_{t}\) the price of the underlying asset at time \(t\), a general Ito process that satisfies the following Stochastic Differential Equation (SDE) can be considered: \[dS_{t}\,=\,\alpha(t,S_{t})\,dt\,+\,\beta(t,S_{t})\,dW_{t}, \tag{1}\] where \(\alpha\) and \(\beta\) are real functions to be specified for the particular model, while \(W_{t}\) denotes a Brownian motion, so that \(W_{t}\) follows a \(\mathcal{N}(0,t)\) distribution (and its increment \(dW_{t}\) follows a \(\mathcal{N}(0,dt)\)). Another ingredient in the derivatives pricing is the already mentioned payoff value. We denote by \(V_{t}\) the price of the derivative at time \(t\in[0,T]\), where \(T\) is the maturity date. Moreover, we assume the existence of a function \(V\), such that \(V_{t}=V(t,S_{t})\), i.e., the value of the derivative depends on time and on the underlying asset price through a function \(V\). As stated before, in this work we will focus on derivatives whose payoff only depends on the value of the asset at maturity, \(V_{T}=F(S_{T})\), where \(F\) represents the payoff function. Next, if we denote the strike price by \(K\), some examples of payoff are: * European call option: \(F(x)=\max(x-K,0)\). * European put option: \(F(x)=\max(K-x,0)\). * Linear payoff: \(F(x)=x-K\). Note that, while the two first payoffs are nonnegative for any value of \(S_{T}\), the third one can be either positive of negative, depending on \(S_{T}\). By using mathematical finance tools, mainly martingale properties and Ito's lemma, the following expression for price of the derivative at time \(t\) can be obtained (see for example [10]): \[V_{t}=V(t,S_{t})=e^{-\int_{t}^{T}r_{u}du}\,\mathbb{E}_{Q}[F(S_{T})|\mathcal{F} _{t}], \tag{2}\] where \(\mathbb{E}_{Q}\) denotes the expectation under the risk neutral measure \(Q\), \(r_{u}\) is the risk-free interest rate at time \(u\), \(F\) defines the payoff of the derivative and \(\mathcal{F}_{t}\) denotes the filtration containing the information until time \(t\). In this way, expression (2) indicates that the value of the derivative is the discounted price of the expected value of the payoff, conditioned to the current information of the market. In view of the pricing expression (2), the valuation of these financial derivatives mainly requires the computation of the involved expectation. Next, we briefly introduce one of the most popular techniques for computing such expectation, namely the Monte Carlo method. We revise the Classical Monte Carlo (CMC) in Section 2.1 and the Quantum Accelerated Monte Carlo (QAMMC) in Section 2.2. ### Classical Monte Carlo for derivatives pricing The CMC method for derivatives pricing in finance is composed of two steps: 1. Simulation of sample paths of the underlying asset by means of the numerical solution of the stochastic differential equation (1). 2. Use of Monte Carlo integration to compute the expectation that appears in expression (2). #### 2.1.1 Simulation of sample paths of the underlying price For the simulation of the sample paths, there exist several numerical methods for solving the associated SDE. To illustrate the whole procedure we will use hereon the Euler-Maruyama method as an example. The application of the Euler-Maruyama scheme to the general SDE (1) leads to the expression [1]: \[S_{t+\Delta t}\;=\;S_{t}+\alpha(t,S_{t})\Delta t\,+\,\beta(t,S_{t})\sqrt{ \Delta t}Z, \tag{3}\] where \(Z\) is the standard normal random variable, i.e. with mean equal to \(0\) and variance equal to \(1\), and \(\Delta t\) is the time step that is considered in the numerical method. In this case, \(\Delta t=(T-t)/M\), being the number of time steps, \(T\) the maturity date and \(t\) the initial time. By using Equation (3), it is straightforward to produce samples of \(S_{T}\), starting from a value of \(S_{t}\) at time \(t\). More precisely, we proceed as follows: * Start with an initial point \(S_{t}\). * Draw a sample of the standard normal random variable \(Z\). * Compute \(S_{t+\Delta t}\) from the random sample generated in the previous step. * Repeat the previous process, starting from the previously calculated value, until a sample of \(S_{T}\) is obtained. #### 2.1.2 Integration by Monte Carlo The procedure described in Section 2.1.1 generates samples \(S_{T}^{i}\) of the random variable \(S_{T}\), which can be then used to estimate the expectation that appears in expression (2) as follows: \[\mathbb{E}[F(S_{T})|\mathcal{F}_{t}]=\frac{1}{N}\sum_{i=0}^{N-1}F(S_{T}^{i})+ \epsilon_{CMC}+\epsilon_{EM}. \tag{4}\] In expresssion (4), \(N\) is the number of samples (paths) \(S_{T}^{i}\) generated by numerically solving the SDE, \(F\) is the payoff function of the target derivatives contract, \(\epsilon_{CMC}\) is the error due to the Monte Carlo approximation of the expectation and \(\epsilon_{EM}\) is the error due to the Euler-Maruyama scheme. The statistical error \(\epsilon_{CMC}\) scales as [1]: \[\epsilon_{CMC}\sim\frac{1}{\sqrt{N}}. \tag{5}\] The order of the error due to the Euler-Maruyama scheme is [13]: \[\epsilon_{EM}\sim\Delta t\sim\frac{1}{M}. \tag{6}\] Taking the definition of the total cost of the algorithm, \(C_{\text{CMC}}\) as the number of calls to the Euler-Maruyama formula defined in Equation (3), it is straightforward to derive that the total cost is approximately \(C_{\text{CMC}}\approx MN\). Hence, the overall error of the algorithm \(\epsilon\) scales with the cost as: \[\epsilon\sim\frac{1}{\sqrt{C_{\text{CMC}}}}. \tag{7}\] ### Quantum Accelerated Monte Carlo for derivatives pricing The QAMC for pricing has three main ingredients: * A quantum circuit which samples paths with the same probability as the classical circuit. * An operator which encodes the payoff of the specific derivative contract into the quantum state. * An amplitude estimation routine, which allows to retrieve the quantity of interest from the amplitude of a quantum state (and produces the actual speedup). In Section 2.2.1 we briefly discuss the first issue, while we reserve Section 2.2.2 for the second and third issues. #### 2.2.1 Quantum simulation The QAMC algorithm begins by creating a state in superposition where the probabilities of each path match those of the classical process discretized by using some numerical scheme such as the Euler-Maruyama formula. In order to build the algorithm, \(M+1\) different registers are needed, one for each time step. The first \(M\) registers are composed of two registers of \(n_{qb}\) qubits each (see Figure 1a): \[\left[\left|0\right\rangle\left|0\right\rangle\right]_{0}\otimes\left[\left|0 \right\rangle\left|0\right\rangle\right]_{1}\otimes...\otimes\left|0\right\rangle _{M}, \tag{8}\] where \(\left[\left|0\right\rangle\left|0\right\rangle\right]_{m}=\left[\left|0\right\rangle ^{\otimes n_{qb}}\left|0\right\rangle^{\otimes n_{qb}}\right]_{m}\), \(m=0,\ldots,M\) and \(\left|0\right\rangle_{M}=\left|0\right\rangle^{\otimes n_{qb}}\). Each of the individual registers \(\left|0\right\rangle^{\otimes n_{qb}}\) will be used to represent a decimal number. For simplicity, it can be understood as a single precision register. In order to generate a state in superposition which matches the probabilities of each path defined by the Equation (3), we need a standard normal sample generator. In the QAMC algorithm, this generator is represented by the unitary operator \(U_{Z}\) which performs the following transformation: \[U_{Z}\left|0\right\rangle\left|0\right\rangle=\sum_{j=0}^{J-1}\sqrt{p_{Z}(x_{ j})}\left|0\right\rangle\left|x_{j}\right\rangle=\left|0\right\rangle \left|x\right\rangle, \tag{9}\] where \(x\) is a set of \(J\) numbers that can be represented by the \(n_{qb}\) qubits from the individual registers and \(p_{Z}(x)\) is a discretized version of the standard normal probability distribution defined in the set of points \(x=\{x_{0},x_{1},...,x_{J-1}\}\). Note that, in general, \(J\) does not need to be equal to \(2^{n_{qb}}\). The efficiency of the transformation (9) is crucial for the overall efficiency of the algorithm. In the best case, this efficiency can be achieved using \(O(\log_{2}(J))\) gates (see [1]). In the worst case, it can be achieved in \(O(J\log_{2}(J))\) combining the results in [1] and [2]. The first step requires applying the operator \(U_{Z}\) to one of the members of all pairs \(\left[\left|0\right\rangle\left|0\right\rangle\right]_{i}\), thus obtaining the state: \[\left[U_{Z}\left|0\right\rangle\left|0\right\rangle\left|0\right\rangle \right]_{0}\otimes\left[U_{Z}\left|0\right\rangle\left|0\right\rangle\right] _{1}\otimes...\otimes\left|0\right\rangle_{M}=\left[\sum_{j=0}^{J-1}\sqrt{p_{Z} (x_{j})}\left|0\right\rangle\left|x_{j}\right\rangle\right]_{0}\otimes\left[ \sum_{j=0}^{J-1}\sqrt{p_{Z}(x_{j})}\left|0\right\rangle\left|x_{j}\right\rangle \right]_{1}\otimes...\otimes\left|0\right\rangle_{M}. \tag{10}\] In this configuration, the amplitudes encode the square root of the probabilities for all the different combinations of \(x\) in the different steps. In order to continue, the left register in the pair \(\left[\left|0\right\rangle\left|0\right\rangle\right]_{0}\) has to be initialised to \(\left|S_{t}\right\rangle\): \[\left[U_{S_{t}}\left|0\right\rangle\left|y\right\rangle\right]_{0}=\left[\left| S_{t}\right\rangle\left|y\right\rangle\right]_{0},\quad\forall\,\left|y \right\rangle. \tag{11}\] Figure 1 depicts schematically this process. Once the circuit is correctly initialised, an evolution operator must be applied. This evolution operator acts upon three individual registers as follows: \[U_{\Delta t}\left[\left|S_{t+m\Delta t}\right\rangle\left|x\right\rangle\right]_{ m}\left[\left|0\right\rangle\right]_{m+1}\longrightarrow\left[\left|S_{t+m\Delta t }\right\rangle\left|x\right\rangle\right]_{m}\left[\left|S_{t+(m+1)\Delta t}(S_ {t+m\Delta t},x)\right\rangle\right]_{m+1}, \tag{12}\] where the update rule is given, for instance, by Equation (3). After repeatedly applying the operator \(U_{\Delta t}\) defined in Equation (12), the final quantum state \(\left|S\right\rangle\) is: \[\left|S\right\rangle:=U_{S}\left|0\right\rangle=\sum_{k=0}^{\mathcal{K}-1} \sqrt{p_{S}(S_{k})}\left|S_{k}\right\rangle, \tag{13}\] where \(\mathcal{K}=J^{M}\) are the number of possible paths defined by the given (space and time) discretization and \(p_{S}(S_{k})\) is the probability of generating the path \(S_{k}\). Figure 2 depicts schematically this process. Figure 1: Circuit initialisation. Figure 2: Sketch description of the construction of the oracle defined in Equation (13). So far, a quantum circuit has been built which samples paths with the same probability as the classical circuit does. Moreover, the computational cost of one execution of the circuit is equivalent to one execution of the classical circuit. In other words, the number of gates needed to sample one path from the classical and the quantum circuit is "the same", since the classical circuit can always be translated to a quantum one using Toffoli gates (see [11]). However, note that classical and quantum gates are not directly comparable. #### 2.2.2 Amplitude estimation As discussed in the previous section, up to this point the quantum and the classical circuit have the same complexity. Nevertheless, when the error correction is taken into consideration, the current quantum gates are much slower than the analogous classical ones. Next, the mechanism that produces an speedup is briefly detailed. Once the state \(\ket{S}\) in Equation (13) is generated, the next step is to define the operator \(U_{\sqrt{F}}\) such that pushes the square root of the derivatives payoff \(F\) into the amplitude. For this reason, we will call this way of encoding _square root encoding_. In order to do that, an additional single qubit register is needed: \[\ket{\sqrt{F}}=U_{\sqrt{F}}\ket{S}=\frac{1}{\|\sqrt{F(S)}\|_{\infty}}\sum_{k=0 }^{\mathcal{K}-1}\sqrt{p_{S}(S_{k})F(S_{k})}\ket{S_{k}}\ket{0}+\sqrt{p_{S}(S_{ k})\left(1-F(S_{k})\right)}\ket{S_{k}}\ket{1}. \tag{14}\] For simplicity the term \(\|\sqrt{F(S)}\|_{\infty}\) is considered to be equal to one. In other words, it is assumed to be properly normalised. Moreover, it is tacitly assumed that the operator \(U_{\sqrt{F}}\) can be efficiently implemented. Figure 3 depicts schematically the overall process. Finally, the expectation of the payoff can be approximated by estimating the probability of obtaining \(\ket{0}\) in the last register: \[\mathbb{E}[F(S_{T})|\mathcal{F}_{t}]=\sum_{k=0}^{\mathcal{K}-1}p_{S}(S_{k})F(S _{k})+\epsilon_{\mathcal{K}}+\epsilon_{EM}\approx P_{\ket{0}}+\epsilon_{ \mathcal{K}}+\epsilon_{EM}+\epsilon_{\text{QAMC}}, \tag{15}\] where \(\sum_{k=0}^{\mathcal{K}-1}p_{S}(S_{k})F(S_{k})\) is the discretized expectation, \(\epsilon_{\text{EM}}\) is same Euler-Maruyama error as in Equation (4), \(\epsilon_{\mathcal{K}}\) is the discretization error, \(\epsilon_{\text{QAMC}}\) is the sampling error and the expression \[P_{\ket{0}}=\sum_{k=0}^{\mathcal{K}-1}\ket{p_{S}(S_{k})F(S_{k})}\] is proportional to the exact probability of obtaining the state \(\ket{0}\) in the last register. Using amplitude estimation techniques, the sampling error \(\epsilon_{QAMC}\) is of order [1]: \[\epsilon_{\text{QAMC}}\sim\frac{1}{Q}, \tag{16}\] Figure 3: Scheme of the generation of the oracle. The gate \(U_{S}\) corresponds to Equation (13). The gate \(U_{\sqrt{F}}\) corresponds to equation (14). with \(Q\) being the number of calls to the oracle defined by Equation (14). Recall that this oracle is strictly the same as in the classical algorithm described in Section 2.1.1 except for the application of operator \(U_{\sqrt{F}}\). Thus, each call to the oracle is equivalent to \(M\) executions of the Euler-Maruyama scheme, leading to a total cost of \(C_{\mathrm{QAMC}}=MQ\). Table 1 shows the computational cost, measured in the number of evaluations of the Euler-Maruyama formula, of each of the CMC and the QAMC. It can be easily seen that the QAMC performs quadratically better than the CMC. Moreover, the same scaling applies when we increase the number of dimensions. #### 2.2.3 QAMC simplifications and practical implementation So far, we have described the general setup of QAMC for pricing. A brief estimation tells us that we would require the order of hundreds or thousands of qubits to build the algorithm with single precision registers and a few time steps. With the current hardware, this is not feasible. Hence, in order to conceptually test this technique, we need to perform several simplifications. If we assume only European payoffs we can make the first simplification since we do not need to store the whole paths for the underlying. Instead, we will consider that we have just one register which encodes the value of the underlying and we will rewrite it on each step of the Euler-Maruyama scheme. The next simplification would be reducing as much as possible the number of time steps. In the limit, we could perform a single time step. Nevertheless, in numerical schemes such as Euler-Maruyama, a very big time step produces a very big error. For that reason, we restrict ourselves to models where we know how to do exact simulation, thus avoiding the need of doing several steps. This happens when we can obtain the exact solution of the governing SDE. Thus, in this work, we will consider the classical (and well known) model given by the following Black-Scholes SDE under the risk neutral measure [1] \[dS_{t}\;=\;rS_{t}\,dt\,+\,\sigma S_{t}\,dW_{t}, \tag{17}\] where \(r\) denotes the risk-free rate and \(\sigma\) is the volatility of the underlying asset price. As the expression of the exact solution of SDE (17) is known, starting for a given value \(S_{t}\), the simulation of the random variable \(S_{T}\) under the risk-neutral measure can be exactly carried out in one step by: \[S_{T}=S_{t}\exp\left(\left(r-\frac{1}{2}\sigma^{2}\right)(T-t)+\sigma Z\sqrt{ T-t}\right). \tag{18}\] Under the previously simplified setting, we will only need two-three registers. The first one for storing the initial value of the underlying \(S_{t}\), the second one would be the register for the standard normal \(Z\) and the third one for the final value \(S_{T}\). Note that, since we only perform one step \(S_{t}\) is not strictly necessary, it can be hardcoded in the operator \(U_{\Delta t}\) leaving us with only two registers. In yet another simplification, we assume that, instead of having a unitary operator \(U_{Z}\) which encodes the standard normal, we have an analogous unitary \(U_{BS}\) which encodes the Black-Scholes distribution \(p_{BS}\). With this last simplification, we only need a single register to perform the whole simulation. Note that, regardless all the simplifications, the algorithm is conceptually the same: we have a quantum circuit specified by the oracle \(U_{S}=U_{BS}\) which generates samples for the underlying price with the correct probability distribution. \begin{table} \begin{tabular}{|c|c|} \hline & Error \\ \hline CMC & \(O(1/\sqrt{C_{\mathrm{CMC}}})\) \\ \hline QAMC & \(O(1/C_{\mathrm{QAMC}})\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of order of the errors for the CMC and the QAMC in one dimension. Both \(C_{\mathrm{CMC}}\) and \(C_{\mathrm{QAMC}}\) denote the number of evaluations of the Euler-Maruyama formula. Throughout the manuscript, we will show different numerical experiments, all of them performed under the simplifications described in this section. For any other content, we will keep ourselves within the general setting from previous sections. For more information on how the experiments are performed, we refer the reader to Appendix A. To wrap up this section, in Figure 4 we show the results of the QAMC algorithm for different payoffs and amplitude estimation algorithms. If tweaked properly, the best performance is obtained with the Maximum Likelihood Amplitude Estimation (MLAE) [11]. However, this algorithm requires a fine tuning in the schedule in order to get the best results. For more information on this issue see for example [1]. If no fine tuning is performed, the best algorithm is the Iterative Quantum Amplitude Estimation (IQAE)[10] as it is more stable. The standard (classical) amplitude estimation algorithm (here denoted by CAE) [1] is stable but, in general, has worse performance. Figure 4: Absolute error between the QAMC algorithm and the discretized expectation for different number of calls to the oracle. Each of the panels corresponds to a different payoff (see A.1 for details) and each line corresponds to a different amplitude estimation algorithm. For a detailed description for the parameters of each simulation, see Table 5. The experiments have been performed using the square root encoding. Main contributions As it can be seen in Section 2.2.2, by sampling from the quantum circuit an estimation of: \[\sum_{k=0}^{\mathcal{K}-1}\left|p_{S}(S_{k})F(S_{k})\right|, \tag{19}\] can be obtained. Note that the payoff is assumed to be normalised so that \(\|\sqrt{F(S)}\|_{\infty}=1\). Nevertheless, it is important to note that, for derivatives whose payoffs can become negative, this method will not yield to correct prices approximations. In order to illustrate this, suppose that there is a payoff of the form: \[F(S_{T})=S_{T}-K, \tag{20}\] with \(T\) being the maturity of the contract, \(S_{T}\) the price of the underlying at maturity and \(K\) the strike price of the contract (see [12] for details). In Figure 5, when the square root encoding is used in combination with some of the most well known amplitude estimation algorithms in the literature, it is easily seen that there is no convergence to the real value because of the presence of the absolute value. In Sections 3.1 and 3.2 we develop an strategy to overcome this issue which consists of two parts. On the one hand, a new encoding is proposed. On the other hand, a different amplitude estimation technique is employed. ### Direct encoding The _direct encoding_ algorithm starts from the same initial state \(\left|S\right>\): \[\left|S\right>=U_{S}\left|0\right>=\sum_{k=0}^{\mathcal{K}-1}\sqrt{p_{S}(S_{k} )}\left|S_{k}\right>, \tag{21}\] where \(\mathcal{K}\) is again the number of possible paths defined by the given discretization. The next step is to define the operator \(U_{\overline{F}}\) such that pushes the payoff without squared roots into the amplitude. In order to do that, an additional single qubit register is needed: \[\left|F\right>=U_{F}\left|S\right>=\frac{1}{\|F(S)\|_{\infty}}\sum_{k=0}^{ \mathcal{K}-1}\sqrt{p_{S}(S_{k})}F(S_{k})\left|S_{k}\right>\left|0\right>+ \sqrt{p_{S}(S_{k})}\left(1-F(S_{k})\right)\left|S_{k}\right>\left|1\right>. \tag{22}\] Figure 5: Absolute error between the QAMC algorithm and the discretized expectation of an option with a payoff \((S_{T}-K)\) with \(K=S_{t}\) for different number of calls to the oracle. Each line corresponds to a different amplitude estimation algorithm. For a detailed description for the parameters of each simulation, see Table 5. The experiments have been performed using the square root encoding. Next, we project the state \(\left|F\right\rangle\) into \(\left\langle S\right|\left\langle 0\right|\): \[\left\langle S\right|\left\langle 0\right|F\right\rangle=\frac{1}{\|F(S)\|_{ \infty}}\sum_{k=0}^{\mathcal{K}-1}p_{S}(S_{k})F(S_{k}). \tag{23}\] In practice, instead of projection, we will apply the adjoint of \(U_{S}\) and then measuring the eigenstate zero (see Figure 6), hence obtaining: \[\sqrt{\bar{P}_{\left|0\right\rangle}}=\left|\left\langle 0\right|U_{S}^{\dagger}U_{ F}U_{S}\left|0\right\rangle\right|=\frac{1}{\|F(S)\|_{\infty}}\left|\sum_{k=0}^{ \mathcal{K}-1}p_{S}(S_{k})F(S_{k})\right|. \tag{24}\] Do not confuse \(P_{\left|0\right\rangle}\) from the square root encoding from the \(\bar{P}_{\left|0\right\rangle}\) in the direct encoding. The former one refers to the probability of measuring zero in the last register when the unitary \(U_{\sqrt{F}}U_{S}\) is applied. The latter one refers to the probability of measuring the eigenstate zero when the unitary \(U_{S}^{\dagger}U_{F}U_{S}\) is applied. Hence, an estimation of the discretized expectation is obtained as: \[\left|\sum_{k=0}^{\mathcal{K}-1}p_{S}(S_{k})F(S_{k})\right|=\|F(S)\|_{\infty} \sqrt{\bar{P}_{\left|0\right\rangle}}. \tag{25}\] Again, in order to achieve a speedup, an amplitude estimation algorithm is needed. As shown in Figure 7, when the direct encoding is used, the result converges to the discretized expectation even when the payoff takes negative values, which is a novelty when compared with other proposed methods in the literature, that cannot deal with these negative values. However, the presence of the outer absolute value in Equation (25) still prevents from a correct estimation when negative expectations arise. Figure 6: Scheme of the proposed direct encoding. Figure 7: Absolute error between the QAMC algorithm and the discretized expectation of an option with a payoff \((S_{T}-K)\) with \(K=S_{t}\) for different number of calls to the oracle. Each line corresponds to a different amplitude estimation algorithm. For a detailed description for the parameters of each simulation, see Table 5. The experiments have been performed using the direct encoding. ### Amplitude estimation At the end of the previous section it was discussed that the discretized expectation can be estimated through the probability of measuring state \(|0\rangle\): \[\sqrt{\bar{P}_{|0\rangle}}\propto\left|\sum_{k=0}^{\mathcal{K}-1}p_{S}(S_{k})F(S _{k})\right|.\] Thus, this partially solves the initial problem. Instead of obtaining the sum of absolute values, something proportional to the absolute value of the sum is returned. Hence, in a situation where the sign of the expectation is of interest, an additional mechanism to overcome this issue is needed. In fact, this is usually the case in financial applications, where the sign is the difference between a profit and a loss. For this case, we propose to use of the Real Quantum Amplitude Estimation (RQAE) algorithm. Its main property is that it is able to recover the amplitude of the quantum state plus some information on the quantum phase, namely the sign. Another interesting property is that it has a free parameter \(q\), called amplification ratio, which allows the user to control the overall depth of the circuit. Bigger values of \(q\) yields shallower circuits but slower convergence, while smaller values of \(q\) yields deeper circuits but faster convergence. In Figure 8, an example where the price of the derivative becomes negative is shown. As it can be seen, RQAE is the only algorithm that converges to the exact solution. Further, in Figure 9 we show that the RQAE is also competitive with the most advanced algorithms from the literature in terms of number of calls to the oracle. Figure 8: Absolute error between the QAMC algorithm and the discretized expectation of an option with a payoff \((S_{T}-K)\) with \(K=1.5S_{t}\) for different number of calls to the oracle. Each line corresponds to a different amplitude estimation algorithm. For a detailed description for the parameters of each simulation, see Table 5. The experiments have been performed using the direct encoding. ## 4 Conclusions In this work, we have presented two novel proposals for using quantum computers for derivative pricing. On the one hand, we have introduced a new encoding algorithm for the payoff of the derivatives contract, the direct encoding. On the other hand, the use of RQAE as the amplitude estimation routine in the overall pipeline is proposed. The combination of the two allows pricing derivative products with negative payoffs. Moreover, through Sections 3.1 and 3.2, we perform different experiments to validate our methodology. In particular, we have shown that existing alternatives fail for payoffs as simple as \(F(x)=x-K\) that involve negative values. Further, the proposed methodology is competitive in terms of "speed" with the best known algorithms. Although in theory QAMC gets a quadratic speedup over CMC, there are still many issues to solve in practice, specially if we take into consideration the current state-of-the-art hardware constraints. First, the implementation of the oracle \(U_{S}\) as explained in Section 2.2.1 requires an excessively large number of qubits. Second, the depths required by the current Grover-like routines are not feasible under the current decoherence times. Finally, the total number of gates when combining the implementation of oracle \(U_{S}\) with a Grover-like algorithm requires a gate error beyond the capabilities of the current technology. In future works, we will explore how to face the different previously pointed problems, with special focus in efficiently initializing states that approximately encode a target probability distribution. Figure 9: Absolute error between the QAMC algorithm and the discretized expectation for different number of calls to the oracle. Each of the panels corresponds to a different payoff and each line corresponds to a different amplitude estimation algorithm. For a detailed description for the parameters of each simulation, see Table 5. Declarations ### Ethical Approval and Consent to participate Not applicable. ### Consent for publication Not applicable. ### Availability of supporting data Not applicable. ### Competing interests The authors declare that they have no competing interests. ### Funding All authors acknowledge the European Project NExt ApplicationS of Quantum Computing (NEASQC), funded by Horizon 2020 Program inside the call H2020-FETFLAG-2020-01(Grant Agreement 951821). A. Manzano, A. Leitao and C. Vazquez wish to acknowledge the support received from the Centro de Investigacion de Galicia "CITIC", funded by Xunta de Galicia and the European Union (European Regional Development Fund- Galicia 2014-2020 Program), by grant ED431G 2019/01. ### Authors' contributions ### Acknowledgements The authors would like to thank Vedran Dunjko and Emil Dimitrov for fruitful discussions on some aspects of the present work. The most part of the computational resources for this work were provided by the Galician Supercomputing Center (CESGA).
2305.02161
The Influence of Nitsche Stabilization on Geometric Multigrid for the Finite Cell Method
Immersed finite element methods have been developed as a means to circumvent the costly mesh generation required in conventional finite element analysis. However, the numerical ill-conditioning of the resultant linear system of equations in such methods poses a challenge for iterative solvers. In this work, we focus on the finite cell method (FCM) with adaptive quadrature, adaptive mesh refinement (AMR) and Nitsche's method for the weak imposition of boundary conditions. An adaptive geometric multigrid solver is employed for the discretized problem. We study the influence of the mesh-dependent stabilization parameter in Nitsche's method on the performance of the geometric multigrid solver and its implications for the multilevel setup in general. A global and a local estimate based on generalized eigenvalue problems are used to choose the stabilization parameter. We find that the convergence rate of the solver is significantly affected by the stabilization parameter, the choice of the estimate and how the stabilization parameter is handled in multilevel configurations. The local estimate, computed on each grid, is found to be a robust method and leads to rapid convergence of the geometric multigrid solver.
S. Saberi, G. Meschke, A. Vogel
2023-05-03T14:49:11Z
http://arxiv.org/abs/2305.02161v1
# The Influence of Nitsche Stabilization on Geometric Multigrid for the Finite Cell Method+ ###### Abstract Immersed finite element methods have been developed as a means to circumvent the costly mesh generation required in conventional finite element analysis. However, the numerical ill-conditioning of the resultant linear system of equations in such methods poses a challenge for iterative solvers. In this work, we focus on the finite cell method (FCM) with adaptive quadrature, adaptive mesh refinement (AMR) and Nitsche's method for the weak imposition of boundary conditions. An adaptive geometric multigrid solver is employed for the discretized problem. We study the influence of the mesh-dependent stabilization parameter in Nitsche's method on the performance of the geometric multigrid solver and its implications for the multilevel setup in general. A global and a local estimate based on generalized eigenvalue problems are used to choose the stabilization parameter. We find that the convergence rate of the solver is significantly affected by the stabilization parameter, the choice of the estimate and how the stabilization parameter is handled in multilevel configurations. The local estimate, computed on each grid, is found to be a robust method and leads to rapid convergence of the geometric multigrid solver. **Keywords** Adaptive geometric multigrid Nitsche's method Immersed finite element Finite cell method Weak boundary conditions ## 1 Introduction Numerical approximation of partial differential equations on complex domains is a labor-intensive process, where the generation of an appropriate boundary-conforming computational mesh can constitute a salient portion of the simulation workflow. This has motivated the development of a class of advanced discretization methods that can broadly be categorized under the umbrella of immersed or unfitted finite element methods, namely eXtended FEM (XFEM) [1], finite cell method (FCM) [2, 3] and cutFEM [4] methods fall under this category. The classification of each method depends on the specific techniques that are used for domain integration and imposition of boundary conditions. Lagrange multipliers [5, 6, 7, 8, 9, 10], the penalty method [11, 12], Nitsche's method [13, 14, 15, 16, 17, 18, 19, 20], the LS-Nitsche's method [21, 22], etc. are some of the techniques that are commonly used for the imposition of boundary conditions. Both the penalty method and Nitsche's method impose essential boundary conditions in a weak sense, i.e., the variational formulation is modified to account for essential boundary conditions rather than introducing explicit constraints on the state variables. In addition, they introduce no additional unknowns and preserve the symmetry, positive-definiteness and banded structure of the matrix. In contrast to Nitsche's method, the penalty method is not variationally consistent [5, 18] and may lead to severe ill-conditioning of the system. The finite cell method makes use of adaptive quadrature for domain integration and a weak imposition method for essential boundary conditions, [2, 3, 23] and Nitsche's method is usually preferred in the context of FCM. [24, 25, 26] From a computational point of view, one of the main hurdles for the employment of such numerical methods is the solution of the discretized problem that has often limited their deployment on large scale problems. While direct solvers are robust and solve the system to machine accuracy, they normally suffer from sub-optimal complexity and become extremely expensive for large systems [27]. On the other hand, iterative solvers can offer much better complexity and concurrency on parallel machines; however, typically their convergence highly depends on the spectrum of the system matrix and is therefore mesh dependent. [27] Multigrid methods have been successfully used to remove the mesh dependence of iterative solvers in the finite element discretization of different classes of PDEs. [28] In the context of unfitted finite element methods, it is well known that small cut fractions can lead to severe ill-conditioning of the system matrix and, therefore, limit the usability of iterative solvers. A geometric multigrid solver for three CutFEM formulations [29] and for XFEM [30] using the Nitsche's method was recently studied. Geometric multigrid has been recently studied for the finite cell method with a penalty formulation [31] and with the Nitsche's method. [32, 33] In this work, we employ a finite cell formulation of the Poisson problem where adaptive quadrature and Nitsche's method are used for domain integration and the imposition of boundary conditions, respectively. We formulate a geometric multigrid solver for the solution of the resultant system and investigate the influence of the stabilization parameter in Nitsche's method on the performance of the geometric multigrid solver for the finite cell formulation. The sensitivity of the solver to variations in the stability parameter is studied and possible methods for the estimation of an appropriate stability parameter are explored. Furthermore, we discuss the implications of the mesh-dependent nature of the stability parameter for multilevel methods. The rest of this work is organized as follows. The numerical formulation of the model problem, the weak imposition of essential boundary conditions, the finite cell formulation and the geometric multigrid solver are described in detail in Section 2. The geometric multigrid solver is studied using a number of numerical benchmarks in Section 3 and the results are discussed. Finally, conclusions are drawn in Section 4. ## 2 Numerical methodology ### Model problem As the model problem in this work, we use the Poisson equation as a representative elliptic partial differential equation which can be written in strong form as \[\begin{split}-\nabla^{2}u&=f\qquad\text{in}\; \Omega,\\ u&=g\qquad\text{on}\;\Gamma_{D},\\ \nabla u\cdot\boldsymbol{n}&=t\qquad\text{on}\; \Gamma_{N},\end{split} \tag{1}\] where \(u\) is the scalar solution variable, \(\Omega\) is the spatial domain, \(\Gamma=\Gamma_{D}\cup\Gamma_{N}\) is the boundary of the domain, \(\Gamma_{D}\) and \(\Gamma_{N}\) are the Dirichlet and Neumann parts of the boundary, respectively, \(f\) is the body force function and \(\boldsymbol{n}\) is the unit-length outer normal vector to the boundary. \(g\) and \(t\) are prescribed functions on \(\Gamma_{D}\) and \(\Gamma_{N}\), respectively. Multiplying the strong form with appropriate test functions, integrating over the domain and using the Green's theorem, the boundary-conforming weak form can be written as: Find \(u\in V\) such that for all \(v\in V_{0}\) \[\int_{\Omega}\nabla v\cdot\nabla u\;d\boldsymbol{x}-\int_{\Gamma_{D}}v( \nabla u\cdot\boldsymbol{n})\;d\boldsymbol{s}=\int_{\Omega}vf\;d\boldsymbol {x}+\int_{\Gamma_{N}}vt\;d\boldsymbol{s}, \tag{2}\] where \(v\) are the test functions and \[\begin{split} V&:=\{u\in H^{1}(\Omega)\mid u=g\text{ on }\Gamma_{D}\},\\ V_{0}&:=\{v\in H^{1}(\Omega)\mid v=0\text{ on } \Gamma_{D}\},\end{split} \tag{3}\] where \(H^{1}\) is the Sobolov space. We note that the term on \(\Gamma_{D}\) in Equation (2) vanishes in the boundary-conforming case as the essential boundary conditions are included in the Space \(V\). ### Immersed finite element method The physical domain \(\Omega\) is approximated using a boundary-conforming tessellation in classical finite element methods. However, given that \(\Omega\) can be arbitrarily complex, a fact which is directly reflected in the effort required for the generation of a boundary-conforming tessellation of the domain, immersed finite elements method embed \(\Omega\) in an embedding domain \(\Omega_{e}\), as shown in Figure 1. The computational domain is thereby extended by the fictitious part \(\Omega_{e}\setminus\Omega\). The embedding of \(\Omega\) in \(\Omega_{e}\) necessitates two overarching modifications to the standard finite element weak formulation. On the one hand, the weak formulation must recover the physical domain \(\Omega\) which is achieved using a penalization factor as explained in detail below. On the other hand, as opposed to the boundary-conforming case, in which the space \(V\) includes the essential boundary conditions, Equation 2 must be modified to impose essential boundary conditions weakly in immersed methods since the physical boundary \(\Omega\) is not guaranteed to conform to the boundary of the computational domain \(\Omega_{e}\). We start with the latter, namely the weak imposition of essential boundary conditions, for which we use the symmetric Nitsche's method in this work. Applying the Nitsche's method, the weak formulation in Equation (2) turns into \[\begin{split}\int_{\Omega}\nabla v\cdot\nabla u\;d\mathbf{x}-\int_{ \Gamma_{D}}v(\nabla u\cdot\mathbf{n})\;d\mathbf{s}-\int_{\Gamma_{D}}(u-g)(\nabla v \cdot\mathbf{n})\;d\mathbf{s}+\int_{\Gamma_{D}}\lambda v(u-g)\;d\mathbf{s}=\\ \int_{\Omega}vf\;d\mathbf{x}+\int_{\Gamma_{N}}vt\;d\mathbf{s},\end{split} \tag{4}\] where \(\lambda\) is the scalar stabilization parameter in the Nitsche's method. The third term on the left-hand side of Equation (4) is the symmetric consistency term ensuring that the symmetry of the original weak form is retained. The fourth term on the left-hand side of Equation (4) is the stabilization term ensuring that the boundary conditions are satisfied and that the formulation is stable for a large enough \(\lambda\). The weak formulation in Equation 4 is in the next step extended to the embedding domain as follows: \[\begin{split}\int_{\Omega_{e}}\alpha\nabla v\cdot\nabla u\;d\mathbf{ x}-\int_{\Gamma_{D}}v(\nabla u\cdot\mathbf{n})\;d\mathbf{s}-\int_{\Gamma_{D}}(u-g)( \nabla v\cdot\mathbf{n})\;d\mathbf{s}+\int_{\Gamma_{D}}\lambda v(u-g)\;d\mathbf{s}=\\ \int_{\Omega_{e}}\alpha vf\;d\mathbf{x}+\int_{\Gamma_{N}}vt\;d\mathbf{s}, \end{split} \tag{5}\] where \[\begin{cases}\alpha=1,&\text{in}\;\Omega,\\ \alpha=0,&\text{in}\;\Omega_{e}\setminus\Omega,\end{cases} \tag{6}\] where \(\alpha\) is the scalar penalization factor. The physical domain is essentially recovered by penalizing the part of the embedding domain that lies outside of the physical domain. We note that in practice, in the fictitious part \(\Omega_{e}\setminus\Omega\), a small positive, non-zero value \(\alpha\ll 1\) is used instead of zero in order to avoid severe numerical ill-conditioning. Let \(\mathcal{T}_{h}:=\{K_{i}\}_{i=1}^{n_{K}}\) be a tessellation of the computational domain \(\Omega_{e}\) into a set of \(n_{K}\) compact, connected, Lipschitz sets \(K_{i}\) with non-empty interior and \(\dot{K}_{i}\cap\dot{K}_{j}=\emptyset\;\forall\;i\neq j\). Figure 2: Adaptive quadrature integration on a cell cut by the physical domain and handling of hanging nodes as a result of adaptive mesh refinement Figure 1: Illustration of the physical domain \(\Omega\) along with Dirichlet (\(\Gamma_{D}\)) and Neumann (\(\Gamma_{N}\)) boundaries in a structured embedding domain \(\Omega_{e}\) \(\overline{\Omega}_{e,h}:=\cup_{i=1}^{n_{K}}K_{i}\) then defines an approximation of \(\Omega_{e}\). Introducing a finite-dimensional function space \(V_{h}\subset H^{1}(\Omega_{e})\), the following weak formulation is obtained: Find \(u_{h}\in V_{h}\) such that for all \(v_{h}\in V_{h}\) \[a_{h}(u_{h},v_{h})=b_{h}(v_{h}), \tag{7}\] with \[\begin{split} a_{h}(u_{h},v_{h}):=&\int_{\Omega_{ e}}\alpha\nabla v_{h}\cdot\nabla u_{h}\;d\mathbf{x}-\int_{\Gamma_{D}}v_{h}(\nabla u_{h} \cdot\mathbf{n})\;d\mathbf{s}-\int_{\Gamma_{D}}u_{h}(\nabla v_{h}\cdot\mathbf{n})\;d\mathbf{s} +\int_{\Gamma_{D}}\lambda v_{h}u_{h}\;d\mathbf{s},\\ b_{h}(v_{h}):=&\int_{\Omega_{e}}\alpha v_{h}f\;d \mathbf{x}+\int_{\Gamma_{N}}v_{h}t\;d\mathbf{s}-\int_{\Gamma_{D}}g(\nabla v_{h}\cdot \mathbf{n})\;d\mathbf{s}+\int_{\Gamma_{D}}\lambda v_{h}g\;d\mathbf{s}.\end{split} \tag{8}\] The embedding domain is typically regular and can be generated efficiently using a variety of data structures such as space trees [34]. In this work, we employ adaptive quadrature in conjunction with adaptive mesh refinement (AMR) using space trees for volume integration as illustrated in Figure 2. Adaptive quadrature integration is a natural extension of adaptive mesh refinement on space tree data structures. We note that the adaptive integration space (blue lines in Figure 2) is not a part of the global system of equations. As shown in Figure 2, adaptive mesh refinement leads to non-conforming discretizations, i.e., hanging nodes, that must be properly handled in order to obtain a consistent solution. We constrain hanging nodes to their corresponding non-hanging counterparts and remove them from the global system. We use conventional Lagrange shape functions throughout this work. The symmetric Nitsche's method is robust and accurate, provided that the stabilization parameter is appropriately chosen; however, an insufficiently-large parameter leads to loss of stability of the discretization, and extremely-large parameters are equivalent to the penalty method, which is stable but can cause severe ill-conditioning in the matrix. The choice of the stabilization parameter is, therefore, crucial to the discretization. A number of methods for the estimation of the stabilization parameter involve the solution of general eigenvalue problems [15, 17, 35]. Applying Young's inequality with \(\varepsilon\), the bilinear form in Equation (8) can be estimated by \[\begin{split} a_{h}(v_{h},v_{h})\geq&\int_{\Omega_{ e}}\alpha\nabla v_{h}\cdot\nabla v_{h}\;d\mathbf{x}-\frac{1}{\varepsilon}\int_{ \Gamma_{D}}v_{h}v_{h}\;d\mathbf{s}\\ &-\varepsilon\int_{\Gamma_{D}}(\nabla v_{h}\cdot\mathbf{n})(\nabla v _{h}\cdot\mathbf{n})\;d\mathbf{s}+\int_{\Gamma_{D}}\lambda v_{h}v_{h}\;d\mathbf{s}.\end{split} \tag{9}\] Assuming a constant \(C\) such that \[C\int_{\Omega_{e}}\alpha\nabla v_{h}\cdot\nabla v_{h}\;d\mathbf{x}\geq\int_{\Gamma _{D}}(\nabla v_{h}\cdot\mathbf{n})(\nabla v_{h}\cdot\mathbf{n})\;d\mathbf{s}, \tag{10}\] and assuming that the stabilization parameter \(\lambda\) is constant over the integration domain, the following relation is obtained \[a_{h}(v_{h},v_{h})\geq(1-\varepsilon C)\int_{\Omega_{e}}\alpha\nabla v_{h}\cdot \nabla v_{h}\;d\mathbf{x}+(\lambda-\frac{1}{\varepsilon})\int_{\Gamma_{D}}v_{h}v_{ h}\;d\mathbf{s}. \tag{11}\] For the coercivity of the bilinear form, it is required that both \((1-\varepsilon C)\) and \((\lambda-\frac{1}{\varepsilon})\) be positive. It can then easily be shown that necessarily \(\lambda>C\). Under this stability-ensuring constraint, the stabilization parameter \(\lambda\) should be chosen as small as possible in order to avoid numerical ill-conditioning. Therefore, in order to find an appropriate value for \(\lambda\), a good estimate for the lower bound \(C\) has to be obtained. This can be achieved from Equation (10), which allows to compute \(C\) via a generalized eigenvalue problem as follows: \[\mathbf{Kv}=\Lambda\mathbf{Mv}, \tag{12}\] where \(\mathbf{v}\) and \(\Lambda\) are the eigenvectors and eigenvalues, respectively, and using a basis \(\{\phi_{i}\}\subset V_{h}\), the matrices \(\mathbf{K}\) and \(\mathbf{M}\) are formed by \[\begin{cases}\mathbf{K}_{ij}:=\int_{\Gamma_{D}}(\nabla\phi_{i}\cdot\mathbf{n})(\nabla\phi _{j}\cdot\mathbf{n})\;d\mathbf{s},\\ \mathbf{M}_{ij}:=\int_{\Omega_{e}}\alpha\nabla\phi_{i}\cdot\nabla\phi_{j}\;d\mathbf{x},\end{cases} \tag{13}\] for the model problem. The lower bound \(C\) for the stabilization parameter in Nitsche's method can now be chosen as the largest eigenvalue \(\max_{k}\Lambda_{k}\). Please note that \(\mathbf{M}\) is integrated over the entire domain in Equation (13). However, the corresponding integral can conservatively be restricted to the part of the domain intersected by the Dirichlet boundary in Equation (10) and consequently in Equation (13). Given the above reasoning, two approaches for the stabilization parameter estimation can be formulated: The global approach estimates \(C\) through the solution of one generalized eigenvalue problem given by Equation (12), where the integration of \(\mathbf{M}\) is restricted to only those elements \(K\in\mathcal{T}_{h}\) which are intersected by \(\Gamma_{D}\). This provides a single stabilization parameter \(\lambda_{g}\) obtained for the entire domain. Alternatively, the local approach is to restrict the estimate in Equation (10) and analogously the generalized eigenvalue problem in Equation (12) to the domain of each element \(K\in\mathcal{T}_{h}\) cut by the Dirichlet boundary. Thereby, element-wise constant stabilization parameters \(\lambda_{l}^{K},K\in\mathcal{T}_{h}\) with \(K\cap\Gamma_{D}\neq\emptyset\), are computed through the solution of a series of local generalized eigenvalue problems. For shorter notations, we denote the set of element-wise parameters \(\{\lambda_{l}^{K}\}\) by \(\lambda_{l}\). In this work, we employ both methods and provide a comparison between the two approaches with respect to their impact on the iterative solver performance in Section 3. From a computational point of view, the main difference is to either solve one large eigenvalue problem or several smaller eigenvalue problems. The solution of the generalized eigenvalue problem (12), however, is not trivial due to the rank deficiency of the matrices. A possible algorithm for solving the problem is singular value decomposition which is employed in this work. Due to the increasing size of the eigenvalue problem for the global approach, it might become prohibitively expensive on very fine meshes, whereas the local approach remains applicable. ### Geometric multigrid The discretized form of the model problem as described in Section 2.2 leads to the following system of equations: \[\mathbf{A}\mathbf{x}=\mathbf{b}, \tag{14}\] where \(\mathbf{A}\) and \(\mathbf{b}\) are defined according to the bilinear and linear forms in Equation (8), respectively, and \(\mathbf{x}\) is the solution vector. It is well known that the existence of small cut fractions, where the intersection between the physical domain and elements of the computational embedding domain is small, can lead to numerical ill-conditioning in immersed finite element methods, and the condition number of the system matrix can be arbitrarily large; therefore, special treatment is necessary for the successful application of iterative methods. In this work, we use a Schwarz-type smoother for the treatment of cut cells [31, 32]. The fundamental idea of the multigrid method, namely smoothing the oscillatory frequencies of the error on the fine grid and approximating the smooth frequencies on the coarse grid remains intact. #### 2.3.1 Components In its most fundamental form, geometric multigrid requires the following components: a hierarchy of grids \(\tau_{0},\ldots\tau_{n}\), where \(\tau_{n}\) is the original fine mesh and \(\tau_{0}\) is the coarsest mesh, transfer operators \(\mathbf{R}_{l}\) and \(\mathbf{P}_{l}=\mathbf{R}_{l+1}^{T}\) that restrict a vector from level \(l\) to \(l-1\) and prolongate it from \(l\) to \(l+1\), respectively, a smoother operator \(\mathbf{S}\) that is employed in a number of pre- and post-smoothing steps for each level except the coarsest level \(\tau_{0}\) and a base solver that is employed on the coarsest level \(\tau_{0}\). Grid HierarchyWe highlight a few important aspects of the adaptive mesh refinement employed in this work. An exemplary grid refinement is shown in Figure 3 with a four-level grid hierarchy. A 2:1 balance is enforced on all grids, which means that no neighbor quadrants will have refinement levels that are more than two levels apart. Starting from a fine grid \(\tau_{l}\), in order to arrive at the immediate coarse grid \(\tau_{l-1}\), a loop through the data structure is performed and all quadrants with the maximum refinement level are coarsened. Applying this algorithm recursively leads to a desired number of nested spaces. Transfer operatorsGiven a vector \(\mathbf{v}_{l}\) on level \(l\), the restricted vector \(\mathbf{v}_{l-1}\) on the coarse level \(l-1\) can be obtained as \[\mathbf{v}_{l-1}=\mathbf{R}_{l}\mathbf{v}_{l}, \tag{15}\] where \(\mathbf{R}_{l}\) is the restriction operator. The restriction of the bilinear form on the other hand can be achieved in two commonly employed manners. One approach uses the restriction and prolongation operators as \[\mathbf{A}_{l-1}=\mathbf{R}_{l}\mathbf{A}_{l}\mathbf{R}_{l}^{T}, \tag{16}\] which we refer to as RAP method. Another approach obtains the coarse matrices through the computation of the bilinear form using the function space of the coarse grid including a mesh-based estimation of the stabilization parameter. We refer to this approach as assembly method. The existence of the mesh-dependent stabilization parameter in the formulation in the previous sections sharply sets these approaches apart in the context of multilevel methods. This effect is investigated in Section 3. SmoothersWe use a multiplicative Schwarz-type smoother [31, 32] for the treatment of cut cells on the finest mesh. The smoother consists in the solution of a series of local subdomain problems: we create a cell-based subdomain for every cell cut by the physical boundary that includes all degrees of freedom associated with the cell. A single-DoF subdomain is created for all other degrees of freedom which do not appear in a cell-based subdomain [32]. Given these \(n_{sd}\) subdomains, the smoother operator can be written as \[\mathbf{S}=(\mathbf{R}_{s,n_{sd}}^{T}\mathbf{A}_{s,n_{sd}}^{-1}\mathbf{R}_{s,n_{sd}})\dots( \mathbf{R}_{s,1}^{T}\mathbf{A}_{s,1}^{-1}\mathbf{R}_{s,1})\quad\text{ with }\quad\mathbf{A}_{s,i}=\mathbf{R}_{s,i}\mathbf{A}\mathbf{R}_{s,i}^{T}, \tag{17}\] where \(\mathbf{R}_{s,i}\) is the restriction operator of the \(i\)-th subdomain and \(\mathbf{A}_{s,i}\) is the local block of the \(i\)-th subdomain. For the assembly approach on coarser meshes, the same smoother strategy could be applied. For the RAP approach, however, the formation of the coarse grid matrices is purely algebraic. Therefore, the employed smoothers usually are chosen to be algebraic methods as well. As the main target in our studies is a comparision of the approaches, we therefore employ a damped Jacobi smoother on coarse meshes in both cases. Please note that a Schwarz smoother on assembled coarse matrices likely provides even better convergence rates than the ones presented for comparison reasons in Section 3. Base solverWe use a direct LU solver on the base level. Given the high computational complexity of direct solvers [27], we note that it is important that the size of the base problem is kept relatively small, as discussed in Section 3. \begin{table} \begin{tabular}{c r r r} \hline Mesh & \(n_{\texttt{DoF}}\) & \(\lambda_{g}\) & \(\lambda_{l}^{\text{mean}}\) \\ \hline \(\tau_{0}\) & 1089 & - & - \\ \(\tau_{1}\) & 1209 & 1279.3 & 312.23 \\ \(\tau_{2}\) & 1657 & 13343.2 & 912.02 \\ \(\tau_{3}\) & 2637 & 22770.2 & 2457.70 \\ \(\tau_{4}\) & 4661 & 310834.0 & 4188.30 \\ \(\tau_{5}\) & 8665 & 136533.0 & 17873.99 \\ \(\tau_{6}\) & 16953 & - & 30835.11 \\ \(\tau_{7}\) & 33269 & - & 62312.00 \\ \(\tau_{8}\) & 65689 & - & 161481.18 \\ \hline \end{tabular} \end{table} Table 1: Mesh hierarchy: the coarse mesh \(\tau_{0}\) is constructed by five levels of uniform refinement on the unit square. For \(k\geq 1\), the meshes \(\tau_{k}\) are constructed by adaptively refining its coarser parent \(\tau_{k-1}\) one level towards the boundary of the physical domain. \(n_{\texttt{DoF}}\) is the number of degrees of freedom in the mesh. \(\lambda_{g}\) and \(\lambda_{l}^{\text{mean}}\) are the global and mean local eigenvalues, respectively Figure 4: A circular physical domain is embedded in a unit square. A constant-valued inhomogeneous Dirichlet boundary is imposed on an arch to the left of the physical domain and the rest is left as homogeneous Neumann boundary Figure 3: The hierarchy of grids constructed from a randomly-refined fine grid, \(\tau_{3}\). Cells are colored based on their refinement level Figure 5: The discretization of the domain for \(\tau_{5}\), i.e., five levels of uniform refinement on the unit square followed by five levels of adaptive refinement towards the boundary of the physical domain Figure 6: Two-grid solver on mesh \(\tau_{5}\). (a) The distribution of the local stabilization parameter computed from the generalized eigenvalue problem and the global stabilization parameter, (b) the average reduction rate for variations in the global and local stabilization parameters and (c) the convergence of the iterative solver in terms of relative residual for selected cases. \(\lambda_{g}\) and \(\lambda_{l}\) denote the computed stabilization parameters using the global and local approaches, respectively. In (b), \(\lambda\) denotes the element-wise constant stabilization parameter in the case of the local approach and the global stabilization parameter in the case of the global approach. Note that the solver does not converge for values smaller than the computed value in the local approach. Numerical studies For the numerical experiments, we consider a circular physical domain that is embedded inside a square as shown in Figure 4, where a Poisson problem with Dirichlet and Neumann boundary conditions is solved. An inhomogeneous Dirichlet boundary condition with a constant value of \(1.0\) is imposed on an arch section to the left of the domain and the rest is left as homogeneous Neumann boundary as demonstrated in Figure 4. Starting from the unit square, the refinement criteria includes five levels of uniform refinement followed by a number of adaptive refinement levels towards the boundary of the physical domain, leading to a hierarchy of nested grids that are used throughout this section. The grid hierarchy is detailed in Table 1. The discretization of the domain by mesh \(\tau_{5}\) is shown in Figure 5. In addition, eight levels of adaptive quadrature are used for volume integration. The multigrid solver detailed in Section 2.3 is used as a solver for the discretized problem. A V-cycle with three pre- and post-smoothing steps is employed. Relative residual with a threshold of \(10^{-9}\) is taken as the convergence criterion. The penalization parameter \(\alpha\) is \(10^{-10}\). The stabilization parameter in Nitsche's formulation is computed through either the local, element-wise or the global approach detailed in Section 2.2. The computed global and local parameters from the generalized eigenvalue problems are designated with \(\lambda_{g}\) and \(\lambda_{l}\), respectively. An in-house code is used to implement the finite cell approach, the multigrid algorithm and to run the studies. p4est [34] and PETSc [36] are used for octree and linear algebra functionalities. ### Stabilization parameter influence In order to investigate the sensitivity of the iterative method to the stabilization parameter in each scheme, a number of variations from the computed parameter are examined: the global or the local stabilization parameter is multiplied by different factors to analyze the behavior for values smaller and larger than the ones suggested by the estimates obtained through the solution of the eigenvalue problems described in Section 2.2. The two-grid method on mesh \(\tau_{5}\) is used in order to better isolate the effect of the stabilization parameter. The convergence of the solver as well as the distribution of the stabilization parameter are shown in Figure 6. Please note that the global stabilization parameter is not necessarily larger than the largest local parameter, as shown in Figure 6(a), i.e., the computed global stabilization parameter may be smaller than the local parameter for some of the cut cells. For the global stabilization parameter scheme, the solution achieves rapid convergence in the vicinity of the computed value as can be seen in Figure 6(b). However, lower reduction rates can be obtained at smaller stabilization parameters than the one computed from the generalized eigenvalue problem. This trend continues up to the point that the stability of the solution is lost. At this point, we observe that the solution could either asymptotically reach a plateau or diverge. An example of such behavior is shown in Figure 6(c) for a stability parameter around three times smaller than the computed global value, where the solution reaches a plateau at approximately \(10^{-7}\) and slowly diverges. Further reduction of the global stabilization parameter leads to divergence of the solver. It can be inferred that the most desirable behavior in the solution for this scheme is achieved for the smallest value of the global parameter that is large enough to ensure stability. An empirical choice of \(2\lambda_{g}\) suggested by a number of studies [17, 18] does not seem to have an obvious advantage in terms of the convergence rate of the solver for the global estimate. For the local stabilization parameter scheme, the solution behavior is shown for variations of the stabilization parameter in Figure 6(b). It can be observed that the smallest reduction rates occur in the proximity of the computed local parameters. Although the solution quickly becomes unstable below the computed values, it remains consistent over a larger span compared to the global parameter. It can be inferred that the computed local stability parameter is close to the minimum value that is required for the coercivity of the problem. An empirical value of \(2\lambda_{l}\) seems to be a reasonable choice in this case. Although both methods achieve virtually identical convergence rates at the computed values, the average reduction rate achieved by the local scheme is considerably smaller than its global counterpart at higher relative parameters, and the iterative solver with the local scheme exhibits a better convergence behavior for the majority of the interval. Furthermore, the iterative method is more robust with regards to variations in the stabilization parameter in this scheme and the minimum reduction rate remains relatively constant over a rather large span of variation. The distribution of the local stabilization parameter in Figure 6(a) can shed some light on this observation. It can easily be seen that there is an enormous gap between the distribution of the stabilization parameter in the local scheme and the computed global value. Intuitively, the global value can be interpreted as an estimate for the minimum value that satisfies the coercivity condition on all elements, whereas the local values only need to satisfy the condition in a local domain. This freedom allows for a much more accurate distribution of the stability parameter that leads to better performance of the solver. While other configurations of the solver, e.g., more pre/post smoothing steps certainly change the absolute value of measures such as the average reduction rate and similarly the number of required iterations, we did not observe fundamentally different behavior in the convergence of the solver for such variations. ### Multigrid convergence The implications of the mesh dependence of the stabilization parameter on a multigrid solver are analyzed through a mesh study in this section. The mesh hierarchy in Table 1 is employed. The mesh \(\tau_{0}\) is used as the base problem for all tests, and is solved down to machine accuracy using a direct solver. Therefore, larger problems employ a deeper grid hierarchy. Local and global estimation of the stabilization parameter along with two approaches to the computation of the coarse grid matrices, namely RAP and assembly, lead to four possible cases. The configuration of the geometric multigrid solver is identical to the previous example on each level. The Schwarz smoother is applied to the finest grid and a damped Jacobi smoother is applied on all coarser levels. \(2\lambda_{g}\) and \(2\lambda_{l}\) are used in the global and local schemes, respectively. On the three finest problems, only the local scheme is applied as the computational cost of the global scheme becomes exceedingly inhibiting. The average reduction rate of the multigrid solver for each case is given in Figure 7(a). It can be seen that for both methods of obtaining the system matrix, namely RAP and assembly, the local estimation method outperforms its global counterpart. The difference increases with levels of hierarchy. In each case of local and global estimation, the assembly methods achieves a smaller reduction rate than its RAP counterpart. A significant gap in the convergence rate can be observed between the combinations at higher levels of the hierarchy. When the coarse grid matrices are assembled, the convergence rate achieved by the global estimation scheme remains relatively close to the local estimation scheme up to the first few levels, after which the convergence rates start to deviate. The underlying mechanism of this behavior is twofold. In this case, the global stabilization parameter is estimated on each level of the hierarchy and is therefore suitable for that discretization. This can be inferred as the main reason why the global scheme performs relatively well up to a certain level in this configuration. The deterioration of the convergence rate can be imputed to slightly inferior convergence rates at each level that accumulate in the multilevel configuration. This is only exacerbated by the enlargement of the hierarchy depth. An analogous behavior is observed in the case where coarse grid matrices are obtained through restriction. A more accurate distribution of the stabilization parameter through the local scheme on the fine grid leads to lower reduction rates, especially at higher levels of hierarchy. In either estimation method of the stabilization parameter, smaller and more scalable convergence rates are achieved when using the assembly method for coarse matrices. This can be attributed to the fact that a more accurate estimation of the stabilization parameter Figure 8: A possible scenario where the smaller cut fraction on the coarser grid could potentially lead to a larger stabilization parameter Figure 7: Study for multigrid hierarchy: (a) Average reduction rate for the geometric multigrid solver for global and local estimation of the stabilization parameter and two approaches of obtaining the coarse grid matrices, (b) the distribution of the local stabilization parameter on a coarse (\(\tau_{1}\)) and a fine (\(\tau_{8}\)) mesh at each level can achieve a better convergence rate. In the RAP method, this estimation is performed only on the finest grid, and might not be reasonable for coarser grids. This can be more easily perceived by looking at the distribution of the stabilization parameters on different depths of the hierarchy. The distribution of the stabilization parameter is shown in Figure 7(b) for two sample grids. It can clearly be seen that the stabilization parameter has a highly grid-dependent distribution. If the distribution from one grid were to be used on the other, it would either lead to extremely slow convergence or divergence according to the results in the previous section. Since the majority of cells are larger on coarser grids, the stabilization estimates from finer grids are normally sufficiently large for the coarse grid. The convergence rates and relative robustness of the RAP method may be justified by this mechanism. However, we would like to bring a possible scenario into attention. The stabilization estimate depends not only on the grid size, but also on the cut configuration. Specifically, unfortunate cut elements, where the physical boundary intersects a small portion of the cell lead to large stabilization parameters. It is thus possible that the required stabilization parameter on a coarser grid be larger than on the fine grid. This is manifested for example between \(\tau_{4}\) and \(\tau_{5}\) in Table 1, where the coarser grid, \(\tau_{4}\), requires a larger stabilization parameter than the finer grid, \(\tau_{5}\), in the global estimate although on average, the required stabilization parameter is smaller for \(\tau_{4}\) in the local estimate. Furthermore, Figure 8 illustrates a possible scenario in which the cut configuration is deteriorated on the coarse grid although the grid size is larger. In conclusion, it is observed that a dedicated estimate on every mesh level through the assembly method can lead to significant improvement in the convergence rate of the solver. ## 4 Conclusions We investigate the mesh-dependent stabilization parameter in Nitsche's method in the context of the geometric multigrid solution to immersed finite element formulations. We find that the stabilization parameter not only carries importance for ensuring the coercivity of the bilinear operator and thus the stability of the solution, but also significantly affects the performance of the iterative solver. The stabilization parameter can in general be incorporated in the integration process either as a domain-wide or an element-wise constant. A good estimate for each approach can be formulated through generalized eigenvalue problems, leading to the global and local estimates, respectively. The global estimate obtains a constant parameter from a single generalized eigenvalue problem, whereas a series of smaller generalized eigenvalue problems lead to a distribution of the stabilization parameter in the local estimate. For a multigrid setting, the stabilization parameters can be computed separately for each mesh level if the coarse grid matrices are assembled. Using the RAP approach for coarse grid matrices, the stabilization parameter estimate is only computed on the finest mesh and then implicitly contained in coarser matrices via restriction. For the stabilization parameter methods, we find that deviation from the values obtained from the generalized eigenvalue problems significantly affects the convergence rate of the solver. Specifically, the best behavior in the global estimate is achieved for the smallest global parameter that ensures stability of the solution, which can be several times smaller than the value computed from the generalized eigenvalue problem. For the local estimation, the solver tends to achieve its smallest reduction rates for a value of \(2\lambda_{l}\)-\(4\lambda_{l}\). The solution quickly becomes unstable below the calculated distribution in this approach. However, it is more robust for larger relative values compared to the global estimate. The reduction rate of the solver deteriorates for overestimated parameters in both approaches. We find that the solver tends to achieve better convergence rates with the local estimate method, especially for deeper hierarchies. The discretization dependence of the stabilization parameter influences the performance of the multilevel solver. Forming coarse grid matrices through the evaluation of the bilinear form on the coarse function space, an individual stabilization parameter estimate on each level of the hierarchy is obtained which we find to provide better convergence rates. For the RAP approach, the finest level estimate of the stabilization parameter is implicitly used also on coarser meshes which deteriorates the convergence rates, in particular for deeper mesh hierarchies. We achieve the fastest and most scalable convergence rates for a local stabilization parameter estimate in conjunction with assembled matrices on each level of the hierarchy. The local estimate seems to be a favorable approach to obtaining the stabilization parameter, considering the increasing costs for generalized eigensolvers for the global estimate and the better performance achieved by the local estimate. Computing a separate stabilization parameter on every level of the multilevel hierarchy via the evaluation of the bilinear form on coarse spaces seems favorable as it produces level-adjusted stabilization parameters in the coarse matrices which result in better convergence rates for the multigrid solver in our studies. ## Acknowledgments Financial support was provided by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) in the framework of subproject C4 of the collaborative research center SFB 837 Interaction Modeling in Mechanized Tunneling, grant number 77309832. This support is gratefully acknowledged. We also gratefully acknowledge the computing time on the computing cluster of the SFB 837 and the Department of Civil and Environmental Engineering at Ruhr University Bochum, which has been employed for the presented studies.
2301.10092
Model soups to increase inference without increasing compute time
In this paper, we compare Model Soups performances on three different models (ResNet, ViT and EfficientNet) using three Soup Recipes (Greedy Soup Sorted, Greedy Soup Random and Uniform soup) from arXiv:2203.05482, and reproduce the results of the authors. We then introduce a new Soup Recipe called Pruned Soup. Results from the soups were better than the best individual model for the pre-trained vision transformer, but were much worst for the ResNet and the EfficientNet. Our pruned soup performed better than the uniform and greedy soups presented in the original paper. We also discuss the limitations of weight-averaging that were found during the experiments. The code for our model soup library and the experiments with different models can be found here: https://github.com/milo-sobral/ModelSoup
Charles Dansereau, Milo Sobral, Maninder Bhogal, Mehdi Zalai
2023-01-24T15:59:07Z
http://arxiv.org/abs/2301.10092v1
# Model soups to increase inference without increasing compute time ###### Abstract In this paper, we compare Model Soups performances on three different models (ResNet, ViT and EfficientNet) using three Soup Recipes (Greedy Soup Sorted, Greedy Soup Random and Uniform soup) from [1], and reproduce the results of the authors. We then introduce a new Soup Recipe called Pruned Soup. Results from the soups were better than the best individual model for the pre-trained vision transformer, but were much worst for the ResNet and the EfficientNet. Our pruned soup performed better than the uniform and greedy soups presented in the original paper. We also discuss the limitations of weight-averaging that were found during the experiments. The code for our model soup library and the experiments with different models can be found here. ## 1 Introduction Traditionally, training a deep learning model is done in two steps: first, the model is trained several times by varying the hyper parameters, and then the model with the best performance is chosen before being used. The publication "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time" [1] proposes to average the weights of the different models to maximize the use of all the training without wasting learning time. This adds very little memory cost, learning time or inference time. The paper shows that a performance increase is visible on some models and even defines a new state of the art on image classification. The authors present two ways to combine two models to create a soup: greedy soup and uniform soup. In this project, we will first test the Model Souping on a Resnet [3], ViT [4], and EfficientNet [5] to reproduce similar performance improvements. Then, we test a new way to create a soup similar to the pruning of decision trees. ## 2 Summary ### Prior Work Before the paper on model soups, the idea of combining the outputs of multiple models was already explored. In 1996, Leo Breiman published Bagging predictors, where he presented a method that generates multiple versions of a predictor and uses them to get an aggregated predictor. Then, in 1999, Eric Bauer and Ron Kohavi published An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants, where they reviewed many voting classification algorithms, like Bagging and AdaBoost, and showed interesting results. In 2000, Thomas G. Dietterich published Ensemble Methods in Machine Learning, where he explains why ensembles can outperform single classifiers. All of these ensembling techniques increase the accuracy and robustness of models, but have a shortcoming in the computation cost especially when the number of models is large. ### Original paper In the original paper, the authors propose that, instead of selecting the individual fine-tuned model with the highest accuracy on a validation set, averaging the weights of independently fine-tuned models in what they called a _model soup_ can outperform the former while requiring no more training and adding no cost of inference. Prior to their paper, weight averaging along a single training trajectory showed a performance improvement in models trained from random initialization [2]. Their approach comes as an extension of the weight averaging concept applied to the context of fine-tuning. The authors explored two main ways to make soups. The first method is the Uniform soup that simply averages all the models parameters as \(f(x,\frac{1}{k}\sum_{i=1}^{k}\theta_{i})\) where \(\theta_{i}\) represents a model found through fine-tuning. The other method is called Greedy soup and is made by first sorting the models by decreasing order of validation accuracy and then by adding each model as an ingredient of the soup if it provides a better performance than the best individual model on the held-out validation set. To use the method of the greedy soup, they consider a neural network denoted by \(f(k,\theta)\), where \(\theta\) represents the parameters that are obtained by fine-tuning models with pre-trained initialization of \(\theta_{0}\) and hyperparameter configuration of h as inputs. The configuration of the hyperparameter include the optimizer, data augmentation and the learning rate. The algorithm for the greedy-soup method is illustrated in Algorithm 1 To evaluate the soup models, the authors fine-tune CLIP, ALIGN and ViT-G models that were all pre- trained. The soups were evaluated on a held out validation set to avoid overfitting the training data. The results obtained show that the soups outperform the best individual model with the most impressive result being the ViT-G model soup achieving 90.94% on ImageNet, a new state of the art. It can be seen in figure 1 that a Greedy soup definitively results in an improved accuracy. They also show that the model soups are applicable for the fine-tuning of transformer models for text classification. ### Our approach As per the plan submitted for the project, our approach was to first try to replicate the results of the paper on a simpler model. For this, we chose to implement a ResNet model and train it on the CIFAR-100 dataset, which is publicly available. The ResNet model was trained varying the learning rate and weight decay, similarly to the paper, to obtain 36 models with different hyperparameters combinations. We then implemented the Greedy soup and the Uniform soup proposed in the article to evaluate them. To approach the evaluation of the soups the same way the authors of the paper did, a validation set is created by splitting the original test set of the CIFAR-100 dataset in 2: 5000 examples for the validation set, and 5000 for the new test set. The results are presented and discussed in section 3. Following these experiments, we tried to replicate the results obtained by the authors on the same models (ViT-G). Thus, we found pretrained ViT-G models and fine-tuned them with the same hyperparmeters on the CIFAR-100 dataset. We applied the soup-making algorithms and compared our results with the results of the authors in section 3. Furthermore, we proposed a new algorithm to average the weighs of the models in a way similar to pruning in decision trees, which we called "Pruned Soups". The pseudo-code for the algorithm can be found in Algorithm 2. The third model's approach consisted to implement an EfficientNet [6] model as it is said to achieve a better accuracy and efficiency than previous ConvNets such as Resnet [5]. The model was trained using the same hyperparmeters on the CIFAR-100 dataset. The implementation was taken from a code from Github submitted for the paper "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks". You can find the source code here. This model was used as it is focused on balancing the dimensions of the network width, depth and resolution by scaling each aspect with constant ratio. ### Implementation Details The model soup code has two main components: * The _generate_soup.py_ script which contains the main _make_soup_() function which uses pytorch trained models and an evaluator to generate different kinds of soups. * An evaluator interface, which must be implemented by the Evaluator object used to generate the soups. An example of an Evaluator Class can be found in _model_evaluators/cifar_eval_. The _Evaluator_ provides the metric upon which the classification of the models and the selection of ingredients for the soup will be made. The code is designed to always maximize that metric so we must be careful when using metrics like loss that are designed to be minimized. The _Evaluator_ is Figure 1: Results of the accuracies obtained by fine-tuning a CLIP ViT-L model on CIFAR-10 in the paper. Figure 2: CIFAR-100 dataset on which all our models were trained. responsible for loading the desired dataset and iterating over it. The soup code is designed to use two separate sets: one for initial ordering of the models and for selecting ingredients and one for computing final performance. This distinction is critical to avoid overfitting our soups to one single dataset. ## 3 Methods ### Experimental setup This section represents our experimental setup and the results on CIFAR-100 using three different approaches. The models are trained using similar settings : a stochastic gradient descent optimizer with a 0.9 value for the momentum, and with a weight decay [1e-5, 2e-5, 5e-5, 1e-4, 2e-4, 4e-4] and learning rate [0.01, 0.02, 0.05, 0.1, 0.2, 0.4] varying for each model. Each model is trained for 10 to 15 epochs with a total batch size of 256. Models were trained at first over 36 different combinations of hyperparameters, from which the accuracy of the best model was noted. To create the soups for each model, a Colab notebook was used to select the type of model to evaluate, the number of runs, the number of ingredients, the type of strategy and the soup making method. To evaluate the performances of the soups, the test set was separated to create a separated validation set. The user-friendly notebook allows to import the functions easily from Github and test any model, in our case: ResNet, ViT or EfficientNet. Results were obtained for sorted and random Greedy soups, Uniform soups, and sorted and random Pruned soups. ### Results As we can see on Table 1, the Greedy soups achieved a better performance than the best fine-tuned model, with an accuracy of 51% while picking models randomly, and 51.75% when the models were sorted in decreasing accuracy. This is about 1% above the performance of the best model (50.3%) which is comparable to the results obtained in the paper [1]. We can also see that the proposed method (Pruned soups) achieves even higher performance, with an accuracy of 52% while picking models randomly, and 52.1% when the models were sorted in decreasing accuracy. However, it is important to note that the standard deviation of these results are in the order of 0.3%, so more experiments would be needed to conclude if the pruned soups perform better in average than the greedy soups. Another observation that can be made is that the pruned soups used fewer ingredients in average (around 3) than the greedy soups (around 5). Finally, contrary to the paper, the uniform soup has a much lower accuracy than the best model. This can be explained by the fact that some models included in the uniform soup had poor performance, as shown by the surface plot of the grid search in Figure 3. When looking at the ResNets and EfficientNets models, we can instead see that the performance of the soups dramatically decreases in comparison to the best model, as shown in Table 2. In fact, their accuracy is close to 1%, which means that they do not do better than a model that would completely randomly guess the class of the image, since the CIFAR-100 dataset has 100 possible classes. Further, we can observe that the only soups that still can generalize on the test set are the greedy soups containing only 1 ingredient, which means it is the same as picking the best model or a random model, according to the strategy of the soup. Therefore, it can be seen that the architectures of these models are not well adjusted for the creation of soups. \begin{table} \begin{tabular}{l r r} \hline Method & Acc. (\%) & Ingredients (avg) \\ \hline Best individual model & 50.3 & - \\ Uniform soup & 32.22 & 22 \\ Greedy soup (random) & 51.06 & 5.1 \\ Greedy soup (sorted) & 51.76 & 5 \\ Pruned soup (random) & 52.04 & 3.2 \\ Pruned soup (Sorted) & 52.1 & 3 \\ \hline \end{tabular} \end{table} Table 1: Accuracies of the different methods across our experiments on the CIFAR-100 dataset for the ViT models. Figure 3: Comparison of the performance obtained by the different soups made and the individual models from the grid search for the ViT models. ### Ablation study We illustrate the impact of the number of ingredients on the performances of the greedy soup in figure 4. An increase in ingredients (up to 5) provides a better final model over the best individual model with an improvement of the accuracy by around 1%. The greedy soups studied in the paper also select 5 models for CLIP ViT-B/32 and ALIGN EfficientNet-L2 that provides better results by 0.7 and 0.5 percentage points. This applies to both strategies (considering the models to add randomly versus in decreasing order of accuracy). It can also be noted that when increasing the maximum number of ingredients beyond 5, the sorted greedy soup has the same performance, as adding any other model results in a decrease in accuracy. Similarly, the random greedy soup gets diminishing returns and oscillates around 51.5% accuracy. The oscillation can be explained by the variance in the order in which the models are considered to be added in the soup. To mitigate that, the results were averaged over 10 runs. With the help of figure 5, we can see that an increase in the number of passes increases the accuracy of both random and sorted pruned soups. However, the sorted pruned soup quickly stabilizes at a 52.1% accuracy with a number of passes greater or equal to 2. For the random pruned soup, we have that the accuracy increases at every increase of the number of passes finally reaching similar performance to the sorted pruned soup at 5 passes with an accuracy of 52.04%. This means the number of passes can be increased until the optimal combination of models for the soup is found, and removing any model from the soup hinders its ability to generalize. As we can see in figure 6, an increase in the number of passes has a decreasing impact on the average number of ingredients in pruned soups. Starting at an average of 13.5 ingredients for a single pass, that number decreases exponentially, with a smaller decrease at every increase in the number of passes, to reach a much lower average of 3.2 ingredients, which was the best configuration with the set of trained models that was used. Similar to what the paper mentioned, a sorted strategy gives a slightly better accuracy than a randomly ordered strategy. We can see in figures 3 and 4 that a better accuracy is achieved with sorted soup models. an hypothesis for a new criteria can be made: the models considered for the uniform soup must have an accuracy in the order of the best model, similar to a set of fine-tuned models with the same local minimums, with small variations. In that sense, if the weights of the models converge to different local minimums, averaging them will move the solution in an arbitrary place, which is not guaranteed at all to be in a local minimum. This was observed for the ResNets and EfficientNets models. In order to apply different approaches to the soups, it is important to further analyze and understand the limitations of model soups on ResNets and EfficientNets as the averaging weights method did not work. As the authors mentioned in the paper, the use of pre-trained models is essential to the use of soups for the fine-tuning of hyperparameters. We showed that a simple average of learned parameters of non-pre-trained models made the performance much worse, performing in a similar range to random guesses. In other words, the concept of soups works well for the fine-tuning of pre-trained models, but not for the trained-from-scratch models. In fact, the soups work well when the models have been tuned independently from the same initialization as they have the same error landscape. Averaging the model weights will then result in a coherent performance only when we have this similar error landscape between models. This leads to a limitation that was not discussed in the original paper: the models used to make soups not only need to be pre-trained, but the model in itself will need to converge to the same error landscape when fine-tuned, so the soup methods cannot be applied to all model architectures. For instance, the EfficientNets used for our experiments where in fact pre-trained, but the fine-tuning with different hyperparameters lead to different local minima for the weights, and averaging any model with another in this situation destroys any capacity of the soup to generalize. ### Future Work Although model soups show some promise on some problems and have been shown to provide state of the art results for some problems, a clearer understanding of their limitations is important. In this work, we show some limitations of model soups such as the need for all models to be in the same error landscape. A comprehensive study of all models where model soups might or might not perform would be critical to understand the different limitations of model soups. One relevant hypothesis is that batch normalization is causing the fine-tuned models to end up in different error landscapes as it introduces some randomness to the weights of the model. Verifying this hypothesis would require some more experimentation and would lead to a much deeper understanding of the limitations of model soups. Another hypothesis is that the different error landscapes might arise as a result of the optimizer used during fine-tuning of the models. To verify that hypothesis, more experiments could be done with different optimizers to see if this factor has an influence. ## 5 Conclusion This work shows that Model Soups are promising when it comes to making the most out of the training time of your models and is a step forward to getting to smarter hyperparameter tuning. However, we show that the shortcomings of model soups are still not well known and understood. In order to further develop the concept of model soups and integrate it in modern deep learning pipelines, these limitations will need to be understood and, if possible, fixed completely. Once they are well understood, model soups could very well become an integral part of modern deep learning pipelines as their benefits towards training efficiency and performance improvements cannot be undermined. Figure 6: Average number of ingredients in pruned soups as a function of the number of passes over the ingredients
2305.14703
Generative diffusion learning for parametric partial differential equations
We develop a class of data-driven generative models that approximate the solution operator for parameter-dependent partial differential equations (PDE). We propose a novel probabilistic formulation of the operator learning problem based on recently developed generative denoising diffusion probabilistic models (DDPM) in order to learn the input-to-output mapping between problem parameters and solutions of the PDE. To achieve this goal we modify DDPM to supervised learning in which the solution operator for the PDE is represented by a class of conditional distributions. The probabilistic formulation combined with DDPM allows for an automatic quantification of confidence intervals for the learned solutions. Furthermore, the framework is directly applicable for learning from a noisy data set. We compare computational performance of the developed method with the Fourier Network Operators (FNO). Our results show that our method achieves comparable accuracy and recovers the noise magnitude when applied to data sets with outputs corrupted by additive noise.
Ting Wang, Petr Plechac, Jaroslaw Knap
2023-05-24T04:15:34Z
http://arxiv.org/abs/2305.14703v1
# Generative diffusion learning for parametric partial differential equations ###### Abstract We develop a class of data-driven generative models that approximate the solution operator for parameter-dependent partial differential equations (PDE). We propose a novel probabilistic formulation of the operator learning problem based on recently developed generative denoising diffusion probabilistic models (DDPM) in order to learn the input-to-output mapping between problem parameters and solutions of the PDE. To achieve this goal we modify DDPM to supervised learning in which the solution operator for the PDE is represented by a class of conditional distributions. The probabilistic formulation combined with DDPM allows for an automatic quantification of confidence intervals for the learned solutions. Furthermore, the framework is directly applicable for learning from a noisy data set. We compare computational performance of the developed method with the Fourier Network Operators (FNO). Our results show that our method achieves comparable accuracy and recovers the noise magnitude when applied to data sets with outputs corrupted by additive noise. ## 1 Introduction In many scientific and engineering applications, it is essential to handle computational models involving random input parameters. Often this task is computationally challenging as it relies on repeated solves of the underlying mathematical model for distinct values of input parameters. It is therefore crucial to develop fast and reliable methods that solve the problems for a range of input parameters. Operator learning problem.In the context of parameter dependent partial differential equations (PDEs) the reliance of a solution on input parameters can be conveniently captured by the the following abstract PDE \[\mathcal{F}(u(x);a(x))=0,\qquad\text{in }D\subset\mathbb{R}^{d}, \tag{1}\] defined by the mapping \(\mathcal{F}:\mathcal{U}\times\mathcal{A}\rightarrow\mathbb{R}\). Here \(\mathcal{A}\) and \(\mathcal{U}\) are suitable spaces of functions over the domain \(D\) for which the problem is well-posed. The input \(a\) takes values in the input function space \(\mathcal{A}\) and \(u\) is the corresponding solution taking values in the output function space \(\mathcal{U}\). A typical example where the proposed approach will be applied is the case when \(a\) is a random field distributed according to a probability measure \(\mu\). We give specific examples of PDEs later when discussing various benchmarks. For now we tacitly assume that the choice of \(\mathcal{F}\), \(\mathcal{A}\), \(\mathcal{U}\), and \(\mathcal{V}\) defines a well-posed problem. In other words, we assume that there exists a unique solution operator, or equivalently input-to-output mapping \(\mathcal{S}:\mathcal{A}\to\mathcal{U}\) such that \(u=\mathcal{S}(a)\) is uniquely defined and stable in the respective topology of the space \(\mathcal{A}\) and \(\mathcal{U}\). The overarching goal of the operator learning problem is to find an operator \(\mathcal{S}_{\theta}:\mathcal{A}\to\mathcal{U}\), parameterized by the parameters \(\theta\), and approximating \(\mathcal{S}\) in a certain sense. Once \(\mathcal{S}_{\theta}\) is learned, solutions to (1) can be readily evaluated for a range of inputs thus avoiding repeated solves of the equation for different \(a\). We emphasize that the function spaces \(\mathcal{A}\) and \(\mathcal{U}\) are infinite dimensional by definition. However, in the numerical context we only have access to the pair of fields \((a,u)\) in some finite dimensional approximation spaces \(\mathcal{A}_{M}\) and \(\mathcal{U}_{M}\). For example, on a discretization of the domain \(D\) with an \(M^{d}\) points grid, i.e., \(D_{M}\subset D\). With a slight abuse the notation, in the remainder of the article we denote \(a=a|_{D_{M}}\in\mathcal{A}_{M}=\mathbb{R}^{M^{d}\times d_{a}}\) and \(u=u|_{D_{M}}\in\mathcal{U}_{M}=\mathbb{R}^{M^{d}\times d_{u}}\). For the ease of presentation, we assume that the function spaces \(\mathcal{A}\) and \(\mathcal{U}\) are real-valued and hence \(d_{a}=d_{u}=1\). Operator learning methods.Modern machine learning techniques provide a promising tool for tackling the operator learning problem. Current neural network based operator learning methods can be classified into two categories: purely data driven and physics informed. The data-driven approach assumes the access to a data set of input-output pairs \(\mathcal{D}=\{(a^{(n)},u^{(n)})\}_{n=1}^{N}\) and aims to learn the parametric approximation \(\mathcal{S}_{\theta}\), such that the empirical data loss function \[\mathcal{L}_{\text{DD}}(\theta)=\frac{1}{N}\sum_{n=1}^{N}\|u^{(n)}-\mathcal{S }_{\theta}(a^{(n)})\|^{2} \tag{2}\] is minimized over the space of parameters \(\theta\). Recently, various deep learning architectures have been designed in order to solve (2) efficiently. The state-of-the-art methods, e.g., the fully convolution network [33], PCA network [2; 7], multi-wavelet method [6], turbulent flow net [27], deep operator net (ONet) [16; 17], and the Fourier neural operator (FNO) [12; 13; 14], etc., all fall into this category. Among them, ONet approximates the solution operator based on the universal approximation theorem of operators [3] and FNO parameterizes the integral kernel directly in Fourier space [11]. Both methods have received tremendous attention thanks to their mesh independent feature. Other than the above mentioned deep learning based methods, we also refer the reader to a recent work on a kernel-based framework for operator learning [1]. Although the data-driven approach shows promising performance in solving many challenging parametric PDE problems, it still faces the main limitation of being data-intensive. In comparison, the physics-informed approach relies on the direct knowledge of the underlying PDE in order to learn \(\mathcal{S}_{\theta}\) and utilizes the physics-informed empirical loss in the minimal residual form \[\mathcal{L}_{\text{PI}}(\theta)=\frac{1}{N}\sum_{n=1}^{N}\|\mathcal{F}( \mathcal{S}_{\theta}(a^{(n)});a^{(n)})\|^{2}\,. \tag{3}\] Since the above loss only involves the input \(a\) and not the solution (output) \(u\), the physics-informed approach, in principle, does not require the knowledge of the data set \(\mathcal{D}\) and hence has the key advantage of being "data-free" by design [15; 18; 29; 31; 34]. Nevertheless, the optimization of a physics-informed loss remains challenging and consequently limits its applicability to relatively simple problems [26; 28; 30]. We emphasize that all the methods mentioned above learn an operator \(\mathcal{S}_{\theta}\) that is inherently deterministic since it maps a given input realization \(a\) to a unique output realization \(u=\mathcal{S}_{\theta}(a)\). Denoising diffusion probabilistic model.Recently, DDPM has outperformed traditional deep generative models in image synthesis thanks to its ability to generate high quality samples from complex and high dimensional distributions [5; 8; 22; 24]. Inspired by sampling technique from non-equilibrium thermodynamics, DDPM utilizes the time reversal property of a class of parameterized Markov chains in order to learn a sequence of latent distributions that eventually converge to the data distribution [25]. Once properly trained, the model sequentially transforms a given Gaussian noise into a sample from the data distribution. Contributions of our work.Based on DDPM, we introduce a probabilistic method, termed as the _probabilistic diffusion neural operator_ (PDNO), for solving the operator learning problem. * To achieve this, we propose a probabilistic formulation of the operator learning problem by reformulating it as a conditional distribution learning problem, i.e., given an input \(a\), we learn a generative model for the conditional distribution \(u|a\). * The proposed formulation allows a natural application of DDPM as the underlying sampler for \(u|a\). To the best of our knowledge, this is the first work employing DDPM for the operator learning problem. * Since our method maximizes the evidence likelihood and it learns a class of distributions \(u|a\), it is directly applicable to a noisy data set where the output \(u|a\) is beyond a Dirac distribution (i.e., when \(u\) is noise free). Moreover, the learned standard deviation (std) of \(u|a\) provides an informative quantification of the uncertainty and hence the predictive quality of the learned model. * In comparison with FNO or ONet, our method requires fewer regularity assumptions in addition to DDPM being well-known for its flexibility as a generative model when learning complex distributions. Hence, we anticipate the method being capable of learning highly non-smooth solutions (see the advection PDE example in Section 4). The main goal of this work is to explore what possibilities DDPM can offer in computational science. Our numerical experiments in Section 4 suggest that PDNO can be a promising complement to the data-driven approach for operator learning. Despite the advantages summarized above, the current method does suffer certain limitations compared to state-of-the-art methods such as FNO and ONet. We discuss these limitations in detail in the conclusion section. ## 2 The probabilistic formulation The starting point of our method for solving the operator learning problem is to view each output \(u\) for a given input \(a\) as a conditional random variable. Such a viewpoint naturally leads to the probabilistic formulation: to learn a probabilistic conditional model \(q_{\theta}(u|a)\) by maximizing the joint data likelihood \(q_{\theta}(\mathcal{D})\). Since the input distribution of \(a\sim\mu\) is known, this is equivalent to maximizing the following conditional likelihood \[\prod_{n=1}^{N}q_{\theta}(u^{(n)}|a^{(n)})\propto q_{\theta}( \mathcal{D})=\prod_{n=1}^{N}q_{\theta}(u^{(n)}|a^{(n)})p(a^{(n)}). \tag{4}\] Although for a given input \(a\) the corresponding output \(u\), as a solution to the PDE (1), is deterministic, there is no loss of generality in modeling it as random since \(u|a\) can be viewed as a Dirac measure \(\delta_{u}(\cdot|a)\) concentrated at the solution \(u\) if no noise is added to \(u\). Indeed, solving a deterministic problem from a stochastic perspective is common in machine learning, e.g., Gaussian process regression uses a Gaussian random field to describe a distribution over functions [32]. The advantage of the probabilistic approach for operator learning is two-fold: _(i) Flexibility:_ the probabilistic model \(q_{\theta}(u|a)\) is more flexible than deterministic models since it does not depend on the specific choice of the cost functional \(\mathcal{L}\). For instance, we can assume that the prediction \(\mathcal{S}_{\theta}(a)\) and the observation \(u\) differ by a Gaussian noise \(\epsilon\), i.e., \(u=\mathcal{S}_{\theta}(a)+\epsilon\) with \(\epsilon\sim\mathcal{N}(0,\sigma_{\mathbf{I}}^{2}\mathbf{I})\). Note that when the noise magnitude \(\sigma_{n}\) is known, maximizing the likelihood (4) leads to (2) with \(\mathcal{L}\) being the \(L_{2}\)-loss. In this work, we do not assume any distribution for the conditional \(q_{\theta}(u|a)\) but rather learn it completely from the data in a generative manner. _(ii) Uncertainty evaluation:_ in practice the data can contain noise due to numerical/measurement errors and similarly the trained model can demonstrate uncertainty due to over-fitting or under-fitting. Aside from the prediction of the output, we are often interested in the quality of the prediction as well. The probabilistic model \(q_{\theta}(\cdot|a)\) allows for quantifying uncertainty of the learned model arising from data sets corrupted by noise. Since our probabilistic framework constructs the conditional distribution \(q_{\theta}(u|a)\), it is directly applicable to the noisy data setting as well and hence we consider two types of data sets in this work: **1) Noise-free data:** for each input \(a\), there is a unique output \(u\) in the noise-free data set \(\mathcal{D}\) such that the pair \((a,u)\) solves (1). Therefore, our probabilistic model \(q_{\theta}(u|a)\) learns a class of Dirac distributions \(\delta_{u}(\cdot|a)\); **2) Gaussian noise data:** we assume a noisy data set \(\mathcal{D}_{\eta}\) obtained from corrupting the output \(u\) by an additive Gaussian noise \(\eta\sim\mathcal{N}(0,\Sigma(a))\), i.e., \(u_{\eta}=u+\eta\). In this case, the data set may contain repeated realizations of \(a\) whose corresponding outputs \(u_{\eta}\) are different due to the noise. Therefore, our probabilistic model \(q_{\theta}(u|a)\) learns a Gaussian distribution \(\mathcal{N}(u;m(a),\Sigma(a))\). Note that we do not assume the covariance \(\Sigma(a)\) is known and hence the maximum likelihood estimation (4) is not equivalent to minimizing the \(L_{2}\)-loss (2). Conditional DDPM for operator learning The specific model that we apply in learning the distribution \(q_{\theta}(u|a)\) is the DDPM, which fits naturally as a sampler with the proposed probabilistic framework. We first briefly review the basic framework for the unconditional DDPM and then generalize it to the conditional setting. ### Denoising diffusion probabilistic model Given a set of unlabeled observations of \(u\), a DDPM learns a sampler of the underlying distribution \(p(u)\) by constructing a time reversible Markov chain. Specifically, DDPM views the observation \(u\) as an initial state \(u_{0}\) of a Markov chain defined by the forward transition probability \[p(u_{t}|u_{t-1})=\mathcal{N}(u_{t};\sqrt{1-\beta_{t}}u_{t-1},\beta_{t}\mathbf{ I}),\qquad t=1,\dots,T \tag{5}\] where \(\beta_{t}\) is the variance schedule such that the final state \(u_{T}\) is approximately standard normal for \(T\) sufficiently large. The joint distribution of the Markov chain (conditioned on \(u_{0}\)) is hence given by \[p(u_{1:T}|u_{0})=\prod_{t=1}^{T}p(u_{t}|u_{t-1}). \tag{6}\] The process \(u_{1:T}\) can be considered as a procedure of gradually adding noise to \(u_{0}\) until it becomes a Gaussian noise \(u_{T}\). Conversely, we can start from a normal distribution \(q_{\theta}(u_{T})\) and gradually denoise it to obtain the desired distribution \(q_{\theta}(u_{0})\approx p(u_{0})\). This is accomplished by defining the reverse transition probability \[q_{\theta}(u_{t-1}|u_{t})=\mathcal{N}(u_{t-1};m_{\theta}(t,u_{t}),\Sigma_{ \theta}(t,u_{t})),\qquad t=T,\dots,1, \tag{7}\] where \(m_{\theta}(t,u_{t})\) and \(\Sigma_{\theta}(t,u_{t})\) are the mean and the covariance of the distribution, respectively. The joint distribution of the reverse Markov chain is hence given by \[q_{\theta}(u_{0:T})=q_{\theta}(u_{T})\prod_{t=1}^{T}q_{\theta}(u_{t-1}|u_{t}). \tag{8}\] The DDPM parameterizes \(m_{\theta}\) and \(\Sigma_{\theta}\) using neural networks in order to maximize the log-likelihood \(\log q_{\theta}(u_{0})\). Due to the intractability of the likelihood, the training is performed by maximizing the usual variational lower bound (VLB) \[-\log q_{\theta}(u_{0})\leq-\mathrm{VLB}(\theta)\triangleq\mathbb{E}_{p(u_{0: T})}\left[\log\frac{p(u_{1:T}|u_{0})}{q_{\theta}(u_{0:T})}\right].\] Once we learn the reversed model \(q_{\theta}\), we can sample from \(q_{\theta}(u_{0})\approx p(u_{0})\) by running the reverse time Markov chain from time \(T\) to \(0\). ### Conditional DDPM Recall that we are given a data set of input-output pairs \(\mathcal{D}\) and we aim at learning the conditional distribution \(q_{\theta}(u|a)\). To adapt the DDPM to supervised learning, we let the initial state \(u_{0}\) depend on the input \(a\) and define the conditional forward transition distribution \[p(u_{t}|u_{t-1},a)=\mathcal{N}(u_{t};\sqrt{1-\beta_{t}}u_{t-1},\beta_{t} \mathbf{I}),\qquad t=1,\dots,T\,. \tag{9}\] Note that the above conditional transition distribution remains the same as the unconditional one (5). However, the input \(a\) does affect the parameterization of the conditional reverse transition distribution \[q_{\theta}(u_{t-1}|u_{t},a)=\mathcal{N}(u_{t-1};m_{\theta}(t,u_{t},a),\Sigma_ {\theta}(t,u_{t},a))\,, \tag{10}\] where \(m_{\theta}(t,u_{t},a)\) and \(\Sigma_{\theta}(t,u_{t},a)\) are the mean and the covariance whose parameterization will be worked in next section. This can be seen from the Bayes' formula that the true conditional reverse transition distribution \[p(u_{t-1}|u_{t},a)=\frac{p(u_{t}|u_{t-1},a)p(u_{t-1}|a)}{p(u_{t}|a)}\] depends on \(a\) as both \(p(u_{t-1}|a)\) and \(p(u_{t}|a)\) depend on \(a\). Similar to the unconditional DDPM, we perform the training by maximizing the VLB of the conditional log likelihood \[-\log q_{\theta}(u_{0}|a)\leq-\text{VLB}(\theta;a)\triangleq\mathbb{E}_{p(u_{0: T}|a)}\left[\log\frac{p(u_{1:T}|u_{0})}{q_{\theta}(u_{0:T}|a)}\right].\] To derive a computable form of the VLB, we mimic the unconditional PPDM and write \[-\text{VLB}=L_{T}+\sum_{t=2}^{T}L_{t-1}+L_{0}\,, \tag{11}\] where \[L_{0}(\theta;a) =\mathbb{E}_{p(u_{1}|a)}\left[-\log q_{\theta}(u_{0}|u_{1},a)\right]\] \[L_{t-1}(\theta;a) =\mathbb{E}_{p(u_{t},u_{0}|a)}\left[\text{KL}(p(u_{t-1}|u_{t},u_{ 0})||q_{\theta}(u_{t-1}|u_{t},a))\right],\qquad t=2,\ldots,T\] \[L_{T}(\theta;a) =\mathbb{E}_{p(u_{0}|a)}\left[\text{KL}(p(u_{T}|u_{0})||q_{ \theta}(u_{T}))\right]\] For completeness, we show the derivation of (11) in Appendix B.1. Since \(q_{\theta}(u_{T})\) is assumed to be a standard Gaussian, \(L_{T}\) is independent of the parameter \(\theta\). ### Parameterization for the reverse process In order to minimize \(L_{t-1}\) for \(t=2,\ldots,T\), we first apply Bayes' formula and the reparameterization trick to obtain \[p(u_{t-1}|u_{t},u_{0})=\mathcal{N}(u_{t-1};m(t,u_{t},u_{0}),\Sigma(t,u_{t},u_{ 0})), \tag{12}\] with \[m(t,u_{t},u_{0})=\frac{1}{\sqrt{\alpha_{t}}}\left(u_{t}-\frac{1-\alpha_{t}}{ \sqrt{1-\bar{\alpha}_{t}}}\epsilon_{t}\right),\qquad\Sigma(t,u_{t},u_{0})= \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}\mathbf{I}\,,\] where \(\bar{\alpha}_{t}=\prod_{n=1}^{t}\alpha_{n}\) with \(\alpha_{t}=1-\beta_{t}\) and \(\epsilon_{t}\) is a standard Gaussian (see Appendix B.2 for derivation). Note that the covariance \(\Sigma(t,u_{t},u_{0})\) only depends on the time \(t\) but not the state \(u_{t}\) and \(u_{0}\). Next, we parameterize \(q_{\theta}(u_{t-1}|u_{t},a)\) so that it is of the same form as \(p(u_{t-1}|u_{t},u_{0})\). Following the unconditional DDPM, we choose the following parametric form \[q_{\theta}(u_{t-1}|u_{t},a)=\mathcal{N}(u_{t-1};m_{\theta}(t,u_{t},a),\Sigma_ {\theta}(t,u_{t},a)) \tag{13}\] with \[m_{\theta}(t,u_{t},a)=\frac{1}{\sqrt{\alpha_{t}}}\left(u_{t}-\frac{1-\alpha_{t }}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(t,u_{t},a)\right),\] where \(\epsilon_{\theta}(t,u_{t},a)\) is a neural network function that approximates the the standard Gaussian \(\epsilon_{t}\). The choice of the covariance depends on the data distribution as follows \[\Sigma_{\theta}(t,u_{t},a)=\begin{cases}\frac{1-\bar{\alpha}_{t-1}}{1-\bar{ \alpha}_{t}}\beta_{t}\mathbf{I}&\text{Noise free observation,}\\ \beta_{t}\mathbf{I}&\text{Observations with additive Gaussian noise.}\end{cases} \tag{14}\] In the case of data set with non-Gaussian noise, a linear combination of the above two covariances can be used for introducing an additional hyper-parameter \(\lambda\). Note that \(\Sigma_{\theta}(t,u_{t},a)\) in (14) is indeed not parameterized by \(\theta\) and is independent of \(u_{t}\) and \(a\). Comparing the two distributions (12) and (13) in KL divergence and empirically ignoring the pre-factor lead to \[L_{t-1}(\theta)=\mathbb{E}_{p(\epsilon_{t})p(u_{0}|a)}\left[\|\epsilon_{t}- \epsilon_{\theta}(t,\sqrt{\bar{\alpha}_{t}}u_{0}+\sqrt{1-\bar{\alpha}_{t}} \epsilon_{t},a)\|^{2}\right],\qquad t=2,\ldots,T, \tag{15}\] where we have replaced \(u_{t}\) by (B.1) using the reparameterization trick. Similarly, we can calculate \(L_{0}\) and ignore the pre-factor which gives \[L_{0}(\theta)=\mathbb{E}_{p(\epsilon_{1})p(u_{0}|a)}\left[\|\epsilon_{1}- \epsilon_{\theta}(1,\sqrt{\bar{\alpha}_{1}}u_{0}+\sqrt{1-\bar{\alpha}_{1}} \epsilon_{1},a)\|^{2}\right]. \tag{16}\] Unifying (15) and (16) and averaging over the input \(a\) leads to the final loss function to be optimized: \[L(\theta)=\mathbb{E}_{t\sim\text{Unif}(1,\cdots,T)}\mathbb{E}_{p(a,u_{0})} \mathbb{E}_{p(\epsilon_{t})}\left[\|\epsilon_{t}-\epsilon_{\theta}(t,\sqrt{ \bar{\alpha}_{t}}u_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{t},a)\|^{2}\right]\,, \tag{17}\] where \(\epsilon_{t}\) are i. i. d. standard Gaussians. ### Probabilistic diffusion neural operator The resulting PDNO algorithm consists of two stages: the training stage that minimizes \(L(\theta)\) in order to learn the reversed conditional Markov model \(q_{\theta}(u_{t-1}|u_{t},a)\) and the inference stage that samples sequentially from \(q_{\theta}(u_{t-1}|u_{t},a)\), as depicted in the schematic diagram 1. Algorithm 1 summarizes the procedure for training and sampling of PDNO. As indicted in (13), we parameterize the noise predictor \(\epsilon_{\theta}\) for \(q_{\theta}(u_{t-1}|u_{t},a)\), which has been empirically observed to produce higher accuracy than parameterizing the mean \(m_{\theta}\) directly. For implementation, we follow the architecture presented in [8] to use a UNet backbone to represent the noise predictor \(\epsilon_{\theta}(t,u_{t},a)\) with the sinusoidal position embedding for \(t\)[21]. In comparison with the unconditional DDPM, the conditional DDPM utilized by PDNO takes an additional argument \(a\) as the input to the UNet. In our implementation, \(a\) and \(u_{t}\) are channel-wise concatenated before being fed into the UNet. As a probabilistic model, PDNO learns a class of conditional distributions \(q_{\theta}(\cdot|a)\) rather than the deterministic solution operator. For a given test input \(a\), \(N_{s}\) approximated samples of \(u|a\), denoted by \(\{\mathcal{S}_{\theta}^{(n)}(a)\}_{n=1}^{N_{s}}\), are required in order to obtain various statistics of the conditional distribution. This is the case when we handle noisy data sets. However, it should be emphasized that, in the case of noise-free data where we learn a class of point distribution \(\delta_{u}(\cdot|a)\) concentrated at \(u\), only a single sample is required to estimate the corresponding solution \(u\) provided that the model is well trained. In practice, we validate the learned model \(\delta_{u}(\cdot|a)\) with a diagnostic run on the test data set to check whether the estimated standard deviation is approximately zero. ``` 0: Training Stage: 1:repeat 2:\((a,u)\sim p(a,u)\) 3:\(t\sim\text{Unif}(1,\cdots,T)\) 4:\(\epsilon\sim\mathcal{N}(0,I)\) 5: stochastic gradient descent on \(L(\theta)\) in (17) 6:until converged 7: 8:for a test input \(a\) 9:for\(n=1,\cdots,N_{s}\)do 10:\(u_{T}^{(n)}\sim\mathcal{N}(0,I)\) 11:for\(t=T,\cdots,1\)do 12:\(u_{t-1}^{(n)}\sim q_{\theta}(u_{t-1}|u_{t},a)\) in (13) 13:return statistics of \(\{u_{0}^{(n)}\}_{n=1}^{N_{s}}\) ``` **Algorithm 1** PDNO ## 4 Computational benchmarks We demonstrate efficacy of PDNO on several benchmarks and provide comparison with results obtained by FNO [13]. Denote \(\widetilde{D}\) as the test data set. We evaluate the prediction accuracy and uncertainty of the model \(\mathcal{S}_{\theta}\) by the mean relative \(L_{2}\)-error (MRLE) and the mean standard deviation (MSTD) that are defined by \[\frac{1}{|\widetilde{\mathcal{D}}|}\sum_{(a,u)\in\widetilde{\mathcal{D}}}\frac {\|\text{mean}(\{\mathcal{S}_{\theta}^{(n)}(a)\}_{n=1}^{N_{s}})-u\|_{2}}{\|u \|_{2}}\quad\text{ and }\quad\frac{1}{|\widetilde{\mathcal{D}}|}\sum_{(a,u)\in \widetilde{\mathcal{D}}}\frac{\|\text{std}(\{\mathcal{S}_{\theta}^{(n)}(a)\}_ {n=1}^{N_{s}})\|_{2}}{N_{s}}\,,\] respectively. All experiments are implemented with a single Tesla T4 GPU. More implementation details about the experiments are given in Appendix C. **Test 1** (_Elliptic equation in 1D_) We start with a problem whose analytical solution is known explicitly: \[-(a(x)u^{\prime}(x))^{\prime}=0\,,\qquad x\in(-1,1)\,, \tag{18}\] with the boundary condition \(u(-1)=0\) and \(u(+1)=1\), and \(u^{\prime}\) denotes \(du/dx\). We are interested in learning the solution operator \(\mathcal{S}\) mapping a log-normal field \(a\in\mathcal{A}=L^{2}((-1,1);\mathbb{R})\) to the solution Figure 1: A schematic diagram of the inference stage of PDNO field \(u\in\mathcal{U}=L^{2}((-1,1);\mathbb{R})\). Details on the data generation and the analytical solution can be found in Appendix C.1. We apply PDNO in both settings of noise-free data and noisy data. In order to assess the distribution of the learned solution field, we choose two distinct grid points \(x_{1}\) and \(x_{2}\) over \((-1,1)\) and apply the learned PDNO to predict the joint distribution \((u(x_{1}),u(x_{2}))\). To generate an approximation to the joint distribution, we first sample \(\mathcal{S}_{\theta}(a)\) for \(\tilde{N}=10^{5}\) inputs \(a\) from the test data set and then evaluate the learned outputs \(\mathcal{S}_{\theta}(a)\) at \(x_{1}\) and \(x_{2}\) to obtain \((u_{\theta}(x_{1}),u_{\theta}(x_{2}))\). In Figure 2, we examine the 2D joint histogram sampled from the exact solution \((u(x_{1}),u(x_{2}))\) and the approximated solution \((u_{\theta}(x_{1}),u_{\theta}(x_{2}))\) based on \(\tilde{N}=10^{5}\) inputs \(a\sim\mu\). The result shows the errors are concentrated at the order of \(10^{-5}\) (see Figure 2 Right). We also test the performance of PDNO on a Gaussian noise data set. Figure A1 in Appendix C.1 demonstrates that PDNO is able to learn the mean as well as the standard deviation from a noisy data set, which is the key advantage of PDNO over other deterministic methods. **Test 2** (_Elliptic equation in 2D_) To test the method on higher dimensional input-output data sets, we apply the proposed PDNO to learn a solution operator \(\mathcal{S}:a\in L^{\infty}(D;\mathbb{R})\mapsto u\in H^{1}_{0}(D;\mathbb{R})\) to a 2D elliptic PDE: \[-\nabla\cdot(a(x)\nabla u(x)) =f(x) x\in D \tag{19}\] \[u(x) =0 x\in\partial D,\] where the input \(a\) is a random field defined in Appendix C.2. See Appendix C.2 for more on simulation parameters and setup of the experiment. As shown in Table 1, in the noise-free data setting, PDNO achieves lower MRLE than FNO in various mesh sizes. With this example we also present a study for learning the solution operator along with the uncertainty from a noisy data set \(\mathcal{D}_{\eta}\) in which the output data are corrupted by a Gaussian noise \(\eta\sim N(0,\sigma^{2}\mathrm{I})\). We present the results in Figure A2 of Appendix C.2 where both the mean and the standard deviation are predicted for a particular input \(a\). Again the result shows that PDNO allows us to learn the solution operator and recover the magnitude of the noise. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Parameter**} & \multicolumn{2}{c}{\(M=64\)} & \multicolumn{2}{c}{\(M=128\)} & \multicolumn{2}{c}{\(M=256\)} \\ & & MRLE & MSTD & MRLE & MSTD & MRLE & MSTD \\ \hline FNO & 2, 376, 449 & 0.0060 & N/A & 0.0078 & N/A & 0.0050 & N/A \\ PDNO & 35, 707, 841 & 0.0046 & \(5.08\mathrm{e}{-07}\) & 0.0022 & \(3.52\mathrm{e}{-07}\) & 0.0037 & \(1.21\mathrm{e}{-05}\) \\ \hline \hline \end{tabular} \end{table} Table 1: MRLE and MSTD for the 2D elliptic PDE in Test 2. Figure 2: 2D joint histograms of the solution field evaluated at \(x_{1}\) and \(x_{2}\) based on \(\tilde{N}=10^{5}\) test points. **Left**: the histogram of the approximate solution; **Center**: the histogram of the exact solution; **Right**: the histogram of the error, showing the errors at all test points to be roughly of the order \(10^{-5}\) (errors are concentrated at the lower left corner). **Test 3** (_Burgers' equation_) We consider Burgers' equation with periodic boundary conditions on the spatial domain \(D=(0,1)\), \[\partial_{t}u(x,t)+u(x,t)\partial_{x}u(x,t) =\nu\partial_{xx}u(x,t),\qquad x\in D\,,\quad t\in(0,1] \tag{20}\] \[u(x,0) =u_{0}(x),\qquad x\in D\] where \(\nu\) is the viscosity coefficient and \(u_{0}\in L^{2}_{\mathrm{per}}(D;\mathbb{R})\) is the random input initial condition whose generation is described in Appendix C.3. Similar to [13], we aim to learn the operator \(\mathcal{S}:L^{2}_{\mathrm{per}}(D;\mathbb{R})\to L^{p}_{\mathrm{per}}(D; \mathbb{R})\), for \(p>1\), mapping the initial condition \(u_{0}\) to the solution \(u(\cdot,1)\). Prediction result is summarized in Table 2 and the decay of MRLE is shown in Figure 3. We observed that FNO achieves the claimed mesh invariant property in this example. Although PDNO achieves low MRLE for the experiment, FNO performs significantly better than PDNO in various mesh size. We argue that the great performance of FNO for this example is due to the periodic boundary conditions used in the equation. **Test 4** (_Advection equation_) This example aims at investigating the performance of PDNO for an advection PDE whose solution is highly non-smooth. We test our method for the 1D advection equation with the inflow condition on the domain \(D=(0,1)\): \[\partial_{t}u(x,t)+\partial_{x}u(x,t)=0\qquad x\in D\,, \tag{21}\] \[u(x,0)=u_{0}(x)\,,\quad\text{and}\quad u(0,t)=0\qquad t\geq 0.\] The solution operator learned in this problem is the mapping from the initial condition \(u_{0}\) to a solution at a fixed time \(t_{f}>0\), i.e., \(\mathcal{S}:u_{0}\in L^{\infty}(D;\mathbb{R})\mapsto u(\cdot,t_{f})\in L^{ \infty}(D;\mathbb{R})\). The distribution of the input \(u_{0}\) is chosen in a way such that the output solution \(u(\cdot,t_{f})\) is highly non-smooth. See C.4 for more details on the setup. The results of the experiment are shown in the Table 3 which compares the MRLE obtained by PDNO and FNO. In this challenging example, PDNO significantly outperforms FNO in all mesh sizes. Figure A3 depicts the solutions \(u\) learned by FNO and PDNO at particular inputs \(a\). Compared to FNO, our PDNO is able to learn very fine details of the highly non-smooth solution. ## Conclusion We have presented a probabilistic generative model based on DDPM for learning the solution operator for parametric PDEs. Our method differs from other state-of-the-art approaches in that we view the input-to-output mapping as a family of conditional distributions conditioned on the input parameter. Therefore, our model predicts the solution with quantified uncertainty. While the method has various advantages, we also observed several limitations in numerical experiments. More specifically, the current method learns a model that is trained on a predefined mesh, which limits the model's ability to predict solutions at points outside the mesh. More importantly, the \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Parameter**} & \multicolumn{2}{c}{M=512} & \multicolumn{2}{c}{M=1024} & \multicolumn{2}{c}{M=2048} \\ & & MRLE & MSTD & MRLE & MSTD & MRLE & MSTD \\ \hline FNO & \(582,849\) & \(0.0005\) & N/A & \(0.0005\) & N/A & \(0.0005\) & N/A \\ PDNO & \(3,980,129\) & \(0.0047\) & \(5.47\mathrm{e}{-04}\) & \(0.0053\) & \(7.93\mathrm{e}{-04}\) & \(0.0152\) & \(1.51\mathrm{e}{-03}\) \\ \hline \hline \end{tabular} \end{table} Table 2: MRLE and MSTD on Burgers’ equation in Test \(3\). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Parameter**} & \multicolumn{2}{c}{M=256} & \multicolumn{2}{c}{M=512} & \multicolumn{2}{c}{M=1024} \\ & & MRLE & MSTD & MRLE & MSTD & MRLE & MSTD \\ \hline FNO & \(2,328,961\) & \(0.0514\) & N/A & \(0.1072\) & N/A & \(0.1325\) & N/A \\ PDNO & \(3,980,129\) & \(0.0062\) & \(7.91\mathrm{e}{-04}\) & \(0.0081\) & \(7.38\mathrm{e}{-04}\) & \(0.0445\) & \(2.59\mathrm{e}{-03}\) \\ \hline \hline \end{tabular} \end{table} Table 3: MRLE and MSTD for the Advection problem in Test 4. mesh-dependent property requires us to retrain the model in order to learn over different meshes. Finally, since sampling of \(u|a\) requires sequential sampling for \(T\) steps, the inference of our model is relatively slow compared to other deterministic models. To deal with the mesh-dependent issue, we propose to carry out the diffusion learning over an appropriate latent space [20], e.g., over the Fourier space [13] or the spectral space [19]. Alternatively, we can take advantage of the cascaded diffusion model pipeline to make predictions on a fine mesh based on a model pre-trained on a coarse mesh [9]. As for the sampling efficiency, the denoising diffusion implicit models can be applied to accelerate the inference with a slight compromise in accuracy [23]. Finally, the current implementation utilizes a UNet architecture to represent the transition density, which often involves larger number of parameters than the architecture for FNO. A more efficient DDPM based solver calls for a special purpose architecture for the operator learning application. ## Acknowledgments and Disclosure of Funding Research of P.P. was supported by the U.S. Army Research Office Grant W911NF2220234. Computations in this research were supported in part through the use of DARWIN computing system: DARWIN - A Resource for Computational and Data-intensive Research at the University of Delaware and in the Delaware Region, which is supported by NSF under Grant Number: 1919839.
2304.03156
Patch-wise Features for Blur Image Classification
Images captured through smartphone cameras often suffer from degradation, blur being one of the major ones, posing a challenge in processing these images for downstream tasks. In this paper we propose low-compute lightweight patch-wise features for image quality assessment. Using our method we can discriminate between blur vs sharp image degradation. To this end, we train a decision-tree based XGBoost model on various intuitive image features like gray level variance, first and second order gradients, texture features like local binary patterns. Experiments conducted on an open dataset show that the proposed low compute method results in 90.1% mean accuracy on the validation set, which is comparable to the accuracy of a compute-intensive VGG16 network with 94% mean accuracy fine-tuned to this task. To demonstrate the generalizability of our proposed features and model we test the model on BHBID dataset and an internal dataset where we attain accuracy of 98% and 91%, respectively. The proposed method is 10x faster than the VGG16 based model on CPU and scales linearly to the input image size making it suitable to be implemented on low compute edge devices.
Sri Charan Kattamuru, Kshitij Agrawal, Shyam Prasad Adhikari, Abhishek Bose, Hemant Misra
2023-04-06T15:39:11Z
http://arxiv.org/abs/2304.03156v1
# Patch-wise Features for Blur Image Classification ###### Abstract. Images captured through smartphone cameras often suffer from degradation, blur being one of the major ones, posing a challenge in processing these images for downstream tasks. In this paper we propose low-compute lightweight patch-wise features for image quality assessment. Using our method we can discriminate between blur vs sharp image degradation. To this end, we train a decision-tree-based XGBoost model on various intuitive image features like gray level variance, first and second order gradients, texture features like local binary patterns. Experiments conducted on an open dataset show that the proposed low compute method results in 90.1% mean accuracy on the validation set, which is comparable to the accuracy of a compute-intensive VGG16 network with 94% mean accuracy fine-tuned to this task. To demonstrate the generalizability of our proposed features and model we test the model on BHBID dataset and an internal dataset where we attain accuracy of 98% and 91%, respectively. The proposed method is 10x faster than the VGG16 based model on CPU and scales linearly to the input image size making it suitable to be implemented on low compute edge devices. Classification, Blur Classification, Image Quality Assessment, XGBoost, Low-compute + Footnote †: journal: Pattern Recognition + Footnote †: journal: Pattern Recognition + Footnote †: journal: Pattern Recognition ## 1. Introduction Images have become ubiquitous with the advent of smartphone devices with advanced camera systems. This explosion in digital image data has been the driving force behind many computer vision applications such as object detection, face recognition (Shi et al., 2017), medical image classification (Beng et al., 2019), document recognition, and self-driving cars (Beng et al., 2019). These tasks often rely on high quality images with objects in sharp focus and any degradation in image quality leads to adverse performance (Koh et al., 2019). Blur is one such undesirable degradation effect that is commonly found in images. This is caused by factors such as lack of focus or due to relative motion between the camera and target. Intuitively, we observe that the degree of blur in an image inversely effects the amount of information within an image. And this poses a challenge to extract semantic level information from it. Thus, blur classification is proposed as a preprocessing step to identify and reject low quality images either at the time of image acquisition or before processing them for downstream tasks. Blur detection, segmentation and image deblurring are areas of active research (Bangalore et al., 2019; Krizhevsky et al., 2014; Krizhevsky et al., 2014) which help identify and rectify the images affected with blur. The objective of blur detection is to identify the region within an image that is affected with blur. While blur segmentation is posed as a pixel-wise classification task to generate a blur map. Image deblurring restores a sharp image from a given blurred image. Some approaches to image deblurring pose this task as an image filtering problem (Krizhevsky et al., 2014), while some approaches pose this as an image-to-image translation problem (Krizhevsky et al., 2014). The tasks of image quality assessment and blur classification can be divided into reference-based and non-reference based methods. Non-reference based methods are more challenging since we do not have any access to a sharp reference image which contains accurate information of the scene. Convolutional neural networks (CNNs) are a well-known approach (Krizhevsky et al., 2014; Krizhevsky et al., 2014) for image classification, but there exist a few obvious downsides of using CNNs for the task of blur classification. These are data-hungry models and tend to perform best with training on large scale datasets. They have parameters in the order of millions and are trained on optimized GPU-based architectures that take considerably long time to train. Even after training such a large model, in most cases, they cannot be deployed directly on the edge or low compute devices. We generally require sophisticated hardware designed specifically to run these models along with additional optimization steps like distillation, pruning or quantization (Chen et al., 2019). For our application we perform blur classification for image quality assessment, to identify and reject images that contain unintentional blur (as shown in Fig. 1). Additionally, we require a model that accurately performs this classification with fast inference times. In this work, * We propose a method for non-reference based blur classification that utilizes conceptually intuitive features * We train the proposed set of features using Extreme Gradient Boosting (XGBoost) classifier * We train convolutional neural networks of varying complexity on the task of blur classification and compare these against our hand-crafted features across metrics like accuracy, roc-auc and benchmark them on inference time We experimentally demonstrate that our proposed feature sets combined with XGBoost classifier present a model that is accurate and provides fast inference times. ## 2. Related Works We focus on blur detection or classification at the image level to classify the presence of blur in an image. **Feature based methods -** Blur classification is a well studied area of research and multiple features like Statistical Features, Texture Features, Image quality metrics, Spectrum and Transform features, and Local power spectrum Features have been used to classify blur (Birshick et al., 2014; Li et al., 2015; Li et al., 2016). Gradient based methods like Laplacian and Tenengrad capture the regions of rapid changes in intensity, like edges. The presence of high level of edges can be correlated to sharpness of an image. This is extended to capture variance and mean of a laplacian at a global level to represent the amount of blur within an image (Li et al., 2015). The authors of (Li et al., 2015) use a combination of 35 features to classify different types of blur within an image by training a SVM in an one against one method and ensemble multiple such SVMs together to classify the type of blur. Global features do not present an accurate representation of blur around the main object of interest and in (Li et al., 2015) the authors propose using both spatial and frequency features to train a naive bayes classifier on local patches. They further extend the concept to multiscale features to create a blur segmentation map. From (Birshick et al., 2014) it is observed that frequency domain features are less robust to noise. Hence, in our work, we draw inspiration from (Li et al., 2015) and explore the use of gradient based operations applied at a patch level to images for robust blur classification. Local binary pattern (LBP) based descriptors have been used to give a robust representation of image texture. In (Li et al., 2015) the authors propose a modification to LBP by using only uniform patterns, where the local grid contains only two transitions from zero to one or vice versa. The authors utilize this formulation for the task of defocus blur segmentation. **Deep Learning based methods** - Some work has been done to investigate the usefulness of CNNs as a feature extractor and in (Li et al., 2015) they replace handcrafted features with a convolutional neural network, however in their comparison the laplacian features perform better. Modern research in deep learning methods has focused on blur segmentation and rectification of various types of blur (defocus, motion, haze etc). Most of the approaches first detect the focused and out-of-focus regions within an image and subsequently use these segmented regions for deblurring. In (Li et al., 2015) the authors use VGG based FCN to extract relevant features from blurred and sharp regions within an image. The benefit of this approach is that it bypasses the need for handcrafted features, while it also comes with a penalty of high computational requirement, limiting the use in edge devices and fast inference situation. In our work we adapt the feature extraction pipeline and fine-tune this for the task of blur classification. ## 3. Proposed Method We propose a method that uses conceptually simple spatial image features, along with their statistical measures such as the mean and the variance. These features rely on the fact that the texture of sharp images is different from that of a blurred image. As the image is subjected to blur the edge details within the scene are reduced. More precisely, the strength and the count of strong edges in blurred images are lower compared to that of a sharp image in other words, Figure 1. Figure shows the predictions by our best performing model, the rows correspond to samples from BHBID, KBD, Internal Dataset respectively. Columns 1 & 2 correspond to the true positives of blur, columns 3 & 4 correspond to the true positives in sharp, column 5 & 6 correspond to the false positives in blur and sharp respectively in blurred images the gradient follows a heavy tailed distribution. Additionally, blur causes a smoothening effect in images which implies that in a neighbourhood the pixel intensities are closer to the central pixel. The features extracted are as follows: 1. **Normalized gray level variance** This operator is an indication of the overall intensity of the image Eqn. (1). The intensity distribution of a blurred image is relatively packed and on the lower spectrum with an overall uniform distribution of intensity when compared to sharp images. \[NGLV=\frac{\sum_{(i,j)\in\Omega(x,y)}{(\Lambda I(i,j-\mu)^{2}}}{\mu(x,y)}\] (1) _where \(\mu(x,y)\) is the mean value for computed over the neighborhood window \(\Omega(x,y)\)_ 2. **Tenengrad** Sobel is a first order derivative operator and indicates the spatial locations of change in intensity of an image. Locations that tend to have a high change in intensities represent stronger edges. We use the horizontal and vertical Sobel maps to obtain the Tenengrad map Eqn. (2) and use the mean of this operator as our feature. \[TEN(x,y)=\sum_{(i,j)\in\Omega(x,y)}{(S_{x}(i,j)^{2}+S_{y}(i,j)^{2})}\] (2) _where \(S_{x}\) and \(S_{y}\) are the Sobel gradients in \(x\) and \(y\) direction_ 3. **Laplacian** Laplacian operator is the second-order derivative of the input signal. It is highly sensitive to the noise in the input image when compared to Sobel. Laplacian produces high values where there is a rapid change of intensities. We convolve the input image with a Laplacian operator and obtain the mean of the Laplacian map and also extract the variance from the Laplacian map. \[Laplacian_{var}(x,y)=\sum_{(i,j)\in\Omega(x,y)}{(\Lambda I(i,j)-\bar{\Delta I})^{2}}\] (3) _where \(\bar{\Delta I}\) is the mean value of the Laplacian map in a neighbourhood \(\Omega(x,y)\)_ 4. **LBP Sharpness Map** In (Han et al., 2017) the authors propose that the LBP feature descriptor can be leveraged to accurately detect blur within an image patch. For a blurred image, LBP operator relies on the fact that the intensities in a neighbourhood are closer to the central pixel. We use the statistical mean and variance of the LBP descriptor of an image. These features can even help to distinguish between partially blurred images with intended blur from completely unintended blurred images. \[M_{LBP}(x,y)=\frac{\sum_{i=6}^{9}{n(LBP_{8,1}^{rizio}i)}}{N}\] (4) _where \(n(LBP_{8,1}^{rizio}i)\) represents the number of rotation-invariant uniform 8-bit LBP patterns of type i, and N is the total number of pixels in the neighbourhood_ On the whole, we compute the Tenengrad gradient, Laplacian gradient, LBP sharpness map and obtain statistical measures of these maps and the Normalized gray level variance of the input image and stack these together to create a feature vector (Table 1). This descriptor is used for classifying an input image as blur or sharp. We compute the features at a global level, on the entire image. Our experiments (refer experiments 5) reveal that global level features fail to capture finer details of the image and this leads to some obvious misclassifications. We improve upon this by employing the proposed features in a patch-wise manner to capture finer context. We train XGBoost and CNN models using Figure 2. Blurred image from KBD. (a) is the original image and (b) is the same image with a text watermark on the top left. Adding a watermark we observed a 10.7% increase in the mean of the laplacian map, along with an increase of 39.8% in the standard-deviation. Similarly we also observed a 9.5% and 30.2% increase respectively in the mean and standard deviation of the tenengrad map. In (c) we apply patch-wise classification and plot the predictions on each image grid, where each patch depicts the probability-score of an image being blur and the blur prediction. In (d) we can see how the text watermark affects the probability score and the decision of the model. these features for the task of blur classification and report the metrics on different datasets. ## 4. Dataset For the purpose of our experiments, we have used a kaggle dataset1 for training and validation, which we refer to as Kaggle Blur Dataset (KBD). The dataset contains a total of 1050 images split equally across the 3 categories namely sharp, motion-blur, and defocus-blur. Since we do not distinguish between the type of blur in images we have combined the two blur categories into a single category which gives us a total of 700 blur images and 350 sharp images. Footnote 1: Kaggle Blur Dataset - [https://www.kaggle.com/datasets/kwentar/blur-dataset](https://www.kaggle.com/datasets/kwentar/blur-dataset) To demonstrate the generlizability of our approach, we evaluate our model on the BHBID Dataset from (Kang et al., 2018). This dataset contains a total of 1188 sample images. The train split contains 418 blurred images and 200 clear images. While the test split contains the remaining 210 blurred images and 80 sharp images. We also evaluate the performance of this model on our internal test set. This dataset contains indoor images captured from mobile cameras. This contains a total of 407 images of varying resolution. Among all the samples a total of 202 images are blurred and the rest are sharp. ## 5. Experiments In order to compare the different methods we use the KBD to train and validate all of our models. We use BHBID and our internal dataset to test the trained models. We choose XGBoost for this task as it perfectly fits our need for a fast and accurate classifier. Additionally, we train several CNN networks for the blur classification task. In the subsequent section, we discuss the improvements in feature selection and compare the performance of our experiments in Table 2. We train and evaluate the classifiers in a Stratified KFold Cross-validation manner with K set to 5 and repeat this for 5 random shuffles of the dataset. For each run, we use 25% of the samples as our validation set and 75% of the data is used to train our models. We train the XGBoost model with its parameters (max-depth, learning-rate, n-estimators, gamma) set to default values of (6, 0.3, 100, 0) and use the default binary:logistic (binary cross-entropy) as our objective function. ### Global Image Features In the feature extraction stage, spatial image features - tenengrad mean, laplacian mean and variance, and normalized gray level variance are extracted from grayscale images. With this 4-dimensional feature vector as input, we train a XGBoost model. This model achieves a mean AUC of 0.933 and a mean accuracy of 0.869. On our internal test set we observed missclassification of some blurred images, this was due to a text watermark that is introduced by some smartphones while capturing images. Such artefacts skew the gradients and global features especially mean and variance cannot encapsulate these local features Fig.2. ### Patch-wise Image Features To investigate the discriminative ability of the features, we isolated the misclassified images, and conduct experiments by cropping the image regions with text. The models classify the watermarked images accurately in this case. Based on this we develop a patch-wise voting mechanism for blur classification. We can observe from Figure 2 that the watermarks cause a degradation in prediction confidence. Rather than employing our classifier in a sliding window fashion on each patch within a single image, we split the image into 3x3 regular size grids and extract features from each of the grid in a single pass. The features are concatenated to produce single feature vector for each image. We refer to these as patch-wise features. With these features, we observe an increase of 1.8% in accuracy and an increase of 1.2 units in AUC. We conduct ablation on the number of patches, by increasing in number of grids within an image from 3x3 to 5x5 and 7x7 grids. For each we train XGBoost classifiers, and observe an average increase of 2.3% in mean ace and an average increase of 1.67 units in mean auc compared to global features as we move to _smaller grid sizes and a larger number of grids_. This is because high number of patches capture details from the images on a much finer level. We observe that for high resolution images a large number of grids at 7x7 patches are well suited, while for lower resolution images, a smaller number of grids at 3x3 patches perform better. The best results were observed when we use patch-wise features extracted on 7x7 grids, we refer to this as _Grid 7x7_ in Table 2. ### LBP Features We investigate the addition of the global mean and variance of the LBP features to our feature descriptor, resulting in a 6 dimensional feature vector. Training the model on these features we observed a mean increase of 1 unit in AUC and a mean increase of 1.2% in accuracy. We further conduct ablation on local patch-wise features by adding global LBP features and patch-wise LBP features. We observe similar trends (Figure 3) in the increase of AUC and accuracy. We achieve the best performance, an average AUC of 0.956 and average accuracy of 90.1%, using _LBP Grid 7x7_ features. We compare all these methods on the validation sets, plot the metrics in Figure 3 and report them for comparison in Table 2 \begin{table} \begin{tabular}{l l l} \hline \hline **Feature** & **Description** & **Index** \\ \hline _Laplacian_ & Mean and variance of the laplacian map & 1-2 \\ _Tenengrad_ & Mean of the tenengrad map & 3 \\ _Normalized gray level variance_ & Combination of variance and mean from the grayscale image & 4 \\ _LBP Sharpness Map_ & Mean and variance of the sharpness map & 5-6 \\ \hline \hline \end{tabular} \end{table} Table 1. Selected set of features for blur classification. These features are calculated at global or in a patch-wise manner at local level. ### Comparison with convolutional neural networks To present an effective benchmark of performance, we compare our proposed features with various CNNs. We train two CNN classifiers which have a vgg16 backbone. These networks were fine-tuned from different tasks, one was used as an encoder in a defocus segmentation task, we refer to this as _vgg-defocus_ and the other was trained on imagenet for classification, we refer to this as _vgg-imagenet_. VGG16 is a heavy and memory hungry network that is orders of magnitude complex compared to our handcrafted features, with this complexity in mind, we also train a simple cnn network with 6 layers (4 conv + 2 linear). We also fine-tune a Mobilenet (Mobilenet, 2017) network which is preferable for low-latency and low-power systems. We compare these over evaluation criteria such as AUC, accuracy. KBD contains images of varying resolution. In order to train the networks, we follow the method laid out in (Beng et al., 2017) and resize the shorter side of the image to 224 while preserving the aspect ratio. We then perform a center crop of 224x224 from the resized image. For fine-tuning, we freeze the convolutional layers in the networks and modify the structure of the final layers for binary classification. We use data augmentation, which includes a random horizontal and random vertical flip with a probability of 0.6. We fine tune the network using an Adam optimizer with a learning rate set to 1e-3, a batch size of 256, and binary cross entropy loss as our objective function. Similar to the XGBoost setup, we train these networks in a Stratified KFold Cross Validation manner with 25% of the data being used as validation set in each run. To prevent overfitting, we train the vgg networks for 5 epochs each and the other networks for 10 epochs each. We report metrics on the original image resolution and also on images downscaled to 224x224. The adaptive average pooling layer handles the variable input size in the case of original image resolution. We report the mean AUC and mean accuracy metrics on the validation splits in Table 2. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{**Feature Type**} & \multicolumn{1}{c|}{Accuracy} & AUC \\ \hline \multirow{2}{*}{**Global Features**} & Global Feature & \(86.9*3\) & \(0.933*0.02\) \\ \cline{2-4} & Global Feature + LBP & \(88.12*\) & \(0.943*0.01\) \\ \hline \multirow{3}{*}{**Patch-wise Features (No LBP)**} & Grid 3x3 & \(88.7*2\) & \(0.945*0.02\) \\ \cline{2-4} & Grid 5x5 & \(89.3*2\) & \(0.951*0.01\) \\ \cline{2-4} & Grid 7x7 & \(89.6*1\) & \(0.953*0.01\) \\ \hline \multirow{3}{*}{**Patch-wise Features + Global LBP**} & Grid 3x3 + Global LBP & \(89.1*\) & \(0.949*0.01\) \\ \cline{2-4} & Grid 3x5 + Global LBP & \(89.3*2\) & \(0.952*0.01\) \\ \cline{2-4} & Grid 7x7 + Global LBP & \(89.6*2\) & \(0.955*0.01\) \\ \hline \multirow{3}{*}{**Patch-wise Features**} & LBP Grid 3x5 & \(89.2*\) & \(0.95*0.01\) \\ \cline{2-4} & LBP Grid 5x5 & \(89.4*\) & \(0.953*0.01\) \\ \cline{2-4} & LBP Grid 7x7 & \(90.1*\) & \(0.956*0.01\) \\ \hline \multirow{3}{*}{**CNN Methods**} & \(\underline{\textbf{vgg-defocus}}^{*}\) & \(94.2*1*\) & \(0.985*0.01\) \\ \cline{2-4} & \(\underline{\textbf{vgg-imagenet}}^{*}\) & \(93.1*\) & \(0.971*0.02\) \\ \cline{1-1} \cline{2-4} & vgg-defocus & \(83.2*\) & \(0.894*0.04\) \\ \cline{1-1} \cline{2-4} & vgg-imagenet & \(78.3*\) & \(0.888*0.04\) \\ \hline \multicolumn{4}{l}{\(*\) The metrics were calculated on downscaled images} \\ \end{tabular} \end{table} Table 2. Improvements observed by introducing more features and local features. We report the average accuracy and auc score of different the methods across all the validation sets of the KFold cross validation. For the CNN networks we report metrics on input resolution and downsampled images to 224x224. Figure 3. Figure shows the mean accuracy and mean auc plots for the XGB models trained on different features, it can be observed that the metrics increase as we move to finer and more detailed features \begin{table} \begin{tabular}{l l l l} \hline \hline **Network** & **Param** & **Inference** & **Accuracy** \\ & **Count** & **Time** & \\ \hline _VGG16 \(*\)_ & 138M & 16.4s & 93.1 \\ _MobileNetv2 \(*\)_ & 2.3M & 4.3s & 90.8 \\ _Simple CNN Classifier \(*\)_ & 0.96M & 0.94s & 80.5 \\ \hline \multicolumn{4}{l}{\(*\) Accuracy calculated on downscaled images} \\ \end{tabular} \end{table} Table 3. Comparison of CNN methods - we measure the inference time on images of size 1000x1000 ### Results We benchmark the inference time required to classify an image across all the methods. All the models are run on a computer with an Intel i5-8250U CPU running at 1800 MHz using 8GB RAM, running Ubuntu 20.04.3 LTS and Python 3.8.10, no other compiler optimizations are used. For the XGBoost model, we measure the time required to extract the features and perform the classification. For vgg16 we set the batch size to 1 and record the time elapsed for the forward pass. For all the algorithms, we measure the time required to process 5 different images, we repeat this experiment for 10 runs and report the mean of the 10 runs in Fig. 4. We also test our extracted features using support vector machines (SVM) classifiers and compare the performance to XGBoost classifier. They perform 3, 3.4, 3.4 percentage points (accuracy) lower than XGBoost in case of Grid 7x7, Grid 7x7 + Global LBP, LBP Grid 7x7. This shows that the extracted features are discriminative across various classifiers. When discussing CNN classifiers, it is worth noting that the VGG network provides the best performance but is very large and requires a lot of memory. In contrast, the lightweight Mobilenet is reasonably fast and accurate but does not compare well in speed with our selected features. Simple CNN is very fast but it performs worse than Mobilenet Table 3 To demonstrate the generalization of our method we test our methods on two additional datasets. We train our model on the training split of BHBID Dataset and report the results achieved by our best model _LBP Grid 7x7_. We also report the metrics on our internal test set, we use the XGBoost model that was trained on KBD out-of-the-box and perform classification using our best model _LBP Grid 7x7_. The results are summarized in Table 4. A drawback of this approach is that this fails when none of the patches in the image contain features in other words a plain image with no noticeable difference in intensity, in such cases the images are classified as blur. ## 6. Conclusion In this work, we address the task of blur classification for image quality assessment. We extract various spatial and statistical image features to classify an input image as blurred or sharp. And empirically demonstrate how extracting features at a global level fails to capture the intricate details in an image. We propose a patch-wise method for feature extraction and show its effectiveness for blur classification on multiple datasets. We also apply this method to our internal dataset without any further tweaks after training on KBD. We train XGBoost models and show the superiority of our features both in terms of inference time and classification metrics. Our best performing feature is the LBP Grid 7x7 that has an AUC of **0.95** on \begin{table} \begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{AUC} & \multicolumn{2}{c}{F1 Score} \\ \cline{3-5} & & & Blur & Sharp \\ \hline \hline BHBID (_TestSet_) & 98 & 0.98 & 0.98 & 0.96 \\ Internal Dataset & 91 & 0.96 & 0.89 & 0.91 \\ \hline \end{tabular} \end{table} Table 4. Performance of our best feature set (LBP Grid 7x7) on two different test sets. Figure 4. Comparison of runtimes across different methods. We take 5 different images from KBD, resize it to different sizes and record the time required to run inference on these. It can be seen that the best performing feature _LBP Grid 7x7_ is 10 times faster than VGG KBD and an AUC of **0.98** on the BHBID Dataset. We compare our features against pretrained as well as lightweight CNN models, and find that they are 6.9 percentage points better than the VGG model (at original image resolution) and are twice as fast when compared to the low-latency Mobilenet.
2306.02431
Sharp coefficients bounds for Starlike functions associated with Gregory coefficients
In this paper we introduced the class $\mathcal{S}_{G}^{\ast }$ of analytic functions which is related with starlike functions and generating function of Gregory coefficients. By using bounds on some coefficient functionals for the family of functions with positive real part, we obtain for functions in the class $\mathcal{S}_{G}^{\ast }$ several sharp coefficient bounds on the first six coeffcients and also further sharp bounds on the corresponding Hankel determinants.
Sercan Kazımoğlu, Erhan Deniz, Hari Mohan Srivastava
2023-06-04T18:26:59Z
http://arxiv.org/abs/2306.02431v1
# Sharp coefficients bounds for starlike functions associated with Gregory coefficients ###### Abstract. In this paper we introduced the class \(\mathcal{S}_{G}^{*}\) of analytic functions which is related with starlike functions and generating function of Gregory coefficients. By using bounds on some coefficient functionals for the family of functions with positive real part, we obtain for functions in the class \(\mathcal{S}_{G}^{*}\) several sharp coefficient bounds on the first six coeffcients and also further sharp bounds on the corresponding Hankel determinants. Key words and phrases:Univalent functions, Starlike function, Gregory coefficients, Hankel determinant. 2010 _Mathematics Subject Classification._ Primary 30C45, Secondary 30C50, 30C80. ## 1. Introduction Let \(\mathcal{A}\) be the class of functions \(f\) which are analytic in the open unit disc \(\mathcal{U}=\left\{z:\left|z\right|<1\right\}\) and normalized by the conditions \(f\left(0\right)=f^{\prime}\left(0\right)-1=0.\) Let us denote by \(\mathcal{S}\) the subclass of \(\mathcal{A}\) containing functions which are univalent in \(\mathcal{U}\). An analytic function \(f\) is subordinate to an analytic function \(g\) (written as \(f\prec g\)) if there exists an analytic function \(w\) with \(w\left(0\right)=0\) and \(\left|w\left(z\right)\right|<1\) for \(z\in\mathcal{U}\) such that \(f\left(z\right)=g\left(w\left(z\right)\right).\) In particular, if \(g\) is univalent in \(\mathcal{U}\), then \(f\left(0\right)=g\left(0\right)\) and \(f\left(\mathcal{U}\right)\subset g\left(\mathcal{U}\right).\) In 1992, Ma and Minda [30] gave a unified presentation of various subclasses of starlike and convex functions by replacing the subordinate function \(\left(1+z\right)\diagup(1-z)\) by a more general analytic function \(\varphi\) with positive real part and normalized by the conditions \(\varphi(0)=1\), \(\varphi^{\prime}(0)>0\) and \(\varphi\) maps \(\mathcal{U}\) onto univalently a region starlike with respect to \(1\) and symmetric with respect to the real axis. They introduced the following general class that envelopes several well-known classes as special cases: \(\mathcal{S}^{*}[\varphi]=\left\{f\in\mathcal{A}:\ zf^{\prime}\left(z\right) \diagup f\left(z\right)\prec\varphi(z)\right\}.\) In literature, the functions belonging to this class is called Ma-Minda starlike function. For \(-1\leq B<A\leq 1\), \(\mathcal{S}^{*}[(1+Az)\diagup(1+Bz)]:=\mathcal{S}^{*}[A,B]\) are called the Janowski starlike functions, introduced by Janowski [20]. The class \(\mathcal{S}^{*}(\beta)\) of starlike functions of order \(\beta\)\((0\leq\beta<1)\) defined by taking \(\varphi(z)=(1+(1-2\beta)z)\diagup(1-z).\) Note that \(\mathcal{S}^{*}=\mathcal{S}^{*}(0)\) is the classical class of starlike functions. By taking \(\varphi(z)=1+2\diagup\pi^{2}(\log\left((1+\sqrt{z})\diagup(1-\sqrt{z})\right) ^{2},\) we obtain the class \(\mathcal{S}_{p}\) of parabolic starlike functions, introduced by Ronning [39]. \(\mathcal{S}^{*}[\beta,-\beta]:=\widehat{\mathcal{S}^{*}}(\beta)=\left\{f\in \mathcal{A}:\left|zf^{\prime}\left(z\right)\diagup f\left(z\right)-1\right|< \beta|zf^{\prime}\left(z\right)\diagup f\left(z\right)+1|\right\}\) has been studied in [1, 2]. Recently, the coefficient problem for \(\mathcal{S}^{*}[\varphi]\) in the case of generated functions of some well-known numbers of \(\varphi\) studied by [9, 14, 15, 25, 37]. For example, the ## 1. Introduction The theory of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the geometry of the the geometry of the geometry of the of the geometry of the geometry of the of the geometry of the geometry of the of the geometry of the geometry of the of the geometry of the geometry of the geometry of the of the geometry of the of the geometry of the geometry of the geometry of the of the geometry of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the geometry of the of the of the geometry of the of the geometry of the of the of the geometry of the of the geometry of the of the of the geometry of the of the of the geometry of the of the of the geometry of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the with its domain of definition as the open unit disk \(\mathcal{U}\). In connection with this function, we define the class \[\mathcal{S}_{G}^{\ast}:=\left\{f:\;f\in\mathcal{S}\text{ and }zf^{\prime}(z)/f(z) \prec\Psi(z)\right\}.\] Graphic of \(\Psi(\mathcal{U})\) as follows: ## 2. Preliminary Results The following lemmas is needed for the main results. **Lemma 2.1**.: _[_28_]_ _If \(p(z)=1+p_{1}z+p_{2}z^{2}+p_{3}z^{3}+\cdots\in\mathcal{P}\)\((p_{1}\geq 0),\) then_ \[2p_{2} = p_{1}^{2}+x(4-p_{1}^{2}) \tag{2.1}\] \[4p_{3} = p_{1}^{3}+2(4-p_{1}^{2})p_{1}x-p_{1}(4-p_{1}^{2})x^{2}+2(4-p_{1}^ {2})\left(1-\left|x\right|^{2}\right)y, \tag{2.2}\] _for some \(x,y\in\mathbb{C}\) with \(\left|x\right|\leq 1\) and \(\left|y\right|\leq 1.\)_ **Lemma 2.2**.: _If \(p(z)=1+p_{1}z+p_{2}z^{2}+p_{3}z^{3}+\cdots\in\mathcal{P}\)\((p_{1}\geq 0),\) then_ \[\left|p_{n}\right|\leq 2\;\left(n\geq 1\right), \tag{2.3}\] _and if \(Q\in[0,1]\) and \(Q\left(2Q-1\right)\leq R\leq Q,\) then_ \[\left|p_{3}-2Qp_{1}p_{2}+Rp_{1}^{3}\right|\leq 2. \tag{2.4}\] _Also_ \[\left|p_{n+k}-\mu p_{n}p_{k}\right| \leq 2\max\left\{1,\left|2\mu-1\right|\right\} \tag{2.5}\] \[= 2\left\{\begin{array}{ll}1,&\text{for }0\leq\mu\leq 1,\\ \left|2\mu-1\right|,&\text{otherwise}\end{array}\right..\] _The inequalities (2.3), (2.4) and (2.5) are taken from [8, 28] and [36], respectively._ **Lemma 2.3** (see [37]).: _Let \(\tau,\ \psi,\ \rho\) and \(\varsigma\) satify the inequalities \(0<\tau<1,\ 0<\varsigma<1\) and_ \[8\varsigma\left(1-\varsigma\right)\left[\left(\tau\psi-2\rho \right)^{2}+\left(\tau\left(\varsigma+\tau\right)-\psi\right)^{2}\right]+ \tau\left(1-\tau\right)\left(\psi-2\varsigma\tau\right)^{2} \tag{2.6}\] \[\leq 4\varsigma\tau^{2}\left(1-\tau\right)^{2}\left(1-\varsigma \right).\] _If \(p(z)=1+p_{1}z+p_{2}z^{2}+p_{3}z^{3}+\cdots\in\mathcal{P}\)\((p_{1}\geq 0),\) then_ \[\left|\rho p_{1}^{4}+\varsigma p_{2}^{2}+2\tau p_{1}p_{3}-\frac{3}{2}\psi p_{1}^ {2}p_{2}-p_{4}\right|\leq 2. \tag{2.7}\] **Lemma 2.4**.: _[_34_]_ _Let \(\overline{\mathcal{U}}=\left\{z:\left|z\right|\leq 1\right\}.\) Also, for any real numbers \(a,\)\(b\) and \(c,\) let the quantity \(Y(a,b,c)=\max_{z\in\overline{\mathcal{U}}}\left\{\left|a+bz+cz^{2}\right|+1- \left|z\right|^{2}\right\}.\) If \(ac\geq 0,\) then_ \[Y(a,b,c)=\left\{\begin{array}{cc}\left|a\right|+\left|b\right|+\left|c \right|&\left|b\right|\geq 2(1-\left|c\right|)\\ 1+\left|a\right|+\frac{b^{2}}{4(1-\left|c\right|)}&\left|b\right|<2(1-\left| c\right|)\end{array}\right..\] _Furthermore, if \(ac<0,\) then_ \[Y(a,b,c)=\left\{\begin{array}{cc}1-\left|a\right|+\frac{b^{2}}{4(1-\left|c \right|)}&(-4ac(c^{-2}-1)\leq b^{2};\text{ }\left|b\right|<2(1-\left|c\right|))\\ 1+\left|a\right|+\frac{b^{2}}{4(1+\left|c\right|)}&b^{2}<\min\left\{4(1+\left| c\right|)^{2},-4ac(c^{-2}-1)\right\}\\ R(a,b,c)&\text{(otherwise)}\end{array}\right.,\] _where_ \[R(a,b,c)=\left\{\begin{array}{cc}\left|a\right|+\left|b\right|-\left|c \right|&(\left|c\right|(\left|b\right|+4\left|a\right|)\leq\left|ab\right|)) \\ -\left|a\right|+\left|b\right|+\left|c\right|&(\left|ab\right|\leq\left|c \right|(\left|b\right|-4\left|a\right|))\\ (\left|a\right|+\left|c\right|)\sqrt{1-\frac{b^{2}}{4ac}}&\text{(otherwise)} \end{array}\right..\] ## 3. Coefficient Estimates Finding the upper bound for coefficients have been one of the central topic of research in geometric function theory as it gives several properties of functions. Therefore, we will be interested in the following problem in this section. Problem: Find \(\sup|a_{n}|\)\((n=2,3,\cdots)\) for certain sublasses of univalent functions. In particular, bound for the second coefficient gives growth and distortion theorems for functions in the class. Another one is the coefficient problem related with Hankel determinants. Similarly, using the Hankel determinants (which also deals with the bound on coefficients), Cantor [7] proved that "if ratio of two bounded analytic functions in \(\mathcal{U},\) then the function is rational". The Hankel determinants [32]\(H_{q}(n)\)\((n=1,2,...,\)\(q=1,2,...)\) of the function \(f\) are defined by \[H_{q}(n)=\left|\begin{array}{cccc}a_{n}&a_{n+1}&...&a_{n+q-1}\\ a_{n+1}&a_{n+2}&...&a_{n+q}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n+q-1}&a_{n+q}&...&a_{n+2q-2}\end{array}\right|\quad(a_{1}=1).\] This determinant was discussed by several authors with \(q=2\). For example, we know that the functional \(H_{2}(1):=a_{3}-a_{2}^{2}\) is known as the Fekete-Szego functional and they consider the further generalized functional \(a_{3}-\mu a_{2}^{2},\) where \(\mu\) is some real number. Estimating for the upper bound of \(\left|a_{3}-\mu a_{2}^{2}\right|\) is known as the Fekete-Szego problem. In 1969, Keogh and Merkes [24] solved the Fekete-Szego problem for the class \(\mathcal{S}^{*}\). The second Hankel determinant \(H_{2}(2)\) is given by \(H_{2}(2):=\left|a_{2}a_{4}-a_{3}^{2}\right|.\) The bounds for the second Hankel determinant \(H_{2}(2)\) obtained for the class \(\mathcal{S}^{*}\) in [21]. Lee et al. [27] established the sharp bound to \(\left|H_{2}(2)\right|\) by generalizing their classes using subordination. Moreover, the quantity given by \(H_{3}(1):=a_{3}(a_{2}a_{4}-a_{3}^{2})-a_{4}(a_{4}-a_{2}a_{3})+a_{5}(a_{3}-a_{2}^{2})\) is called third Hankel determinant. Zaprawa [47] proved that \(|H_{3}(1)|<1\) for \(f\in\mathcal{S}^{*}.\) In 2019, Kwon, Lecko and Sim [26] improved the result of Zaprawa as \(|H_{3}(1)|<8\diagup 9.\) This result the best known upper bound of \(|H_{3}(1)|\) for the class \(\mathcal{S}^{*}.\) Recently, Hankel determinants and Fekete-Szego problem have been considered in many papers of Srivastava and his co-authors (see, for example, [40, 41, 42, 43, 44]) and Deniz and his co-authors (see, [12, 23]). In this paper, we seek sharp upper bounds for the coefficients \(a_{j}\) (\(j=2,3,4,5,6\)) and functionals \(H_{2}(2)\) and \(H_{3}(1)\) for functions \(f\) belonging to the class \(\mathcal{S}^{*}_{G}.\) **Theorem 3.1**.: _Let \(f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots\in\mathcal{S}^{*}_{G}.\) Then,_ \[|a_{n}| \leq \frac{1}{2(n-1)}\quad(n=2,3,4,5)\] \[|a_{6}| \leq \frac{13}{48}.\] _All of the above estimates, except that on \(a_{6},\) are sharp._ Proof.: Since \(f\in\mathcal{S}^{*}_{G},\) there exists an analytic function \(w\) with \(w(0)=0\) and \(|w(z)|<1\) in \(\mathcal{U}\) such that \[\frac{zf^{\prime}(z)}{f(z)}=\Psi(w(z))=\frac{w(z)}{\ln\left(1+w(z)\right)}\quad (z\in\mathcal{U}). \tag{3.1}\] Define the functions \(p\) by \[p(z)=\frac{1+w(z)}{1-w(z)}=1+p_{1}z+p_{2}z^{2}+\cdots\quad(z\in\mathcal{U})\] or equivalently, \[w(z) = \frac{p(z)-1}{p(z)+1}=\frac{p_{1}}{2}z+\frac{1}{2}\left(p_{2}- \frac{p_{1}^{2}}{2}\right)z^{2}+\frac{1}{2}\left(p_{3}-p_{1}p_{2}+\frac{p_{1}^ {3}}{4}\right)z^{3}\] \[+\frac{1}{2}\left(p_{4}-p_{1}p_{3}+\frac{3p_{1}^{2}p_{2}}{4}- \frac{p_{2}^{2}}{2}-\frac{p_{1}^{4}}{8}\right)z^{4}\] \[+\frac{1}{2}\left(p_{5}-\frac{1}{2}p_{1}^{3}p_{2}+\frac{3}{4}p_{1 }p_{2}^{2}+\frac{3}{4}p_{1}^{2}p_{3}-p_{2}p_{3}-p_{1}p_{4}+\frac{1}{16}p_{1}^{ 5}\right)z^{5}+\cdots\] in \(\mathcal{U}.\) Then \(p\) is analytic in \(\mathcal{U}\) with \(p(0)=1\) and has positive real part in \(\mathcal{U}\). By using (3.2) together with \(\frac{w(z)}{\ln(1+w(z))},\) it is evident that \[\Psi(w(z)) = 1+\frac{p_{1}z}{4}+\frac{1}{48}\left(-7p_{1}^{2}+12p_{2}\right)z ^{2}+\frac{1}{192}\left(17p_{1}^{3}-56p_{1}p_{2}+48p_{3}\right)z^{3}\] \[+\frac{1}{11520}\left(-649p_{1}^{4}+3060p_{1}^{2}p_{2}-3360p_{1}p _{3}-1680p_{2}^{2}+2880p_{4}\right)z^{4}\] \[+\frac{1}{46080}\left(\begin{array}{c}1739p_{1}^{5}-10384p_{1}^ {3}p_{2}+12240p_{1}^{2}p_{3}+12240p_{1}p_{2}^{2}\\ -13440p_{2}p_{3}+13440p_{1}p_{4}+11520p_{5}\end{array}\right)z^{5}+\cdots.\] Since \[\frac{zf^{\prime}(z)}{f(z)} = 1+a_{2}z+\left(-a_{2}^{2}+2a_{3}\right)z^{2}+\left(a_{2}^{3}-3a_{2 }a_{3}+3a_{4}\right)z^{3}\] \[+\left(-a_{2}^{4}+4a_{2}^{2}a_{3}-2a_{3}^{2}-4a_{2}a_{4}+4a_{5} \right)z^{4}\] \[+\left(a_{2}^{5}-5a_{2}^{3}a_{3}+5a_{2}a_{3}^{2}+5a_{2}^{2}a_{4}- 5a_{3}a_{4}-5a_{2}a_{5}+5a_{6}\right)z^{5}+\cdots,\] it follows by (3.1), (3.3) and (3.4) that \[a_{2} = \frac{p_{1}}{4}, \tag{3.5}\] \[a_{3} = \frac{1}{24}\left(-p_{1}^{2}+3p_{2}\right),\] (3.6) \[a_{4} = \frac{1}{288}\left(4p_{1}^{3}-19p_{1}p_{2}+24p_{3}\right),\] (3.7) \[a_{5} = -\frac{1}{16}\left(\frac{71}{720}p_{1}^{4}+\frac{11}{24}p_{2}^{2 }+\frac{5}{6}p_{1}p_{3}-\frac{85}{144}p_{1}^{2}p_{2}-p_{4}\right),\] (3.8) \[a_{6} = \frac{\left(\begin{array}{c}2267p_{1}^{5}-15677p_{1}^{3}p_{2}+2 1720p_{1}^{2}p_{3}+23370p_{2}^{2}p_{1}\\ -29520p_{1}p_{4}-33120p_{2}p_{3}+34560p_{5}\end{array}\right)}{691200}. \tag{3.9}\] Thus, we get \[|a_{2}| \leq \frac{1}{2}\;\left(\mbox{from \eqref{eq:2.3}}\right),\] \[|a_{3}| = \frac{1}{8}\left|p_{2}-\frac{1}{3}p_{1}^{2}\right|\leq\frac{1}{4} \;\left(\mbox{from \eqref{eq:2.5}}\right),\] \[|a_{4}| = \frac{1}{12}\left|p_{3}-\frac{19}{24}p_{1}p_{2}+\frac{1}{6}p_{1}^ {3}\right|\leq\frac{1}{6}\;\left(\mbox{from \eqref{eq:2.4}}\right),\] \[|a_{5}| = \frac{1}{16}\left|\frac{71}{720}p_{1}^{4}+\frac{11}{24}p_{2}^{2} +\frac{5}{6}p_{1}p_{3}-\frac{85}{144}p_{1}^{2}p_{2}-p_{4}\right|\leq\frac{1}{ 8}\;\left(\mbox{from \eqref{eq:2.7}}\right).\] We now find the estimates on \(|a_{6}|\;.\) Therefore, from the Lemma 2.3, \(|p_{n}|\leq 2\) and (2.5), we find that \[|a_{6}| = \frac{\left|\begin{array}{c}2267p_{1}^{5}-15677p_{1}^{3}p_{2}+2 1720p_{1}^{2}p_{3}+23370p_{2}^{2}p_{1}\\ -29520p_{1}p_{4}-33120p_{2}p_{3}+34560p_{5}\end{array}\right|}{691200}\] \[\leq \frac{41\,|p_{1}|}{960}\left|\frac{2267}{29520}p_{1}^{4}+\frac{77 9}{984}p_{2}^{2}+\frac{181}{246}p_{1}p_{3}-\frac{15677}{29520}p_{1}^{2}p_{2}-p _{4}\right|+\frac{1}{20}\left|p_{5}-\frac{23}{24}p_{2}p_{3}\right|\] \[\leq \frac{41}{240}+\frac{1}{10}=\frac{13}{48}.\] The first five results are sharp for the function \(f:\mathcal{U}\rightarrow\mathbb{C}\) given by \[f_{1}(z) = z\exp\left(\int_{0}^{z}\frac{\Psi(t)-1}{t}dt\right)=z+\frac{1}{2}z^ {2}+\cdots, \tag{3.10}\] \[f_{2}(z) = z\exp\left(\int_{0}^{z}\frac{\Psi(t^{2})-1}{t}dt\right)=z+\frac{ 1}{4}z^{3}+\cdots,\] (3.11) \[f_{3}(z) = z\exp\left(\int_{0}^{z}\frac{\Psi(t^{3})-1}{t}dt\right)=z+\frac{ 1}{6}z^{4}+\cdots,\] (3.12) \[f_{4}(z) = z\exp\left(\int_{0}^{z}\frac{\Psi(t^{4})-1}{t}dt\right)=z+\frac{ 1}{8}z^{5}+\cdots,\] (3.13) \[f_{5}(z) = z\exp\left(\int_{0}^{z}\frac{\Psi(t^{5})-1}{t}dt\right)=z+\frac{ 1}{10}z^{6}+\cdots. \tag{3.14}\] This completes the proof of Theorem 3.1. **Conjecture 3.2**.: Let \(f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots\in\mathcal{S}_{G}^{*}.\) As we see from the extremal functions \(f_{i}(z),\ i=1,2,3,4,\) the first five coeffients are obtained as our sharp bound estimates for \(\left|a_{n}\right|,\)\(n=2,3,4,5.\) By comparing with the extremal function \(f_{5}(z),\) the sixth coefficient is expected as the sharp bound estimate for \(\left|a_{6}\right|.\) So it is an open question whether the bound for \(\left|a_{6}\right|\) is sharp and if one can show the following estimate \(\left|a_{6}\right|\leq\frac{1}{10}\). Also, it is an open question the sharp inequality \(\left|a_{n}\right|\leq\frac{1}{2(n-1)}\) for all \(n\in\mathbb{N}\backslash\{1\}.\) In 2021, Deniz [13] defined a subclass of starlike functions which is related with generalized telephone numbers. He obtained some sharp bounds of coefficients for this class. These results are as follows: \(\left|a_{2}\right|\leq 1,\)\(\left|a_{3}\right|\leq 1,\)\(\left|a_{4}\right|\leq\)\(\frac{8}{9},\)\(\left|a_{5}\right|\leq\)\(\frac{107}{144}\) and \(\left|a_{6}\right|\leq\frac{2381}{3600}.\) When we compare these results with Theorem 3.1, seen that our results are better. **Theorem 3.3**.: _Let \(f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\cdots\in\mathcal{S}_{G}^{*}.\) Then, the following sharp estimates holds:_ \[\left|a_{3}-\mu a_{2}^{2}\right|\leq\frac{1}{4}\max\{1,\left|\mu-\frac{1}{3} \right|\},\ \ \ (\mu\in\mathbb{C})\] \[\left|a_{2}a_{3}-a_{4}\right|\leq\frac{1}{6}\] _and_ \[\left|a_{2}a_{4}-a_{3}^{2}\right|\leq\frac{1}{16}.\] Proof.: From (3.5) and (3.6), we have \[a_{3}-\mu a_{2}^{2}=\frac{1}{24}\left(-p_{1}^{2}+3p_{2}\right)-\mu\frac{p_{1} ^{2}}{16}=\frac{1}{8}\left(p_{2}-\left(\frac{3\mu+2}{6}\right)p_{1}^{2}\right).\] If we take \(\nu=\frac{3\mu+2}{6}\) in known result (see [24]): \(\left|p_{2}-\nu p_{1}^{2}\right|\leq 2\max\{1,\left|2\nu-1\right|\}\) for \(\mu,\nu\in\mathbb{C},\) we obtain that \[\left|a_{3}-\mu a_{2}^{2}\right|\leq\frac{1}{4}\max\{1,\left|\mu-\frac{1}{3} \right|\}.\] Similarly from (3.5), (3.6) and (3.7), we have \[a_{2}a_{3}-a_{4}{=}-\frac{1}{288}\left(24p_{3}-28p_{1}p_{2}+7p_{1}^{3}\right)\] and so from (2.4) \[\left|a_{2}a_{3}-a_{4}\right|{\leq}\frac{1}{288}\left|24p_{3}-28p_{1}p_{2}+7p_{ 1}^{3}\right|=\frac{1}{12}\left|p_{3}-\frac{7}{6}p_{1}p_{2}+\frac{7}{24}p_{1}^ {3}\right|\leq\frac{1}{6}.\] Now, we investigate last estimate in Theorem 3.3. From (3.5), (3.6) and (3.7) again, we see that \[a_{2}a_{4}-a_{3}^{2}=\frac{2p_{1}^{4}-7p_{1}^{2}p_{2}-18p_{2}^{2}+24p_{1}p_{3}} {1152}:=T,\] which, upon applying Lemma 2.1 and assuming that \(s=p_{1}\in[0,2]\), we can write \[T=\frac{4-s^{2}}{2304}\left(-s^{2}x-3\left(s^{2}+12\right)x^{2}+24s\left(1- \left|x\right|^{2}\right)y.\right)\] If \(s=0\), then \(T=-\frac{1}{16}x^{2}.\) Thus, since \(\left|x\right|\leq 1\), we have \[\left|T\right|\leq\frac{1}{16}. \tag{3.15}\] If \(s=2\), then \[\left|T\right|=0. \tag{3.16}\] We now let \(s\in(0,2).\) Then, we can write \[\left|T\right| = \left|\frac{4-s^{2}}{2304}\left(-s^{2}x-3\left(s^{2}+12\right)x^ {2}+24s\left(1-\left|x\right|^{2}\right)y.\right)\right|\] \[\leq \frac{s\left(4-s^{2}\right)}{96}\left[\left|-\frac{s}{24}x-\frac{ s^{2}+12}{8s}x^{2}\right|+1-\left|x\right|^{2}\right]\] \[= \frac{s\left(4-s^{2}\right)}{96}\left[\left|\widetilde{a}+ \widetilde{b}x+\widetilde{c}x^{2}\right|+1-\left|x\right|^{2}\right]\] where \[\widetilde{a}=0,\text{ }\widetilde{b}=-\frac{s}{24},\text{ }\widetilde{c}=- \frac{s^{2}+12}{8s}.\] It follows that \(\widetilde{a}\widetilde{c}=0.\) Also, we easily see that \[\left|\widetilde{b}\right|-2(1-\left|\widetilde{c}\right|)=\frac{7s^{2}-48s+ 72}{24s}>0.\] Therefore, we have \[\left|T\right| \leq \frac{s\left(4-s^{2}\right)}{96}\left(\left|\widetilde{a}\right| +\left|\widetilde{b}\right|+\left|\widetilde{c}\right|\right)=\frac{s\left(4- s^{2}\right)}{96}\left(\frac{s^{2}+9}{6s}\right)\] \[= \frac{-s^{4}-5s^{2}+36}{576}.\] Let \(t=s^{2}\)\(\left(t\in\left(0,4\right)\right).\) Then, we investigate maximum of the function \(H_{0}\) defined by \[H_{0}(t)=\frac{-t^{2}-5t+36}{576}.\] In that case, we have \[\left|T\right|\leq\left|H_{0}(t)\right|\leq\frac{-t^{2}-5t+36}{576}\leq\frac{1}{16}. \tag{3.17}\] _Remark 3.4_.: By using Theorem 3.1 and Theorem 3.3, we have \[\left|H_{3}(1)\right|\leq\left|a_{3}\right|\left|a_{2}a_{4}-a_{3}^{2}\right|+ \left|a_{4}\right|\left|a_{4}-a_{2}a_{3}\right|+\left|a_{5}\right|\left|a_{3}-a _{2}^{2}\right|\leq\frac{43}{576}.\] If we compare the bound \(\frac{43}{576}\) with the results of Zaprawa [47] and Deniz [13], see that this bound is better. ### Logarithmic Coefficients The following formula defines the logarithmic coefficients \(\beta_{n}\) of \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) that belongs to \(\mathcal{S}\) \[G_{f}\left(z\right):=\log\left(\frac{f\left(z\right)}{z}\right)=2\sum_{n=1}^{ \infty}\beta_{n}z^{n}\text{ for }z\in\mathcal{U}. \tag{3.18}\] In many estimations, these coefficients provide a significant contribution to the concept of univalent functions. In 1985, De Branges [10] proved that \[\sum_{k=1}^{n}k\left(n-k+1\right)\left|\beta_{n}\right|^{2}\leq\sum_{k=1}^{n} \frac{n-k+1}{k}\text{ }\forall\text{ }n\geq 1 \tag{3.19}\] and equality will be achieved if \(f\) has the form \(z/\left(1-e^{i\theta}z\right)^{2}\) for some \(\theta\in\mathbb{R}.\) In its most comprehensive version, this inequality offers the famous Bieber-bach-Robertson-Milin conjectures regarding Taylor coefficients of \(f\in\mathcal{S}.\) We refer to [5, 16, 17] for further details on the proof of De Branges finding. By considering the logarithmic coefficients, Kayumov [22] was able to prove Brennan's conjecture for conformal mappings in 2005. For your reference, we mention a few works that have made major contributions to the research of the logarithmic coefficients. Andreev and Duren [4], Alimohammadi et al. [3], Deng [11], Roth [38], Ye [46], Obradovic et al. [33], and finally the work of Girela [18] are the major contributions to the study of logarithmic coefficients for different subclasses of holomorphic univalent functions. As stated in the definition, it is simple to determine that for \(f\in\mathcal{S},\) the logarithmic coefficients are computed by \[\beta_{1}=\frac{1}{2}a_{2}, \tag{3.20}\] \[\beta_{2}=\frac{1}{2}\left(a_{3}-\frac{1}{2}a_{2}^{2}\right), \tag{3.21}\] \[\beta_{3}=\frac{1}{2}\left(a_{4}-a_{2}a_{3}+\frac{1}{3}a_{2}^{3}\right), \tag{3.22}\] \[\beta_{4}=\frac{1}{2}\left(a_{5}-a_{2}a_{4}+a_{2}^{2}a_{3}-\frac{1}{2}a_{3}^{2 }-\frac{1}{4}a_{2}^{4}\right). \tag{3.23}\] **Theorem 3.5**.: _If \(f\in\mathcal{S}_{G}^{*}\) and has the series representation \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) then_ \[|\beta_{n}|\leq\frac{1}{4n}\ \left(n=1,2,3,4\right). \tag{3.24}\] _These bounds are sharp and can be obtained from the extremal functions \(f_{i}(z)\)\(i=1,2,3,4\) given by (3.10)-(3.13)._ Proof.: Let \(f\in\mathcal{S}_{G}^{*}.\) Then, putting (3.20), (3.21), (3.22) and (3.23) in (3.5), (3.6), (3.7) and (3.8), we get \[\beta_{1} = \frac{p_{1}}{8}, \tag{3.25}\] \[\beta_{2} = \frac{1}{192}\left(-7p_{1}^{2}+12p_{2}\right),\] (3.26) \[\beta_{3} = \frac{1}{1152}\left(17p_{1}^{3}-56p_{1}p_{2}+48p_{3}\right),\] (3.27) \[\beta_{4} = -\frac{1}{92160}\left(649p_{1}^{4}+1680p_{2}^{2}+3360p_{1}p_{3}- 3060p_{1}^{2}p_{2}-2280p_{4}\right). \tag{3.28}\] For \(\beta_{1}\), using (2.3) in(3.25), we obtain \[|\beta_{1}|\leq\frac{1}{4}.\] For \(\beta_{2}\), putting (2.5) in(3.26), we get \[|\beta_{2}|\leq\frac{1}{8}.\] For \(\beta_{3}\), we can rewrite (3.27) as \[|\beta_{3}|=\frac{1}{24}\left|p_{3}-\frac{7}{6}p_{1}p_{2}+\frac{17}{48}p_{1}^{ 3}\right|,\] using (2.4) where \(Q=\frac{7}{12}\) and \(R=\frac{17}{48}\), we obtain \[|\beta_{3}|\leq\frac{1}{12}.\] For \(\beta_{4}\), we can rewrite (3.28) as \[\beta_{4}=-\frac{1}{32}\left(\frac{649}{2880}p_{1}^{4}+\frac{7}{12}p_{2}^{2}+ \frac{7}{6}p_{1}p_{3}-\frac{17}{16}p_{1}^{2}p_{2}-p_{4}\right). \tag{3.29}\] Comparing the right side of (3.29) with inequality (2.7), where \(\rho=\frac{649}{2880}\), \(\varsigma=\frac{7}{12}\), \(\tau=\frac{7}{12}\) and \(\psi=\frac{17}{24}.\) It follows that \[8\varsigma\left(1-\varsigma\right)\left[\left(\tau\psi-2\rho\right)^{2}+\left( \tau\left(\varsigma+\tau\right)-\psi\right)^{2}\right]+\tau\left(1-\tau\right) \left(\psi-2\varsigma\tau\right)^{2}=0.00442226\] and \[4\varsigma\tau^{2}\left(1-\tau\right)^{2}\left(1-\varsigma\right)=0.057435.\] Using (2.6) we deduce that \[|\beta_{4}|\leq\frac{1}{16}.\] ### Inverse Coefficients It is well-known that the function \[f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\in\mathcal{S}\] has an inverse \(f^{-1}\), which is analytic in \(\left|w\right|<1/4\), as we know from Koebe's \(1/4\)-theorem. If \(f\in\mathcal{S}\), then \[f^{-1}(w)=w+A_{2}w^{2}+A_{3}w^{3}+\cdots,\;\left|w\right|<1/4. \tag{3.30}\] Lowner [29] proved that, if \(f\in\mathcal{S}\) and its inverse is given by (3.30), then the sharp estimate \[\left|A_{n}\right|\leq\frac{\left(2n\right)!}{n!\left(n+1\right)!} \tag{3.31}\] holds. It has been shown that the inverse of the Koebe function \(k(z)=z/\left(1-z\right)^{2}\) provides the best bounds for all \(\left|A_{n}\right|\;(n=2,3,\ldots)\) in (3.31) over all members of \(\mathcal{S}\). There has been a good deal of interest in determining the behavior of the inverse coefficients of \(f\) given in (3.30) when the corresponding function \(f\) is restricted to some proper geometric subclasses of \(\mathcal{S}.\) Alternate proofs of the inequality (3.31) have been given by several authors but a simpler proof was given by Yang [45]. Since \(f\left(f^{-1}(w)\right)=w\), using (3.30) it's very clear to see \[A_{2}= -a_{2}, \tag{3.32}\] \[A_{3}= 2a_{2}^{2}-a_{3},\] \[A_{4}= -5a_{2}^{3}+5a_{2}a_{3}-a_{4}.\] **Theorem 3.6**.: _If \(f\in\mathcal{S}_{G}^{*}\) and has the series representation \(f^{-1}(w)=w+A_{2}w^{2}+A_{3}w^{3}+\cdots\) then_ \[\left|A_{2}\right|\leq\frac{1}{2},\] \[\left|A_{3}\right|\leq\frac{5}{12}, \tag{3.33}\] \[\left|A_{4}\right|\leq\frac{31}{72}.\] _These bounds are sharp, except \(A_{4}.\)_ Proof.: Let \(f\in\mathcal{S}_{G}^{*}\) and \(f^{-1}(w)=w+A_{2}w^{2}+A_{3}w^{3}+\cdots.\) If equations (3.5), (3.6) and (3.7) are written in (3.32), we obtain \[A_{2}=-\frac{p_{1}}{4}, \tag{3.34}\] \[A_{3}=\frac{p_{1}^{2}}{6}-\frac{p_{2}}{8},\] (3.35) \[A_{4}=-\frac{1}{576}\left(48p_{3}-128p_{1}p_{2}+83p_{1}^{3} \right). \tag{3.36}\] Because of \(\left|p_{1}\right|\leq 2\), the result for \(\left|A_{2}\right|\leq\frac{1}{2}\) is trivial. We now estimate \(\left|A_{3}\right|\) by using (2.5) in equality (3.35). Thus, we have \[\left|A_{3}\right|\leq\frac{1}{8}\left|p_{2}-\frac{4}{3}p_{1}^{2}\right|\leq \frac{5}{12}.\] From the relations Lemma 2.1 and (3.36), and by some simple calculations, we have \[K=-\frac{4-\sigma^{2}}{24}\left(\frac{31\sigma^{3}}{24\left(4-\sigma^{2}\right)} -\frac{5\sigma}{3}x-\frac{\sigma}{2}x^{2}+\left(1-\left|x\right|^{2}\right)y \right),\] where \(\sigma=p_{1}\in\left[0,2\right],\)\(\left|x\right|\leq 1\) and \(\left|y\right|\leq 1.\) We now investigate upper bound of the \(\left|K\right|\) according to \(\sigma.\) A. If \(\sigma=0,\) then \(T=-\frac{1}{6}\left(1-\left|x\right|^{2}\right)y\) and so we have \(\left|T\right|\leq\frac{1}{6}.\) B. Let \(\sigma=2.\) Then \(T=-\frac{31}{72}\) and so we have \(\left|T\right|\leq\frac{31}{72}.\) C. We now assume that \(\sigma\in\left(0,2\right).\) Then, we can write \[\begin{split}\left|K\right|=&\frac{4-\sigma^{2}}{24 }\left|\frac{31\sigma^{3}}{24\left(4-\sigma^{2}\right)}-\frac{5\sigma}{3}x- \frac{\sigma}{2}x^{2}+\left(1-\left|x\right|^{2}\right)y\right|\\ \leq&\frac{4-\sigma^{2}}{24}\left(\left|\widetilde{ A}+\widetilde{B}x+\widetilde{C}x^{2}\right|+1-\left|x\right|^{2}\right),\end{split} \tag{3.37}\] where \[\widetilde{A}=\frac{31\sigma^{3}}{24\left(4-\sigma^{2}\right)},\text{ } \widetilde{B}=-\frac{5\sigma}{3}\text{ \ and \ }\widetilde{C}=-\frac{\sigma}{2}.\] For the rest of the proof, we use Lemma 2.4. Then \[\widetilde{A}\widetilde{C}=-\frac{31\sigma^{4}}{48\left(4-\sigma^{2}\right)}<0.\] C1. Note that the inequality \[-4\widetilde{A}\widetilde{C}\left(\widetilde{C}^{-2}-1\right)-\widetilde{B}^{ 2}=\frac{31\sigma^{4}}{12\left(4-\sigma^{2}\right)}\left(\frac{4}{\sigma^{2} }-1\right)-\frac{25\sigma^{2}}{9}\leq 0\] which evidently holds for \(s\in\left(-\infty,\infty\right).\) Moreover, the inequality \(\left|\widetilde{B}\right|<2\left(1-\left|\widetilde{C}\right|\right)\) is equivalent to \(\sigma<\frac{3}{4},\) which is true for \(\sigma\in\left(0,\frac{3}{4}\right).\) Then, by (3.37) and Lemma 2.4, \[\begin{split}\left|K\right|\leq&\frac{4-\sigma^{2}} {24}\left(1-\left|\widetilde{A}\right|+\frac{\widetilde{B}^{2}}{4\left(1- \left|\widetilde{C}\right|\right)}\right)\\ =&\frac{7\sigma^{3}+128\sigma^{2}+288}{1728}\leq \frac{2581}{12288}\approx 0.210042\end{split} \tag{3.38}\] for \(\sigma\in\left(0,\frac{3}{4}\right).\) C2. Since \[4\left(1+\left|\widetilde{C}\right|\right)^{2}=\sigma^{2}+4\sigma+4,\text{ \ \ }-4 \widetilde{A}\widetilde{C}\left(\widetilde{C}^{-2}-1\right)=\frac{31\sigma^{2} }{12},\] for \(\sigma\in\left[\frac{3}{4},2\right),\) we see that the inequality \[\frac{25\sigma^{2}}{9}=\widetilde{B}^{2}<\min\left\{4\left(1+\left|\widetilde{ C}\right|\right)^{2},\text{ }-4\widetilde{A}\widetilde{C}\left(\widetilde{C}^{-2}-1\right)\right\}=\frac{3 1\sigma^{2}}{12}\] is equivalent to \(\frac{7\sigma^{2}}{36}<0\), which is false for \(\sigma\in\left[\frac{3}{4},2\right).\) C3. Observe that the inequality \[\left|\widetilde{C}\right|\left(\left|\widetilde{B}\right|+4\left|\widetilde{A} \right|\right)-\left|\widetilde{A}\widetilde{B}\right|=\frac{\sigma^{2}\left(2 9\sigma^{2}-240\right)}{72\left(\sigma^{2}-4\right)}\leq 0\] is false for \(\sigma\in\left[\frac{3}{4},2\right).\) C4. Note that the inequality \[\left|\widetilde{A}\widetilde{B}\right|-\left|\widetilde{C}\right|\left( \left|\widetilde{B}\right|-4\left|\widetilde{A}\right|\right)=\frac{\sigma^{2 }\left(401\sigma^{2}-240\right)}{72\left(4-\sigma^{2}\right)}\leq 0\] is true \(\sigma\in\left[\frac{3}{4},4\sqrt{\frac{15}{401}}\right].\) Then, by (3.37) and Lemma 2.4, \[\begin{split}|K|\leq&\frac{4-\sigma^{2}}{24}\left(- \left|\widetilde{A}\right|+\left|\widetilde{B}\right|+\left|\widetilde{C} \right|\right)\\ =&\frac{1}{576}\left(208\sigma-83\sigma^{3}\right) \leq\frac{3968}{1203}\sqrt{\frac{5}{1203}}\approx 0.212646\end{split} \tag{3.39}\] for \(\sigma\in\left[\frac{3}{4},4\sqrt{\frac{15}{401}}\right].\) C5. It remains to consider the last case in Lemma 2.4, for \(\sigma\in\left(4\sqrt{\frac{15}{401}},2\right).\) Then, by (3.37) \[\begin{split}|K|\leq&\frac{4-\sigma^{2}}{24}\left( \left|\widetilde{A}\right|+\left|\widetilde{C}\right|\right)\sqrt{1-\frac{ \widetilde{B}^{2}}{4\widetilde{A}\widetilde{C}}}\\ =&\frac{\sqrt{400-7\sigma^{2}}\left(19\sigma^{2}+4 8\right)}{576\sqrt{93}}\leq\frac{31}{72}\approx 0.430556\end{split} \tag{3.40}\] is true for \(\sigma\in\left(4\sqrt{\frac{15}{401}},2\right).\) In addition, as can be seen from the figure below, the function \(H_{1}(\sigma)=\frac{\sqrt{400-7\sigma^{2}}\left(19\sigma^{2}+48\right)}{576 \sqrt{93}}\) is increasing in \(\left(4\sqrt{\frac{15}{401}},2\right)\) and takes its maximum value at the limit. From A-C, we have \(|A_{3}|\leq\frac{31}{72}.\)
2308.14927
Cluster Cosmology Redux: A Compact Model of the Halo Mass Function
Massive halos hosting groups and clusters of galaxies imprint coherent, arcminute-scale features across the spectrophotometric sky, especially optical-IR clusters of galaxies, distortions in the sub-mm CMB, and extended sources of X-ray emission. Statistical modeling of such features often rely upon the evolving space-time density of dark matter halos -- the halo mass function (HMF) -- as a common theoretical ground for cosmological, astrophysical and fundamental physics studies. We propose a compact (eight parameter) representation of the HMF with readily interpretable parameters that stem from polynomial expansions, first in terms of log-mass, then expanding those coefficients similarly in redshift. We demonstrate good ($\sim \! 5\%$) agreement of this form, referred to as the dual-quadratic (DQ-HMF), with Mira-Titan N-body emulator estimates for halo masses above $10^{13.7} h^{-1} {\rm M}_\odot$ over the redshift range $0.1 < z < 1.5$, present best-fit parameters for a Planck 2018 cosmology, and present parameter variation in the $\sigma_8 - \Omega_{\rm m}$ plane. Convolving with a minimal mass-observable relation (MOR) yields closed-form expressions for counts, mean mass, and mass variance of cluster samples characterized by some observable property. Performing information-matrix forecasts of potential parameter constraints from existing and future surveys under different levels of systematic uncertainties, we demonstrate the potential for percent-level constraints on model parameters by an LSST-like optical cluster survey of 300,000 clusters and a richness-mass variance of $0.3^2$. Even better constraints could potentially be achieved by a survey with one-tenth the sample size but with a reduced selection property variance of $0.1^2$. Potential benefits and extensions to the basic MOR parameterization are discussed.
Cameron E. Norton, Fred C. Adams, August E. Evrard
2023-08-28T22:58:33Z
http://arxiv.org/abs/2308.14927v1
# Cluster Cosmology Redux: A Compact Representation for the Halo Mass Function ###### Abstract Massive halos hosting groups and clusters of galaxies imprint coherent, arcminute-scale features across the spectrophotometric sky, especially optical-IR clusters of galaxies, distortions in the sub-mm CMB, and extended sources of X-ray emission. Statistical modeling of such features often rely upon the evolving space-time density of dark matter halos - the halo mass function (HMF) - as a common theoretical ground for cosmological, astrophysical and fundamental physics studies. We propose a compact (eight parameter) representation of the HMF with readily interpretable parameters that stem from polynomial expansions, first in terms of log-mass, then expanding those coefficients similarly in redshift. We demonstrate good (\(\sim\) 5%) agreement of this form, referred to as the dual-quadratic (DQ-HMF), with Mira-Titan N-body emulator estimates for halo masses above \(10^{13.7}\,h^{-1}\,\mathrm{M}_{\odot}\) over the redshift range \(0.1<z<1.5\), present best-fit parameters for a Planck 2018 cosmology, and present parameter variation in the \(\sigma_{8}-\Omega_{\mathrm{m}}\) plane. Convolving with a minimal mass-observable relation (MOR) yields closed-form expressions for counts, mean mass, and mass variance of cluster samples characterized by some observable property. Performing information-matrix forecasts of potential parameter constraints from existing and future surveys under different levels of systematic uncertainties, we demonstrate the potential for percent-level constraints on model parameters by an LSST-like optical cluster survey of 300,000 clusters and a richness-mass variance of \(0.3^{2}\). Even better constraints could potentially be achieved by a survey with one-tenth the sample size but with a reduced selection property variance of \(0.1^{2}\). Potential benefits and extensions to the basic MOR parameterization are discussed. ## 1 Introduction The evolving population of galaxy clusters on the sky is a cosmological diagnostic whose value has been recognized since the era when 4-m class telescopes opened the study of clusters at redshifts above 0.5 (Gunn et al., 1986; Peebles et al., 1989; Evrard, 1989). The massive halos that host groups and clusters of galaxies represent a rare event tail of hierarchical structure formation that is sensitive to both the growth rate of linear structure (White et al., 1993) and the nature of the initial fluctuation power spectrum (Dalal et al., 2008). Constraints on cosmological parameters forecast for deep and wide cluster samples two decades ago (e.g., Haiman et al., 2001; Holder et al., 2001; Battye & Weller, 2003) are now emerging from cluster selection methods based on features observed in optical-IR surveys (Gladders et al., 2007; Rozo et al., 2010; Rykoff et al., 2014; Gonzalez et al., 2019; Abdullah et al., 2020; Abbott et al., 2020; Miyatake et al., 2021; Aguena et al., 2021; Wen & Han, 2022; Maturi et al., 2023), thermal Sunyaev-Zel'dovich (SZ) effect on the cosmic microwave background (Sehgal et al., 2011; de Haan et al., 2016; Planck Collaboration et al., 2016; Bocquet et al., 2019) and extended X-ray emission (Bohringer et al., 2007; Vikhlinin et al., 2009; Mehtrens et al., 2012; Pierre et al., 2016; Pacaud et al., 2018; Ider Chitham et al., 2020; Chiu et al., 2023). Local cluster counts are sensitive to the current linear power spectrum amplitude, \(\sigma_{8}\), and matter density parameter, \(\Omega_{\mathrm{m}}\), particularly through the combination, \(S_{8}\equiv\sigma_{8}(\Omega_{\mathrm{m}}/0.3)^{0.5}\)(Allen et al., 2011). Cosmological constraints from the aforementioned studies are sometimes inconsistent. Dark Energy Survey Year One (DES-Y1) analysis, based on counts and mean lensing masses in four richness and three redshift bins with a total sample size of 6500 clusters, find \(S_{8}=0.65\pm 0.04\), significantly (\(4\sigma\)) below the \(0.830\pm 0.013\) value from Planck 2018 CMB analysis (Planck Collaboration et al., 2020). In contrast KIDS-DR3 cluster population analysis (Lesci et al., 2022), based on a data vector similar to that of DES-Y1 derived from an optical sample of nearly 3700 clusters, yields \(S_{8}=0.78\pm 0.04\), \(1\sigma\) consistent with the Planck CMB value. Within a given cosmology, formulating expectations for cluster counts and aggregate lensing masses of samples selected on some observable property is challenged by several sources of systematic uncertainty. The physical extent of massive halos and their preference to form in large-scale overdense regions of the cosmic web creates source confusion; the virial regions of \(M>3\times 10^{13}\,\mathrm{M}_{\odot}\) halos hosting groups and clusters of galaxies cover one-third of the \(\Lambda\)CDM sky within \(z<1.5\)(Voit et al., 2001). Projection tends to boost intrinsic properties (_e.g._, White et al., 2002; Cohn et al., 2007; Costanzi et al., 2019), but the fact that the effects of projected structure on optical, X-ray, and SZ measurements will generally differ reinforces the value of multi-wavelength cluster sample analysis. The statistical relationship between the bulk observable properties of a halo, such as its X-ray temperature, gas or stellar mass, or galaxy richness1, and its true total mass is another source of uncertainty (_e.g._, Salvati et al.2020; Wu et al.2021). This relationship connects the sky+redshift-space abundance of a property-selected cluster population to the space-time density of massive halos. The differential form of the latter point density, known as halo mass function (HMF), is now well characterized in the space of standard \(\Lambda\)CDM cosmological parameters by large N-body simulation campaigns (_e.g._, Bocquet et al.2020, and references therein). Footnote 1: Background-subtracted count, often of red galaxies, within a characteristic radius, (_e.g._, Rozo et al.2009). A convolution of the HMF with the mass-observable relation (MOR) is the basis of survey statistical expectations, and a power-law mean with log-normal variance is a canonical MOR form motivated by cosmological hydrodynamics simulations (_e.g._, Bryan and Norman1998; Angulo et al.2012; Farahi et al.2018; Anbajagane et al.2020, and references therein). Over a wide dynamic range in mass, a single power law mean may be insufficient, especially for hot gas properties (Farahi et al.2018; Pop et al.2022), and the variance may also be mass-dependent (Anbajagane et al.2020). Extensions to accommodate such behavior are discussed in SS5.5. The convolution naturally joins cluster astrophysics, encapsulated by the MOR, to the (primarily) cosmology-driven HMF, and the parameter couplings of these spaces have been explored previously (Evrard et al.2014, hereafter, E14). The model we present here extends previous work by letting the HMF shape parameters vary continuously with redshift. Essentially, E14 introduced approximate HMF forms at a fixed epoch in order to develop expressions for conditional statistics of samples selected by an observable property. This present work develops a continuous space-time representation of the differential space density at high halo masses with the goal of constructing a compact, interpretable form of the HMF. In this paper, we first show that a simple, eight parameter representation captures the near-field (\(z<1.5\)) group/cluster HMF derived by the Mira-Titan universe ensemble (Bocquet et al.2020). Coupled with a log-normally distributed MOR, we derive closed-form expressions for both the evolving space density and the log-mean selected mass of the group/cluster population as a function of the selection property and redshift. With those ingredients, we then perform an information matrix (IM)2 analysis to explore the ability of current and future cluster surveys -- specifically, those based on counts and mean gravitational lensing mass as a function of a chosen selection property and redshift -- to constrain the parameters of this HMF form. The forecasts require information on the expected uncertainty in lensing mass measurements as well as the uncertainty in the MOR variance, and we consider both current estimates and future advances in our projections. Footnote 2: We omit using the proper name associated with this method due to that person’s embrace of eugenics principles. Why consider cluster cosmology as an HMF-centric exercise? The first reason is that the evolving shape of the HMF is interesting in its twin right, as it contains information on both vCDM parameters and other cosmological physics, including light neutrino masses (Marulli et al.2011; Costanzi et al.2013; Hagstotz et al.2019; Hernandez-Aguayo et al.2022; Adamek et al.2022), modified gravity models (Schmidt2009; Zhao et al.2011; Cataneo et al.2016; Arnold et al.2019; Hagstotz et al.2019; Mitchell et al.2021), and non-Gaussian initial fluctuations (Matarrese et al.2000; Sefusatti et al.2007; Grossi et al.2009; Pillepich et al.2010; LoVerde and Smith2011; Harrison and Coles2011; Jung et al.2023; Coulton et al.2023). The astrophysics of galaxy formation affects the HMF shape in non-trivial ways that continue to be studied by cosmological hydrodynamics simulations (Stanek et al.2009; Cui et al.2012, 2014; Martizzi et al.2014; Cusworth et al.2014; Castro et al.2021; Schaye et al.2023). Another reason is that HMF-centric analyses can exploit increasingly tight constraints on the differential comoving volume element, \(dV/dz\), from baryon acoustic oscillations (BAO, Alam et al.2021; Abbott et al.2022) and Type Ia supernovae (SN, Guy et al.2010; Abbott et al.2019; Brout et al.2022; Mitra et al.2023). In current methods of cluster survey analysis, \(dV/dz\) is left free to vary in the space of \(\Lambda\)CDM parameters. From a practical perspective, a compact representation for the HMF at galaxy cluster scales can serve as a consistency check among cluster samples selected at different wavelengths. As the common ground that underlies all cluster samples, the inferred HMF needs to be consistent across surveys and independent of the chosen sample selection property. An important feature of our model is that it naturally incorporates multiple, intrinsically correlated physical properties (Mulroy et al.2019; Farahi et al.2019). We are far from the first to emphasize the HMF shape. The original study of Bahcall and Cen (1993) used counts of nearby clusters selected by optical and X-ray properties to directly estimate the HMF of the low-redshift universe. That work benefited from the insensitivity of nearby volume to cosmological mean density parameters. Subsequent studies derived HMF estimates from X-ray samples (Reiprich and Bohringer2002; Bohringer et al.2017) or optical cluster samples using galaxy richness (Bahcall et al.2003) or velocity dispersion (Pisani et al.2003; Rines et al.2007, 2008) as a proxy for mass. The statistical power of these samples was limited by their moderate sample sizes, typically several hundred systems. Compact HMF representations already exist, but historically they have been expressed in terms of a similarity variable, \(\sigma^{2}(M)\), the rms amplitude of linear perturbations smoothed on a Lagrangian scale \(R\propto M^{1/3}\)(Press and Schechter1974). An assumption about cosmology is required to convert these forms to a function of mass. The Sheth-Tormen (ST) form (Sheth et al.2001) is a popular example, and constraints on the parameters of this model have been published from analysis of magnified images of sub-mm galaxies (Cueli et al.2022) and from counts of GAMA groups and clusters (Driver et al.2022). Because the ST model represents a non-linear function of the similarity variable rather than mass directly, its parameters are difficult to interpret. By expressing the HMF directly in terms of halo mass and redshift, the eight free parameters of our model (see Table 1 below) have clear interpretations as coefficients of polynomial expansions. The aims of this paper are twofold. We first demonstrate the model's ability to reproduce LCDM sky counts from the Mira-Titan emulator3 in the space of fluctuation amplitude, \(\sigma_{8}\), and matter density parameter, \(\Omega_{\rm m}\)(Bocquet et al.2020), and provide Planck 2018 model parameters. We then apply an information matrix approach to estimate potential constraints on DQ-HMF parameters from idealized cluster surveys patterned after existing (DES-Y1, Abbott et al.2020) and future (LSST, Chisari et al.2019) galaxy cluster surveys. The parameter forecasts employ cluster counts and mean weak lensing masses, each derived within finite ranges of selection property and redshift, along with an additional input on the degree of scatter in log-mass at fixed value of the selection property. In SS2 we detail the model's structure, demonstrate its utility at capturing emulator predictions in the space of \(\{\sigma_{8},\Omega_{\rm m}\}\). Expressions for counts and mean mass as a function of an observable property are presented in SS3, and the IM elements we employ for survey analysis are also defined there. Forecasts of parameter constraints from existing and planned cluster surveys are presented in SS4. In SS5 we discuss ideas for implementing the model and review how massive halos tie to many non-Gaussian LSS signatures. Benefits of selecting a sub-sample with reduced property variance are made explicit in SS5.3. An appendix offers a three-parameter toy model that helps illustrate the key role of MOR variance. We employ a cosmology with matter density \(\Omega_{m}=0.311\), baryon density \(\Omega_{b}=0.0489\), Hubble constant \(H_{0}=6.77\,{\rm km\ s^{-1}\ Mpc^{-1}}\), primordial spectral index \(n_{s}=0.967\), and power spectrum normalization \(\sigma_{8}=0.810\), values derived by Planck2018 CMB+BAO analysis. Our measure of halo mass is \(M_{200c}\), the mass defined by a mean interior spherical overdensity of 200 times the critical density, \(\rho_{c}(z)\), and we express this mass in units of \(\,h^{-1}\,{\rm M_{\odot}}\), where \(h=H_{0}/100\,{\rm km\ s^{-1}\ Mpc^{-1}}\). Our spatial density unit for the HMF is \(h^{3}\,{\rm Mpc^{-3}}\). The IM analysis is patterned after optical surveys samples but is generalizable to samples selected by other properties. Relative to X-ray and SZ selection, optical samples have the benefit of distance estimation from spectroscopic or photometric redshifts. We ignore distance uncertainties, as the redshift bins we employ are much wider than typical uncertainties (Rykoff et al., 2014, 2016; Maturi et al., 2023). ## 2 Methods A key component of \(\Lambda\)CDM structure formation is an initially Gaussian random density field whose amplitude grows due to gravity. A spherical collapse model (Gunn & Gott, 1972) argues for a linearly evolved perturbation amplitude threshold at which halos form. Combining these elements, the HMF was originally derived by Press & Schechter (1974) as a derivative with respect to scale of the fraction of mass in the universe that satisfies the collapse condition. At the highest masses, for which only extreme peaks in the density field can have collapsed, this fraction is an error function with large argument, and its derivative leads to a steeply falling HMF with mass. At lower masses, where more modest-sized perturbations can collapse, the HMF transitions to nearly a power-law form. The model presented in SS2.1 represents the high mass portion by the tail of a Gaussian in log-mass, meaning the log of the HMF scales as a negative quadratic with mass. These three coefficients are themselves expanded as polynomials with redshift. While E14 included a cubic log-HMF representation, we defer that approach to future work as the quadratic form captures much of the information available in cluster counts, as we show in SS 2.2 below. HMF parameter values in the space of \(\{\sigma_{8},\Omega_{\rm m}\}\) are presented in SS2.3. ### A compact form for the cluster-scale HMF The HMF describes the comoving spatial number density of halos as a function of mass and redshift. Considering a small volume, \(dV\), at some redshift, \(z\), the probability that the center of a halo of mass, \(M\), lies within that volume defines the differential HMF \[dp\equiv\left[\frac{dn(M,z)}{d{\rm ln}M}\right]\,d{\rm ln}M\,dV. \tag{1}\] The convention of number density per logarithmic unit of mass used above implies that the HMF has dimension of inverse volume per logarithmic unit of mass. We express the HMF amplitude in units of \(h^{3}\,{\rm Mpc^{-3}}\). We introduce an eight parameter model that employs low-order polynomial forms in log-mass and redshift. Letting \(\mu\equiv{\rm ln}(M/M_{p})\), where \(M_{p}\) is a characteristic (pivot) mass scale, we begin with the E14 quadratic form for the log of the HMF, \[{\rm ln}\left[\frac{dn(\mu,z)}{d\mu}\right]=-\sum_{i=0}^{2}\,\frac{1}{i!}\, \,\beta_{i}(z)\,\,\mu^{i}. \tag{2}\] The characteristic mass is essentially the pivot scale of a quadratic expansion of the log HMF, with \(\beta_{0}(z)\) is the normalization, \(\beta_{1}(z)\) the local slope, and \(\beta_{2}(z)\) the curvature of \({\rm ln}[dn/d\mu]\). We choose a pivot mass of \(10^{14.3}\,h^{-1}\,{\rm M_{\odot}}\) and apply this form for \(M\geq 10^{13.7}\,h^{-1}\,{\rm M_{\odot}}\). Below this mass scale the HMF transitions to a pure power-law form (_e.g._, Sheth et al., 2001, and references therein). The explicit negative sign on the RHS of equation (2) is used so that the \(\beta_{i}(z)\) parameters take on positive values. We choose this form over an explicit Gaussian representation because the latter would imply a global representation over a very wide mass range. Instead, we are operating on a relatively narrow mass range, roughly 1.5 decades wide, out on the Gaussian's tail, so a description using canonical location and width of a normal distribution is not as useful or meaningful. The E14 analysis used only \(\beta_{i}\) values defined at a few specific redshifts. We extend that work by allowing the first pair of coefficients to run as quadratic functions of \((1+z)\), \[\beta_{i}(z)=\beta_{i,n}+\beta_{i,z}\,(z-z_{p})+\frac{1}{2}\beta_{i,z2}\,(z-z _{p})^{2}\,\,\,;\,\,i\in\{0,1\}, \tag{3}\] where \(z_{p}\) is a pivot redshift which we take to be 0.5. The mass curvature evolves linearly with redshift, \[\beta_{2}(z)=\beta_{2,n}+\beta_{2,z}\,(z-z_{p}). \tag{4}\] Hereafter, we refer to equations (2), (3) and (4) as the dual-quadratic (DQ-HMF) model. Table 2 summarizes the DQ-HMF parameters for the default \(\Lambda\)CDM Planck2018 cosmology. The following section describes how we obtained these values using the Mira-Titan emulator. \begin{table} \begin{tabular}{c l l} **Parameter** & **Definition** & **Value(s)** \\ \hline \(\beta_{i}(z)\) & HMF evolving shape in \(\mu_{i},i\in[0,2]\) & see Fig. 2 \\ \(\beta_{i,n}\) & normalization of \(\beta_{i}\) at \(z_{p}\) & \(\{12.32,2.26,0.75\}\) \\ \(\beta_{i,z}\) & redshift gradient of \(\beta_{i}\) at \(z_{p}\) & \(\{2.38,1.35,0.53\}\) \\ \(\beta_{i,z}\) & redshift curvature of \(\beta_{i}\) at \(z_{p}\) & \(\{1.39,0.45\}\) \\ \(M_{p}\) & pivot mass & \(10^{14.3}\,h^{-1}\,{\rm M_{\odot}}\) \\ \(z_{p}\) & pivot redshift & 0.5 \\ \(M_{\rm lim}\) & minimum fit mass & \(10^{13.7}\,h^{-1}\,{\rm M_{\odot}}\) \\ \end{tabular} \end{table} Table 1: Summary of DQ-HMF model parameters. The \(\beta_{i}(z)\) terms represent negatives of the normalization, slope and curvature of the log-space HMF at redshift \(z\) using \(\mu={\rm ln}(M/M_{p})\), equation (2), and units of \(\,h^{3}\,{\rm Mpc^{-3}}\). Rows two through four fits the eight core HMF parameters: elements of the polynomial redshift expansions of the \(\beta_{i}(z)\) terms around \(z_{p}\), equations (3) and (4). Values for these parameters are determined by fitting to \(\Lambda\)CDM Mira-Titan emulator predictions listed in Table 2. The final three rows list our choices of pivot locations in mass and redshift as well as the minimum mass scale of the Mira-Titan fits. ### Fitting to Mira-Titan expectations We evaluate the model using \(\Lambda\)CDM expectations based on the Mira-Titan emulator (Bocquet et al., 2020) using a process guided by expectations for the LSST survey sky area of \(18,000\) deg\({}^{2}\)(Ivezic et al., 2019). We use fourteen redshift bins ranging from \(z_{\rm min}=0.1\) to \(z_{\rm max}=1.5\), each of width \(\Delta z=0.1\). At the central redshift, \(z_{j}\), of each bin, we evaluate the Mira-Titan HMF and convert it to a differential number density function for an LSST-like sky area, \[\frac{dN(\mu,z_{j})}{d\mu}=\frac{dn(\mu,z_{j})}{d\mu}\,\Delta V_{j}, \tag{5}\] where \(\Delta V_{j}\) is the volume of an 18,000 sq degree survey between redshifts \(z_{j}-0.05\) and \(z_{j}+0.05\). We then integrate this form to obtain counts per bin in twenty \(\mu\)-bins between masses of \(10^{13.7}\,h^{-1}\,{\rm M}_{\odot}\) and \(10^{15}\,h^{-1}\,{\rm M}_{\odot}\). We assign a Poisson uncertainty to each bin, and obtain best-fit parameters by minimizing \(\chi^{2}\) across the combined set of 280 sampled count values. This approach emphasizes fitting at lower masses, the range that provides the majority of information in cosmological surveys.4 Footnote 4: Indeed, Wu et al. (2021) employ counts above a single threshold, rather than differential counts, in their forecasting of \(S_{8}\) error from future surveys. Figure 1 compares the eight-parameter DQ-HMF differential model counts to the Mira-Titan emulator values. For clarity, we show four redshifts selected from the full range used in the fit; other redshifts behave similarly. The differences between the DQ form and the emulator expectations are below 5% at masses \(<5\times 10^{14}\,h^{-1}\,{\rm M}_{\odot}\), increasing to tens of percent at the highest masses. To contextualize the differences in the lower panel we first note that the fits are best at the lowest masses, where the information content is highest. In addition, the finite volumes of the Mira-Titan N-body ensemble yield an HMF uncertainty of \(\sim 10\%\) at \(10^{15}\,h^{-1}\,{\rm M}_{\odot}\)(Bocquet et al., 2020). Finally, the emulator is based on universes in which the clustered matter is purely collisionless matter (dark matter with an optional minority neutrino component). The gravitational back-reaction effects of baryons cycling through the process of compact object formation can drive HMF deviations larger than 5% over the mass and redshift range shown (Stanek et al., 2009; Cui et al., 2012, 2014; Martizzi et al., 2014; Cusworth et al., 2014; Castro et al., 2021; Schaye et al., 2023). Given these uncertainties, the DQ-HMF model can be considered a sufficient representation of the cluster population in the late universe. The model parameters resulting from the Mira-Titan emulator fits for a Planck 2018 cosmology are listed in Table 2. Points in Figure 2 show \(\beta_{i}(z)\) values determined by fitting the HMF at each sampled redshift, while lines show the redshift-continuous DQ fit with quadratic behavior of the HMF normalization and slope and linear behavior of the HMF curvature. Halos at the pivot mass scale become increasingly rare with increasing redshift -- the normalization varies by roughly a factor of 100 over the redshift range shown -- and the HMF shape at the pivot mass becomes both steeper and more strongly curved at earlier times. In mapping to observable properties, the value of the slope, \(\beta_{1}(z)\), is particularly important as it controls the amplitude of a convolution-induced bias (often referred to as Eddington bias) discussed below. The local slope at the pivot mass of \(10^{14.3}\,h^{-1}\,{\rm M}_{\odot}\) steepens from \(-2\) at \(z=0.2\) to \(-4\) at \(z=1.5\), implying that the magnitude of this bias will grow by a factor of two over this redshift range. In the IM analysis below we explore two idealized cases, one patterned after the existing DES-Y1 cluster sample, which covers roughly 5000 deg\({}^{2}\) over redshifts, \(0.2<z<0.65\) and another patterned after the wider, 18000 deg\({}^{2}\), and deeper LSST survey. Figure 3 shows the DQ model expectations for LSST survey counts of massive halos with \(M_{\rm 200c}\geq 10^{13.7}\,h^{-1}\,{\rm M}_{\odot}\) in 0.1-wide redshift bins. A quarter million such halos should lie in the range \(0.1<z<1.5\), with the population strongly peaked near our chosen pivot redshift of 0.5. ### Relating HMF shape to cosmological parameters The DQ-HMF shape is generic in \(\Lambda\)CDM cosmologies. Here we use the same Mira-Titan sky expectation fitting process to map how DQ-HMF parameters vary in the canonical cluster cosmology plane of \(\{\Omega_{\rm m},\ \sigma_{8}\}\). All other cosmological parameters are held constant in this exercise. \begin{table} \begin{tabular}{c|c} **Parameter** & **Value** \\ \hline \(\beta_{0,n}\) & 12.32 \\ \(\beta_{0,z}\) & 2.38 \\ \(\beta_{0,z2}\) & 1.39 \\ \(\beta_{1,n}\) & 2.26 \\ \(\beta_{1,z}\) & 1.35 \\ \(\beta_{1,z2}\) & 0.45 \\ \(\beta_{2,n}\) & 0.75 \\ \(\beta_{2,z}\) & 0.53 \\ \end{tabular} \end{table} Table 2: DQ-HMF parameters of the Mira-Titan \(\Lambda\)CDM model. The \(\beta_{0}\) normalization at \(z_{p}=0.5\) is equivalent to a space density of \(10^{5-8.85}\,{\rm Mpc}^{-3}\) for Hubble parameter \(h=0.677\). Figure 1: The upper panel shows DQ-HMF fits (solid) to the Mira-Titan emulator expectations (dashed) for counts of halos above \(10^{13.7}\,h^{-1}\,{\rm M}_{\odot}\) centered at redshifts shown in the legend over the Rubin-LSST area of \(18,000\) deg\({}^{2}\). A total of 280 sampled counts — 20 mass bins in each of fourteen redshift shells covering the interval \(0.1<z<1.5\) — are used to fit the eight parameters of the model (see Table 2); we show only a subset for clarity. Poisson uncertainties applied to each binned count yield a model that best fits lower masses. The lower panel displays the fractional deviation of the fits, \(N_{\rm DQ}/N_{\rm MiraTitan}-1\), with the grey band highlighting 5% agreement. Baryon effects associated with galaxy formation drive deviations at this level of larger (_e.g._, Castro et al., 2021), and the emulator itself is uncertain at the 10%-level at \(10^{15}\,h^{-1}\,{\rm M}_{\odot}\)(Bocquet et al., 2020). Figure 4 shows the resultant behavior of the HMF shape parameters, with the top row showing normalizations, \(\beta_{i,\mathbf{n}}\), at the pivot redshift, the middle the gradients with redshift, \(\beta_{i,z}\), and the bottom row the redshift curvature values for the HMF normalization and mass gradient, \(\beta_{0,z2}\) and \(\beta_{1,z2}\), respectively. (Recall that the HMF curvature evolves only linearly with redshift, meaning \(\beta_{2,z2}=0\).) Since the amplitude at the pivot redshift, \(\beta_{0,\mathbf{n}}\), is the primary controller of counts, it is not surprising that its contours tend to follow loci of \(\sigma_{8}\sqrt{\Omega_{\rm m}}\simeq\) const. in the top-left panel. The negative of the HMF log-mass slope at \(z_{P}\) (top middle panel) is sensitive only to \(\sigma_{8}\), reducing from 2.8 to 1.9 as \(\sigma_{8}\) increases from 0.7 to 0.9. The negative of the mass curvature at the pivot redshift (upper right) behaves somewhat orthogonal to \(\beta_{0,\mathbf{n}}\), with values ranging from 0.6 to 0.9 over the range shown. The rate at which the HMF shape shifts over time depends is also dependent on cosmology. The middle row of Figure 4 shows redshift gradients of the mass expansion terms. The gradient of the normalization, \(\beta_{0,z}\), is highly sensitive to \(\sigma_{8}\), scaling inversely from a low of 1.8 at \(\sigma_{8}=0.9\) to a high of 3.5 at \(\sigma_{8}=0.7\). The redshift evolution of the local HMF slope, \(\beta_{1,z}\), exhibits sensitivity similar to that the curvature normalization, \(\beta_{2,\mathbf{n}}\), with an amplitude variation of nearly a factor of two. The three terms of the highest order, \(\beta_{0,z2}\), \(\beta_{1,z2}\) and \(\beta_{2,z}\), display mildly non-linear behaviors in the space of \(\sigma_{8}\) and \(\Omega_{\rm m}\). Both the redshift gradient of the HMF curvature, \(\beta_{2,z}\), and the second redshift derivative of the HMF slope, \(\beta_{1,z2}\), primarily depend on \(\Omega_{\rm m}\), but the latter shifts behavior at high \(\sigma_{8}\). Not surprisingly, the highest-order parameters are anticipated to be the least well constrained in our IM analysis below. Note that the features displayed in Figure 4 emerge from a model employing a fixed pivot mass and redshift. Alternative choices, such as scaling the pivot mass with \(\Omega_{\rm m}\), would lead to slightly different outcomes. Given that Planck CMB+BAO analysis limits the matter density to within a few percent (\(\Omega_{\rm m}=0.3111\pm 0.0065\)) (Planck Collaboration et al., 2020), the shifts in the practical regions of these panels would be quite modest in size. #### 2.3.1 Massive neutrinos The Mira-Titan emulator allows for a non-zero neutrino mass. Using a total neutrino mass \(\sum m_{\nu}=0.5\) eV while keeping all other parameters constant, we find reductions in the HMF curvature and in several redshift gradients terms of order \(-0.05\). The largest shift of \(-0.1\) occurs in the rate of change of the HMF slope, \(\beta_{1,z}\). In terms of the effect on the linear growth rate of perturbations, this is roughly equivalent to reducing \(\Omega_{\rm m}\) by 0.02. Recent CMB lensing analysis by the ACT collaboration (Madhavacheril et al., 2023) limits the neutrino family total mass to 0.12 eV at 95% confidence, equivalent to \(\Omega_{\nu}h^{2}=0.001\). For this smaller neutrino mass, DQ-HMF parameters shift at the level of 0.01 or smaller. This level of error is potentially achievable in future surveys, albeit under optimistic conditions that would take many years to develop, as discussed in SS5. ## 3 Observable features of cluster samples Because the true 3D mass measure of the theoretical HMF is not directly observable, proxies that correlate with that mass measure are required. We use a minimal MOR based on a power-law relation with log-normal scatter, a common assumption of many survey analysis models (Rozo et al., 2010; Sehgal et al., 2011; de Haan et al., 2016; Planck Collaboration et al., 2016; Bocquet et al., 2019; Abdullah et al., 2020; Chiu et al., 2022; Lesci et al., 2022). Generically motivated by central limit theorem arguments (see, _e.g._, Adams & Fatuzzo, 1996, for an application to star formation), this form is also measured in total gas and stellar mass statistics of halos realized by cosmological hydrodynamics simulations (Farahi et al., 2018; Truong et al., 2018; Anbajagane et al., 2020). X-ray scaling relations (Pratt et al., 2009) and lensing analysis of CAMIRA clusters (Chiu et al., 2020) support this form empirically. Generalizations of this approach are discussed in SS5. The MOR model is described in SS3.1, followed by expressions for counts, mean mass and mass variance for samples selected by an observed property (SS3.2). The ingredients of our IM analysis are presented in SS3.3. The first three rows of Table 3 list the parameters needed to describe the MOR: a slope, normalization and variance. The last three rows introduce data quality measures used in the IM analysis. Figure 3: Anticipated LSST-area (18000 deg\({}^{2}\)) halo counts with masses, \(M_{200z}\geq 10^{13.7}\,h^{-1}\,\mathrm{M}_{\odot}\), in 0.1-wide redshift bins covering the range, \(0.1<z<0.5\). The total population of 225,000 peaks near the pivot redshift, \(z_{P}=0.5\). Figure 2: DQ-HMF model parameters, \(\beta_{i}\left(z\right)\), derived from fitting Mira-Titan sky count expectations in 0.1-wide redshift shells centered at the redshifts given by the points. A Planck 2018 \(\Lambda\)CDM cosmology is assumed. Lines show the fits to the redshift-continuous forms, equations (3) and (4), with parameter values given in Table 2. ### Mass-conditioned property likelihood Let \(S\) be the observable property used for sample selection, made dimensionless by the choice of a convenient reference unit, and let \(s\equiv\ln(S)\). The MOR kernel is assumed to be Normal, \[P(s|\mu)=\mathcal{N}(\overline{s}(\mu),\sigma^{2})=\frac{1}{\sqrt{2\pi}\sigma} \exp\left\{-\frac{[s-\overline{s}(\mu)]^{2}}{2\sigma^{2}}\right\}, \tag{6}\] where \(\sigma^{2}\) is the variance in \(s\) at fixed halo mass. The mean selection property scales as a power-law in mass, meaning linearly in log-space, \[\overline{s}(\mu)=\varpi+\alpha\,\mu. \tag{7}\] While carrying value as a mass proxy, \(s\) is not a perfect indicator of mass. We consider only cases where \(\alpha\neq 0\) and \(\sigma^{2}>0\). At fixed variance, steeper proxies are better at selecting mass (see equation (11) below). In most practical cases of bulk observable properties, such as galaxy count or velocity dispersion or X-ray gas mass, the simple maxim that "bigger is bigger" holds, and so we generally expect that \(\alpha>0\).5 Footnote 5: There may be potential exceptions to \(\alpha>0\) scaling, such as the total mass of cold phase gas within the halo. For survey forecasting purposes, we assign values to the MOR parameters given in the right column of Table 3. The normalization, \(\varpi=3.1\), is equivalent to an optical richness, \(\lambda=22\), at the pivot mass scale of \(10^{14.3}\,h^{-1}\,\mathrm{M}_{\odot}\), and we assume a slope of unity. Both values are consistent with the mass-richness relation of HSC clusters (Murata et al., 2019). Other studies have found somewhat different values (see Abdullah et al., 2022, and references therein) but we do not seek to resolve those differences here. The variance of Figure 4: Contours showing DQ-HMF parameter values in the \(\sigma_{\mathrm{S}}\) and \(\Omega_{\mathrm{m}}\) plane. Values are made positive by definition, equation (1), and all use the same pivot mass and redshift given in Table 1. The HMF normalization, \(\beta_{0,\mathrm{m}}\) (top left panel), follows the familiar negative slope traditionally derived from cluster counts. The mass gradient, \(\beta_{1,\mathrm{m}}\) (top middle), is mainly sensitive to \(\sigma_{\mathrm{S}}\) while the curvature, \(\beta_{2,\mathrm{m}}\) (top right), adds information somewhat orthogonal to that of \(\beta_{0,\mathrm{m}}\). The redshift gradient and curvature terms in the middle and lower rows display a range of behaviors, with the normalization terms sensitive only to \(\sigma_{\mathrm{S}}\) and the curvature’s redshift derivative is sensitive primarily to \(\Omega_{\mathrm{m}}\). The highest-order parameter, \(\beta_{1,\mathrm{2}}\), shows non-monotonic behavior within a narrow range of values. A \(20\times 20\) sampling grid was used in the domain shown. \begin{table} \begin{tabular}{c l l} **Parameter** & **Definition** & **Value(s)** \\ \hline \(\varpi\) & MOR normalization & 3.1 \\ \(\alpha\) & MOR slope & 1.0 \\ \(\sigma^{2}\) & MOR variance & (0.3)2 \\ \(\epsilon(M)\) & fractional error in mean mass & see Table 4 \\ \(\epsilon_{\mathrm{U}\mu\mathrm{}}\) & fractional error in mass variance & see Table 4 \\ \end{tabular} \end{table} Table 3: Non-HMF parameters. The first three rows describe the mass–observable relation (MOR), equations (6) and (7). The variance, \(\sigma^{2}\), is in the observed property variance at fixed mass; its inverse, the mass variance at fixed observed property, \(\sigma^{2}_{\mu}\), is given by equation (11). The two bottom rows are assumed fractional errors in mean mass and mass variance used in the IM analysis. \(0.3^{2}\) is consistent with estimates derived from X-ray observations of DES-Y1 clusters (Farahi et al., 2019). Although, in general, \(\varpi\), \(\alpha\), and \(\sigma^{2}\) could be functions of redshift, and the latter two also functions of mass, we consider them to be constants for the purpose of this work. In the analytic expressions below, one may simply replace these constants with appropriate functions. As this would introduce more degrees of freedom, and more sources of degeneracy, into the model, we defer such extensions to future work. Our focus here is to establish a baseline model for cluster sample statistics derived with the minimum of astrophysical complications. ### Counts, mean masses, and mass variance Motivated by DES-Y1 (Abbott et al., 2020) and similar analysis, we now consider two key observable quantities: i) the counts and; ii) mean masses of galaxy clusters disaggregated into bins of redshift and \(s\). Convolving the HMF, equation (2), with the MOR kernel, equation (6), results in an analytic form for the space density of clusters as a function of the selection property. While originally derived in E14, that paper did not write the form explicitly in terms of \(s\) but rather implicitly in terms of the mean selected mass (see equations (5), (10) and (11) of that work). The explicit expression is \[\begin{split}&\ln\left[\frac{dn(s,z)}{ds}\right]=\ln A-\beta_{0}(z) \\ &\quad-\frac{\beta_{2}(z)(s-\varpi)^{2}+2\alpha\beta_{1}(z)(s- \varpi)-\beta_{1}^{2}(z)\sigma^{2}}{2(\alpha^{2}+\beta_{2}(z)\sigma^{2})}, \end{split} \tag{8}\] where \[A=\frac{1}{\sqrt{\alpha^{2}+\beta_{2}(z)\sigma^{2}}}. \tag{9}\] The logarithmic space density is quadratic in the observable, as expected. The last term in the second row of the expression reflects the so-called Eddington bias in mean selected mass, a topic to which we now turn. Following E14, Bayes' theorem implies that the mass distribution of clusters selected at fixed observable property, \(s\), is log-normally distributed with mean \[\langle\mu|s,z\rangle=\frac{(s-\varpi)/\alpha-\beta_{1}(z)\sigma^{2}/\alpha^{ 2}}{1+\beta_{2}(z)\sigma^{2}/\alpha^{2}}. \tag{10}\] The first term in the numerator is simply the inverse of the mean MOR scaling. This value is lowered by the second term, which is approximately the HMF slope, \(\beta_{1}(z)\), times the mass variance. This is the same mathematics as Eddington bias, but the source of variance differs. Eddington's scatter arose from flux measurement errors, which can be reduced by better observations. The scatter we are dealing with here is intrinsic to the population, driven by stochastic processes within a coeval population of equal-mass halos, and so cannot be reduced by improved measurement. We suggest _convolution bias_ is a more appropriate label for this effect. Note that the log-mean mass is actually _linear_ in the log-observable, \(s\), rather than quadratic. This behavior arises from an exact cancellation in the quadratic terms of the Bayes' theorem derivation. The result has an important implication ; additional information must be added to our IM analysis in order to invert the information matrix. We take this extra constraint to be the mass variance conditioned on the observable, \(s\), which is related to the MOR variance by \[\sigma_{\mathbf{\mu}}^{2}(z)=\frac{\sigma^{2}}{\alpha^{2}+\beta_{2}(z)\sigma^{2}}. \tag{11}\] Values of \(\beta_{2}(z)\) are of order unity, so when the MOR scatter is small then the mass scatter can be approximated by the simpler expectation, \(\sigma_{\mathbf{\mu}}=\sigma/|\alpha|\). In our IM analysis below, we impose a fractional uncertainty, \(\epsilon_{\text{Var}\mathbf{\mu}}\), on empirical estimates of this mass variance. Rather than log-mean mass, what is directly measured via cumulative, or _stacked_, analysis of a cluster ensemble is a mean mass, with lensing mass derived from stacking weak lensing galaxy shear patterns (McClintock et al., 2019) or virial mass derived from an ensemble velocity likelihood (Farahi et al., 2016) being two viable methods. The log of this mean mass is shifted6 high by \(\sigma_{\mathbf{\mu}}^{2}/2\) from equation (10), leading to the result Footnote 6: For a log-normally distributed \(x\) having mean, \(\langle x\rangle\), and standard deviation, \(\sigma\), the mean of \(\sigma^{x}\) is \(\exp[\langle x\rangle+\sigma^{2}/2]\). \[\ln\langle M\mid s,z\rangle=\frac{(s-\varpi)/\alpha-(\beta_{1}(z)-1/2)\sigma^ {2}/\alpha^{2}}{1+\beta_{2}(z)\sigma^{2}/\alpha^{2}}. \tag{12}\] A systematic error floor on this quantity, \(\epsilon_{(\mathbf{M})}\), is employed in the following IM analysis. ### Information Matrix Analysis We use an information matrix approach to forecast DQ-HMF model parameter uncertainties anticipated from current and future cluster surveys. Such analysis, while necessarily idealized, is helpful in guiding intuition and exposing parameter degeneracies. The observable measures we consider are traditional elements (Payrene et al., 2023) of counts and mean system masses in the observable property (_e.g._,richness) and redshift bins as well as estimates of the mass variance at fixed property within each redshift bin. The last two rows of Table 3 list control parameters that characterize sample data quality for mean mass and mass variance measurements. #### 3.3.1 Counts and mean masses To derive counts, \(N_{s,z}\), within richness and redshift bins, we integrate the differential form, equation (8), in our chosen cosmology \[N_{s,z}=\int_{z_{\text{min}}}^{z_{\text{max}}}dz\ \frac{dV}{dz}\ \int_{z_{\text{min}}}^{z_{\text{max}}}ds\ \frac{dn(s,z)}{ds}. \tag{13}\] Here the \(\{s,z\}\) subscript denotes bins defined by the chosen limits of integration. Values for these limits are given in the relevant survey application sections of SS4. An exact form for the expected mean mass in each bin requires a volume-weighted integral of the exponential of equation (12). Motivated by the mean value theorem, we take a simpler approach by evaluating equation (12) at the median property value and midpoint redshift of each bin, \[\ln(M)_{s,z}=\ln\langle M\mid\text{med}[s],(z_{\text{min}}+z_{\text{max}})/2\rangle, \tag{14}\] where the median value of \(s\) is determined by integrating the counts in each bin. #### 3.3.2 Degrees of freedom We note that surveys with limited dynamic range in either redshift or selection property will be incapable of returning significant constraints on higher order terms of the DQ form. The cases examined in SS 4 are progressively more ambitious in terms of sample size and data quality. A survey limited to a single redshift shell centered on the pivot redshift, for example, returns no redshift gradient information, so the HMF parameters \(\beta_{i,z}\) and \(\beta_{i,z,2}\) are irrelevant, leaving only the three normalizations, \(\beta_{i,n}\). These parameters, joined with the three MOR parameters, make a total of six. At noted above, the forms of the observable counts, equation (13), and lensing mass measurements, equation (12), would return only _five_ independent quantities: three from the quadratic counts and two from the linear log-mean mass. A data vector consisting only of counts and mean lensing mass is thus insufficient to uniquely constrain all six model parameters. To produce an soluble system, we add an empirical constraint on \(\sigma_{\mu}^{2}\), equation (11). For optically-selected DES-Y1 clusters this quantity has been derived by Farahi et al. (2019) using X-ray temperatures of roughly 200 systems, finding \(\sigma_{\mu}=0.30\pm 0.04\) (stat) \(\pm 0.09\) (sys). This estimate motivates our use of \(0.3^{2}\) for the default MOR variance. As improved mass estimates from lensing and dynamics become available for larger numbers of clusters, the uncertainty on this constraint is bound to improve. #### 3.3.3 Information matrix We assume Poisson uncertainties in binned counts, a fractional error, \(\epsilon_{(M)}\), in each mean mass measurement and a fractional error, \(\epsilon_{\text{var}\mu\mu}\) in the mass variance measurement. Using \(\mathbf{p}\) to represent the set of model parameters, the information matrix for a single redshift bin takes the form \[\mathcal{T}_{ij,z}=\sum_{s}\left(\frac{1}{Ns_{s},z}\frac{\partial N _{s,z}}{\partial p_{i}}\frac{\partial N_{s,z}}{\partial p_{j}}+\frac{1}{ \epsilon_{(M)}^{2}}\frac{\partial\ln(M)_{s,z}}{\partial p_{i}}\frac{\partial \ln(M)_{s,z}}{\partial p_{j}}\right)\\ +\frac{1}{\epsilon_{\text{Var}\mu}^{2}}\frac{\partial\ln\sigma_{ \mu}^{2}}{\partial p_{i}}\frac{\partial\ln\sigma_{\mu}^{2}}{\partial p_{j}}. \tag{15}\] The first term assumes Poisson variance in the count within each observable property bin and the second assumes a constant fractional uncertainty of the mean mass measured in each bin. The final term accounts for uncertainty in the mass variance, which is taken to be property-independent but depends on redshift through the \(\beta_{2}(z)\) term in equation (11). We evaluate this term at the midpoint of the redshift bin under consideration. For the survey-specific expectations, the full information matrix is determined by a sum over all redshift bins \[\mathcal{T}_{ij} = \sum_{z}\,\mathcal{T}_{ij,z}. \tag{16}\] For the DES-Y1 case, the three redshift bins used in the Abbott et al. (2020) analysis, \(z\in[0.2,0.35)\), \([0.35,0.5)\) and \([0.5,0.65)\) are employed. For LSST, we use seven equally spaced redshift bins covering the interval \(0.1<z<1.5\). Appendals A provides explicit analysis of a reduced toy model based on a single redshift and property bin and having only three free parameters. This example helps illustrate the coupling of HMF and MOR parameters but the simplified scenario (two of the three MOR parameters are known) limits generalization of the results to the more complex survey applications. ## 4 HMF parameter forecasts We now explore potential parameter constraints from current and future optical cluster surveys under two conditions for the quality of derived mean mass per bin and mass variance. We refer to these conditions as Weak and Strong, with the latter having a factor two or better levels of uncertainty compared to the former. The choices of fractional errors in mean mass per bin, \(\epsilon_{(M)}\), and mass variance, \(\epsilon_{\text{var}\mu}\), are summarized in Table 4. The Weak choices for DES-Y1 of \((0.1,0.6)\) are based on current systematic uncertainties estimates (McClintock et al., 2019; Farahi et al., 2019), while the Strong assumption improves each by a factor of two. The LSST Weak quality are slightly improved over DES-Y1 Strong, and the LSST Strong values represent a further improvement of a factor four, to \(\epsilon_{(M)}=0.01\) and \(\epsilon_{\text{var}\mu}=0.05\). The latter constraints are certainly aspirational and will require substantial effort to achieve. For example, the fractional error in mass scatter, \(\Delta\sigma_{\mu}/\sigma_{\mu}=\epsilon_{\text{var}\mu}/2\), is only 2.5% in the LSST Strong case, an uncertainty of only 0.0075 on a central value of 0.300. For the LSST Weak case, the uncertainty in mass scatter would be a less stringent value of 0.015. In common with much IM analysis, the spirit of this work is to reveal best-case DQ-HMF+MOR parameter constraints from analysis of counts, mean mass and mass variance assuming no prior knowledge. Our example applications are tuned to optical cluster surveys with galaxy richness, \(\lambda\), as the observable selection property but the model can be generalized to searches at other wavelengths. For example, on the mass scales investigated here, hydrodynamic simulations suggest that hot gas mass has a smaller intrinsic variance, \(\sim 0.1^{2}\), at the pivot mass scale for redshifts, \(z<1\)(Farahi et al., 2018). Benefits of a sharper proxy are discussed in SS 5.3. ### Current survey application: DES-Y1 The DES-Y1 survey identified 6500 optical clusters with \(\lambda\geq 20\) lying at redshifts \(0.2<z<0.65\) within roughly 5000 square degrees of the southern sky using the redMaPP algorithm (Rykoff et al., 2016). For each of the three redshift bins, counts and mean weak lensing masses within four richness bins of \(\lambda\in[20,30)\), \([30,45)\), \([45,60)\) and \(\geq 60\) were determined by Abbott et al. (2020) and McClintock et al. (2019), respectively. The DES-Y1 cluster counts above a richness of 20 in the \([0.2,0.35)\), \([0.35,0.5)\) and \([0.5,0.65)\) redshift ranges were 1352, 2556 and 2596, respectively. Our reference model, which uses a different cosmology and MOR, yields a similar total count but with slightly different counts per redshift bin (1498, 2286, and 2752), a level of agreement acceptable for the purpose of illustration. We employ the same four richness bins as DES-Y1 in the IM analysis. Due to the limited redshift range in the DES-Y1 sample, we ignore the highest-order terms from the \(\beta_{i}\) redshift expansions, \begin{table} \begin{tabular}{c|c c|c c} & \multicolumn{2}{c|}{DES-Y1} & \multicolumn{2}{c}{LSST} \\ Level & \(\epsilon_{(M)}\) & \(\epsilon_{\text{var}\mu\mu}\) & \(\epsilon_{(M)}\) & \(\epsilon_{\text{var}\mu\mu}\) \\ \hline Weak & 0.10 & 0.60 & 0.04 & 0.20 \\ Strong & 0.05 & 0.30 & 0.01 & 0.05 \\ \end{tabular} \end{table} Table 4: Assumed fractional errors mean mass and mass variance for the IM analysis, equation 15. For each sample, two levels of quality, Weak and Strong, are used, with the latter improving over the former by a factor of two (DES-Y1) or four (LSST). equations (3) and (4). The model thus has eight degrees of freedom consisting of five HMF and three MOR parameters. Applying the IM analysis using the counts and mean masses in these twelve bins, along with the uncertainty in mass variance within each redshift bin, yields the parameter constraints listed in Table 5 for the Weak or Strong quality assumptions. Figure 5 plots these parameter uncertainties, with the orange line offering a reference value of 0.1 for future reference to LSST sample expectations. Under the Weak quality case, the normalization at the pivot mass and redshift, \(\beta_{0,n}\), is forecast to have an uncertainty of 0.23, implying a fractional uncertainty of 26% in the number density. The redshift gradient of the normalization, \(\beta_{0,z}\), is slightly better constrained, at 0.17, which represents a 7% fractional uncertainty on a central value of 2.38 in \(\Lambda\)CDM. This result is helped by the fact that the MOR is independent of redshift, the change in counts across redshift in each richness bin feeds information primarily to \(\beta_{0,z}\). The mass slope of the HMF at the pivot redshift, \(\beta_{1,n}\), is forecast to have an uncertainty of 0.29, which is 14% of its \(\Lambda\)CDM central value of 2.38. The highest order terms, those describing the redshift gradient of the slope with mass, \(\beta_{1,z}\), and the mass curvature at \(z_{P}\), \(\beta_{2,n}\), are forecast to be weakly constrained with errors \(>0.5\) on central values of 1.33 and 0.75, respectively. Under the Weak quality case, forecast errors on MOR normalization and slope are \(\sim 0.1\). The intrinsic variance of the observable conditioned on mass, \(\sigma^{2}\) is anticipated to be returned within an error of 0.035 on a central value of 0.09. Note that these constraints come entirely from the sample itself. In practice, one might imagine external priors on these parameters being imposed in informative ways. The couplings between parameters for the Weak quality case are displayed in Figure 6. From the analytic expressions for the space density, equation (8), and log-mean mass, equation (10), we can anticipate significant degeneracies among MOR and HMF parameters. Unsurprisingly, the two normalization parameters, \(\beta_{0,n}\) and \(\varpi\), are strongly coupled, as are the two slope measures, \(\beta_{1,n}\) and \(\alpha\). The MOR intrinsic variance, \(\sigma^{2}\), couples strongly to all of these parameters. The fact that the DES-Y1 richness threshold of 20 lies close to the MOR normalization, \(e^{\varpi}=22\), means that the limiting mass scale lies close to the pivot mass \(M_{p}\). Because counts and mean masses in higher richness bins provide leverage to only one side of \(M_{p}\), there is non-zero covariance between the HMF normalization, slope and curvature at the pivot redshift. These correlations are somewhat weaker than those associated with the MOR. Because the MOR is assumed to be a pure power-law, with zero curvature, there is very weak coupling between MOR parameters and the HMF curvature, \(\beta_{2,n}\). Forecasts for the Strong quality case are shown as filled circles in Figure 5. The MOR sector receives the primary benefit of improved quality in mean mass and mass variance, with improvements close to a factor of two. In the HMF sector, the pivot normalization and mass slope, \(\beta_{0,n}\) and \(\beta_{1,n}\), see significant improvement while the remaining terms improve only modestly. Because the counts in each bin are the same for the two cases, the redshift gradient parameters, \(\beta_{0,z}\) and \(\beta_{1,z}\), are little improved. A picture that emerges is that improved measurements of mean mass and mass variance tighten the MOR sector, and these improvements filter primarily into the HMF pivot normalization and slope and secondarily to higher-order HMF parameters. This behavior is repeated in the LSST analysis below. Note that the forecast uncertainty in HMF pivot normalization does not include any contribution from errors in distance measurements. At the forecast level of 0.12 for the Strong quality case, current volume uncertainties are sub-dominant, though still contribute at the level of \(\sim\) 0.08 (Alam et al., 2017; Abbott et al., 2022). ### Future survey application: LSST The Rubin Observatory Legacy Survey of Space and Time (LSST) will be both wider and deeper than DES (Ivezic et al., 2019; Chisari et al., 2019). The increase in depth will yield improved measurements of galaxy shapes and colors, and this improvement should translate to more precise estimates of weak lensing mass. For the quality of mean mass estimates, we employ systematic error levels of 0.04 (Weak) and 0.01 (Strong). For the quality level of mass variance we assume values of 0.20 and 0.05, respectively. The LSST Weak values represent modest improvements over the DES Strong case. To push this idealized case further, we anticipate that improvements in optical cluster finding will allow for a factor of two reduction in the sample richness limit, to a value of 10. The overall number of clusters expected using this observable threshold is 380,000, and the redshift distribution of the counts is similar to that shown for the mass-limited case of Figure 3. Note, however, that in the IM analysis we use seven redshift bins, each of width 0.2, between \(0.1<z\leq 1.5\), as well as five richness bins comprised of the four DES-Y1 bins joined with \(\lambda\in[10,20)\). The total number of terms Figure 5: IM-forecasted parameter uncertainties for a DES-Y1-like cluster sample under the data quality cases listed in Table 4. A reduced DQ-HMF model based on the five lowest-order terms (left panel) is employed. The orange line indicates an 0.1 reference value, reproduced in Figure 7. The right panel shows MOR parameter forecasts. Informative priors on these could potentially reduce HMF parameter uncertainties. Parameter correlations are shown in Figure 6. \begin{table} \begin{tabular}{c|c|c} & \multicolumn{2}{c}{Quality} \\ Parameter & Weak & Strong \\ \hline \(\beta_{0,n}\) & 0.23 & 0.12 \\ \(\beta_{0,z}\) & 0.17 & 0.16 \\ \(\beta_{1,n}\) & 0.29 & 0.22 \\ \(\beta_{1,z}\) & 0.70 & 0.67 \\ \(\beta_{2,n}\) & 0.53 & 0.51 \\ \(\sigma^{2}\) & 0.035 & 0.018 \\ \(\varpi\) & 0.092 & 0.048 \\ \(\alpha\) & 0.11 & 0.068 \\ \end{tabular} \end{table} Table 5: DQ-HMF and MOR parameter constraints anticipated from a sample patterned after \(\lambda>20\) DES-Y1 clusters, shown in Figure 5. in the IM is 77, consisting of 35 counts, 35 mean masses, and seven mass variance measures. The larger counts and better quality assumptions increase the volume of the IM matrix determinant relative to the DES-Y1 case (see the toy model in Appendix A). We thus employ the full set of eight DQ-HMF parameters. Figure 7 and listed in Table 6 show that, despite the increase in model dimension, the larger information content improves the constraints on all parameters relative to DES-Y1. In the left panel of Figure 7, the horizontal dashed line reproduces the 0.1 amplitude in Figure 5. While none of the forecast uncertainties fall below this value for DES-Y1, in the LSST-Weak case all but two high-order parameters, \(\beta_{1,z2}\) and \(\beta_{2,z}\), lie below it. In the LSST-Weak case, all three HMF shape parameters at the pivot redshift, \(\beta_{0,n}\), \(\beta_{1,n}\) and \(\beta_{2,n}\) are forecast to have uncertainties of 4 percent or better. For the Strong quality case, the pivot normalization and mass slope have forecast errors of one percent. \begin{table} \begin{tabular}{c|c|c} & \multicolumn{2}{c}{Quality} \\ Parameter & Weak & Strong \\ \hline \(\beta_{0,n}\) & 0.028 & 0.0088 \\ \(\beta_{0,z}\) & 0.039 & 0.034 \\ \(\beta_{0,z2}\) & 0.088 & 0.085 \\ \(\beta_{1,n}\) & 0.025 & 0.0093 \\ \(\beta_{1,z}\) & 0.052 & 0.047 \\ \(\beta_{1,z2}\) & 0.14 & 0.13 \\ \(\beta_{2,n}\) & 0.031 & 0.024 \\ \(\beta_{2,z}\) & 0.12 & 0.090 \\ \(\sigma^{2}\) & 0.0062 & 0.0016 \\ \(\varpi\) & 0.015 & 0.0039 \\ \(\alpha\) & 0.012 & 0.0034 \\ \end{tabular} \end{table} Table 6: DQ-HMF and MOR parameter constraints anticipated from a sample patterned after \(\lambda>10\) LSST clusters, shown in Figure 7. Figure 6: Parameter covariance forecast for a survey patterned after DES-Y1 under Weak quality assumptions (see Table 4). Figure 7: IM-forecasted uncertainties on DQ-HMF (left) and MOR (right) parameters from an LSST-like optical survey of \(\lambda>10\) clusters under the quality assumptions listed in Table 4. Five richness bins and seven redshift bins across \(0.1<z<1.5\) are employed. The reference value of 0.1 is repeated from Figure 5. Parameter correlations are displayed in Figure 8. Tight constraints on the three MOR parameters energy, shown in the right panel of Figure 7. For the Weak quality case, the MOR normalization and slope are forecast to have uncertainties of 0.015 and 0.012, respectively. In the Strong case, the forecast errors below 0.004 for both. Such sub-percent errors reflect the powerful potential of LSST-era samples, but achieving such tight constraints will be difficult in practice, as discussed in SS5 below. The property variance uncertainties translate to errors on the richness scatter (square root of variance), of \(\pm 0.01\) and \(\pm 0.003\), in the Weak and Strong cases, respectively. These values, roughly three and one percent fractional errors on the central value \(\sigma=0.3\), will again be quite challenging to achieve in practice. The full IM covariance structure under Strong data quality assumptions for LSST is shown in Figure 8. As in the DES-Y1 example, the MOR variance and normalization parameters (\(\sigma^{2}\) and \(\varpi\)) remain strongly coupled, as are the MOR and HMF normalizations (\(\varpi\) and \(\beta_{0,n}\)) and slopes (\(\alpha\) and \(\beta_{1,n}\)). For the LSST case, our choice of pivot redshift, \(z_{p}=0.5\), lies below both the median sample redshift and the redshift midpoint of 0.8. As a result, redshift evolution parameters of the DQ-HMF are coupled. For example, the redshift gradient of the HMF normalization, \(\beta_{0,z}\), is mildly correlated with its redshift curvature, \(\beta_{0,z2}\), and the redshift curvature of the local slope, \(\beta_{1,z2}\). While a more optimal choice of pivot redshift would reduce these correlations, we maintain a common pivot redshift for both samples analyzed here in order to provide a fair comparison of potential gains. Figure 8: Parameter covariance for an LSST-like survey under Strong quality constraints for weak lensing mass and mass variance measurements. Relative to the DES-Y1 analysis, the lower richness threshold assumed for the case of LSST offers leverage below the pivot mass scale of \(10^{14.3}\,h^{-1}\,\mathrm{M}_{\odot}\). Unlike the DES-Y1 case, the HMF shape parameters at the pivot redshift (\(\beta_{0,n}\), \(\beta_{1,n}\), \(\beta_{2,n}\)) are largely uncorrelated for the case of LSST. Increasing the richness limit to 20, the overall sample size drops to 60,000, a factor of roughly \(2^{2.5}\) lower than the richness 10 counts. Parameter constraints are degraded accordingly, with forecast errors in the Strong case of 0.04 in \(\beta_{0,n}\) and 0.08 in \(\beta_{1,n}\) and \(\beta_{1,n}\). ## 5 Discussion In SS 2.2 we showed that a compact form sufficiently captures the near-field, space-time density of high mass halos derived from an N-body emulator, where sufficiency here is in relation to current systematic uncertainties associated with the effects of galaxy formation feedback. The eight DQ-HMF parameters have straightforward interpretations as polynomial coefficients in log-mass and redshift. A key benefit of the model is that convolution with a log-normal MOR produces closed-form expressions for observable features of group and cluster samples: counts, mean mass, and mass variance as a function of an observable property and redshift. Information matrix analysis designed around existing and planned optical cluster surveys indicate that potential constraints from an LSST-scale survey could be percent-level on many DQ-HMF and MOR parameters. Achieving such precise constraints will be challenging. We begin by discussing the role of volume uncertainties, projection effects and more complex MOR forms. We then briefly touch on sample selection, focusing on the potential benefits of joining multi-wavelength samples. In the case of LSST, we show that constraints on all model parameters could be improved using a one-tenth sub-sample of clusters having a tighter mass proxy, with intrinsic property variance of \(0.1^{2}\) rather than \(0.3^{2}\). Machine learning techniques that employ all available measurements could provide a pathway to classifying such a sample, particularly if tuned accurately by synthetic data from cosmological hydrodynamics simulations. ### Comoving volume uncertainties In the IM forecasts above, we have ignored uncertainties in the comoving comoving comoving cosmic volume. Uncertainties in cosmic volume will introduce additional error to the normalization terms, \(\beta_{0,x}\in\{\beta_{0,n},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \ form used above, with three parameters, is likely to require some extensions for precise survey likelihood application. Based largely on the behavior of halos in cosmological hydrodynamics simulations, we briefly outline modifications that may apply to different observable properties. _MOR shapes from cosmological hydrodynamics simulations._ Large samples of high-mass halos from cosmological hydrodynamics simulations provide the means to test the MOR kernel for multiple observable properties. In BAHAMAS+MACSIS simulations, the hot gas mass and the total stellar mass within \(R_{2006}\) follow log-normal kernel shapes (Farahi et al., 2018, hereafter F18). The existence of a log-normal PDF for the total stellar mass of halos was confirmed using three independent cosmological hydrodynamics simulations by Anbajagane et al. (2020). That work also finds slight skewness in halo mass-conditioned statistics for the total number of satellite galaxies, \(N_{\rm sat}\), and the BCG stellar mass, \(M_{\star,\rm BCG}\). A common Gaussian mixture fit is derived for the normalized \(N_{\rm sat}\) kernel, with \(79\pm 1\) percent of halos in a dominant component with mean, \(0.28\pm 0.01\), and scatter, \(0.68\pm 0.01\), and the remaining 21% component having mean \(-1.04\pm 0.05\) and scatter \(1.13\pm 0.02\). More work is needed to understand intrinsic MOR shapes for other observable properties, such as X-ray luminosity and temperature of thermal SZ decrement amplitude, and efforts to verify statistic forms from different cosmological hydrodynamics methods are also warranted. _Running of MOR parameters with redshift and/or halo mass._ The property normalization, \(\varpi\), is likely to evolve with redshift. A self-similarity assumption (Kaiser, 1986) that ties physical properties to the evolving critical density is often used to express, \(\varpi(z)\), in terms of powers of \(E(z)\equiv H(z)/H_{0}\).7 Under strict self-similarity, the total stellar or gas mass fractions are independent of redshift. In the BAHAMAS+MACSIS simulations, F18 find modest (several percent) redshift dependence in both measures, with the gas mass fraction declining, and stellar mass fraction increasing, slightly from \(z=1\) to \(z=0\). These shifts are mildly mass-dependent, being larger at lower halo masses that are more strongly influenced by galaxy evolution. Free parameters introduced to capture deviations in normalization from self-similarity would couple most strongly to the DQ-HMF normalization parameters, \(\beta_{0,x}\). The intrinsic property variance, \(\sigma^{2}\), of hot gas and stellar mass was also found to run weakly with mass and redshift by F18. Footnote 7: This form is appropriate for the critically-thresholded \(M_{2006}\) halo mass convention employed here; using mean mass rather than critical density in the spherical overdensity condition leads to powers of \(1+z\) instead of \(E(z)\). The constancy of the MOR slope, \(\alpha\), is also a simplification that may require modification for some properties. For example, F18 find that the slopes of hot gas mass and stellar mass vary modestly with both halo mass scale and, for the former, redshift. At lower halo masses, the hot gas mass slope steepens to values above unity, and the stellar mass scaling becomes shallower than unity. A parameter introduced to describe an MOR slope gradient, \(d\alpha/d\mu\), would couple most strongly to the MOR curvature, \(\beta_{2,n}\). Extending further to allow for this parameter to run linearly with \((1+z)\) would then couple to \(\beta_{2,z}\). ### "Gold Sample" Selection with Machine Learning using Multiple Properties Cluster samples are generally defined by a threshold in a single observed selection property. The DES-Y1 sample, for example, is limited by red galaxy richness, \(\lambda\geq 20\). The mapping between a set of observed clusters and their underlying host halos is assumed to be bijective; a chosen halo maps uniquely to a single cluster, and vice-versa. This is not always the case8, and multi-wavelength studies are critical to understanding how frequently this assumption is violated. In a recent joint study of cluster samples identified independently by X-ray and optical observations in roughly 60 deg\({}^{2}\) of sky, Upsdell et al. (2023) find that only one of 178 X-ray sources has two optical clusters identified along the same line of sight. Such effects, as well as more prosaic issues such as survey masking (_e.g._, Rykoff et al., 2016), will affect cluster selection and require calibration by multi-wavelength observations and simulations. Footnote 8: See the spectacular case of Planck Sunyaev-Zeldovich source PSZ1 510, which represents a near perfect alignment on the sky of two rich (\(\lambda\sim 80\)) clusters offset by 0.1 in redshift (Rozo et al., 2015). Joint property analysis of large cluster samples can improve cosmological parameter constraints (Cunha, 2009) because combining multiple observable properties can substantially reduce mass variance relative to single-property characterization Ho et al. (2023). The anti-correlation of hot gas and stellar mass contents observed in the LoCuSS sample (Farahi et al., 2019) is an important feature; selecting on just these two intrinsic properties in the Magneticum simulation yields a variance in halo mass of 0.052 (Ho et al., 2023). #### 5.3.1 Potential Gains of a Gold Sample There is potential to improve DQ-HMF parameter constraints using a selection approach that identifies a Gold Sample of clusters with reduced intrinsic MOR variance. For this example, we imagine a classifier returning 10% of the overall population with intrinsic MOR variance, 0.1\({}^{2}\). While a significant improvement over the 0.3\({}^{2}\) value used in our default analysis, we note that, for high halos masses, the hot gas mass is seen to have such a small variance (Truong et al., 2018; Farahi et al., 2018; Pop et al., 2022; Farahi et al., 2022; Pellissiier et al., 2023). Using the reduced, three-parameter model of Appendix A as a guide, the information volume scaling of \(N\sigma^{-4}\) for low-scatter proxies (other parameters held fixed), equation (A15), would imply that the improvement in MOR variance wins over the decrease in sample size. Figure 9 confirms this to be the case. The filtered cluster subsample with 10 percent of the counts but 0.1\({}^{2}\) variance yields improvements in all HMF parameters, with the biggest gains occurring for the highest order quantities, \(\beta_{0,x2}\), \(\beta_{1,z2}\) and \(\beta_{2,z}\). As discussed in SS2.3.1, the shifts in such higher-order terms caused by massive neutrinos are of the order 0.01, potentially within reach of Gold Sample analysis. Machine learning (ML) techniques have been demonstrated to yield improved estimates of galaxy cluster masses from noisy observations derived from simulations of massive halos (Ntampaka et al., 2016, 2019; Cohn & Battaglia, 2020; Krippendorf et al., 2023; Ho et al., 2023), and sample selection in the low signal-to-noise regime has been explored by Kosiba et al. (2020). Symbolic regression has been used to identify property combinations that minimize mass variance (Wadekar et al., 2023) and random forest techniques have been used to classify galaxies into orbit classes using projected phase space information (Aung et al., 2023; Farid et al., 2022). We encourage other researchers to explore whether ML methods can be trained to identify a Gold Sample with characteristics similar to that assumed above. Synthetic sky maps and catalogs are essential elements for such studies, and more effort is needed to move beyond single wavelength products (DeRose et al., 2019; LSST Dark Energy Science Collaboration (LSST DESC) et al., 2021; Wechsler et al., 2022; Kovacs et al., 2022; Frontiere et al., 2022; Troxel et al., 2023) toward synthetic lightcone products with joint stellar, gas, and dark matter properties (Omori, 2022; Osato & Nagai, 2023; Schaye et al., 2023). Deep learning methods could facilitate production of such maps (Han et al., 2021). As multiple synthetic skies that jointly meet the requirements of surveys in optical/IR, sub-millimeter and X-ray become available, methods for sample selection can be cross-verified, trained on one simulation methodology and tested on another. ### Lensing and Correlated LSS Measures Massive halos impose peaks in weak lensing maps on arcminute scales, and tangential shear analysis has long been a staple method of estimating the underlying true halo masses of galaxy clusters (Tyson et al., 1990; Miralda-Escude, 1991; Kaiser & Squires, 1993; Luppino & Kaiser, 1997), see the review of Hoekstra et al. (2013). Weak lensing peaks contain information on cosmological parameters including neutrino mass (Ajani et al., 2020; Zurcher et al., 2022; Liu et al., 2023). In addition, the spatial auto- and cross-correlations of galaxies, gravitational lensing and both thermal and kinetic SZ maps contain some degree of information about massive halos, and higher-order statistical signatures at non-linear scales are even more strongly connected.9 The spatial clustering of the cluster population itself is a signal that improves cosmological inference (Majumdar & Mohr, 2004; Euclid Collaboration et al., 2022), and the power spectrum and bispectrum of massive halos contains potentially powerful information on primordial non-Gaussianities (Coulton et al., 2023). Footnote 9: For example, this Snowmass2021 Letter of Interest. Cluster counts offer complementary information to other cosmological probes, especially as the population is sensitive to both cosmic geometry and the gravitational growth of structure (Frieman et al., 2008; Cunha et al., 2009). A recent study that combines DES redMaPPek cluster counts with spatial correlations of galaxy and lensing demonstrates the value of this approach (To et al., 2021). Clusters could be used to independently assess a recent CMB+LSS finding of a \(4.2\sigma\) larger than \(\Lambda\)CDM growth factor index (Nguyen et al., 2023). These types of studies could potentially benefit from a compact mass function form, as DQ-HMF parameters could be used either as informative priors or as part of the focus of posterior likelihood evaluation. ### Other Caveats and Extensions We mention here a few additional caveats and potential extensions. _Alternative Mass Conventions._ In N-body simulations, the mass of a halo is typically defined by percolation or spherical over-density approaches (White, 2001), spherical (see, _e.g._, Diemer, 2020, and references therein). For the spherical overdensity approach, common choices for the interior mean density threshold and/or the reference density (critical or mean mass are typical choices) induce scale-dependent shifts in mass. The resultant HMF forms are follow similar forms, however, remain similar and can be converted using mean mass density profile shapes (see Appendix B of Evrard et al., 2002). We suspect, but do not attempt to prove here, that a compact representation would be valid for most, if not all, existing conventions for true halo mass. _Alternative Formulations for Extended Dynamic Range._ Our model aims at near-field studies of groups and clusters. To extend to model to lower-mass halos, one could include a transition mass scale below which the HMF would become pure power-law. We note that the pure power law form at low masses ignores effects of baryon feedback during galaxy formation. A recent internal structure study of halos across nearly six orders of magnitude in mass in the IllustrisTNG simulations (Anbajagane et al., 2022) finds wiggles in dark matter halo scaling relations near the Milky Way mass of \(10^{12}\,h^{-1}\,\mathrm{M}_{\odot}\), where star formation efficiency in the late universe peaks (Behroozi et al., 2013). This finding suggests that the HMF may also have a localized deviation from a pure power-law form at that scale. The near-field halos above our chosen limiting mass of \(10^{13.7}\,h^{-1}\,\mathrm{M}_{\odot}\) comprise several percent of the overall matter density at \(z<1.5\), but this fraction becomes negligible at much higher redshifts. The mass scale associated with the most extreme few percent of the halo population declines with redshift, reaching Milky Way-scale halos that host bright galaxies at \(z>8\), as seen in JWST observations (Boylan-Kolchin, 2023). To span a wider range in redshift, one could redesign the model by reframing the normalization. Instead of the number density at fixed mass, \(\beta_{0}(z)\), one could employ a mass scale at fixed number density parameter, for example, \(M_{-6}(z)\) to represent the mass scale at which the comoving space density is \(10^{-6}\,\,h^{3}\,\mathrm{Mpc}^{-3}\). To avoid cosmic volume uncertainties, the space density itself could be re-framed in observable terms, in units of number per square degree per unit redshift. _Beyond binning._ The IM forecasts employ binned values for key sample characteristics of counts and mean mass. As multiple observable properties become available for larger population ensembles, a likelihood analysis that considers each system's true mass as additional model parameters (Mulroy et al., 2019) could prove powerful. _Multi-property statistics._ The expressions derived in E14 for Figure 9: HMF parameter constraints for the LSST-Strong case with MOR variance, \(\sigma^{2}=0.3^{2}\) (filled circles, same as Figure 7), are compared to those from a cleaner subset (‘Gold Sample’) consisting of 10% of the former sample with a reduced MOR variance of \(0.1^{2}\) (open circles). The clean subset yields improvements, particularly in the higher-order quantities such as \(\beta_{0,z2}\), \(\beta_{1,z2}\), and \(\beta_{2,z}\). Note the logarithmic scale on the constraint amplitude. selection property-conditioned statistics still apply. We emphasize above only the mean mass and mass variance conditioned on the selection property, \(s_{a}\), but expressions for one or more additional properties, \(s_{b}\) (see equations (12) through (14) of E14) remain applicable, except now the HMF mass-shape parameters are explicitly redshift dependent, \(\beta_{i}\rightarrow\beta_{i}(z)\). ## 6 Summary We introduce a compact representation for the differential space density of high mass halos that host groups and clusters and demonstrate its utility to match well the output of the Mira-Titan emulator of purely collisionless universes for masses \(>10^{13.7}\)\(h^{-1}\) M\({}_{\odot}\) in the near cosmic field of redshifts \(z<1.5\). Convolving with a minimal MOR yields analytic forms for the space density and property-selected statistics that explicitly expose parameter degeneracies and that are fast to compute. Such a compact representation offers a common ground for cluster sample analysis independent of selection method. With roughly one million halos above \(10^{14}\) M\({}_{\odot}\) available on the full sky (Allen et al., 2011), and studies of protoclusters at moderate redshifts in ascendancy (Alberts & Noble, 2022), there is abundant information available from galaxy cluster surveys. Unlocking that information will require careful modeling of sample selection, an endeavor that will be aided by sophisticated sky maps (_e.g._, Schaye et al., 2023). Near-term, more efforts to empirically study the MOR using high quality multi-wavelength data are needed. As the sample size of clusters with multiple well-measured properties grows from tens (_e.g._, Mulroy et al., 2019) to hundreds (_e.g._, Giles et al., 2022; Upsdell et al., 2023) to thousands, the detailed form of the multi-property MOR will come into focus, which can unlock more precise estimates of the underlying true mass of each system and, via collective study, the HMF and its behavior over cosmic time. Acknowledgments.This work was initiated under NSF-REU Grant 2149884 and was also supported by NASA ADAP Grant 80NSSC-22K0476. This work employed open-source software of NumPy(van der Walt et al., 2011), SciPy(Virtanen et al., 2020), and Matplotlib(Hunter, 2007). We dedicate this paper to the memory of Nick Kaiser, in honor of his seminal works on galaxy clusters, including spatial clustering (Kaiser, 1984), property scaling and evolution (Kaiser, 1986, 1991), gravitational lensing mass estimates (Kaiser & Squires, 1993; Luppino & Kaiser, 1997) and gravitational redshifts (Kaiser, 2013). Part of this work was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.
2304.13345
Deformation and $K$-theoretic Index Formulae on Boundary Groupoids
Boundary groupoids were introduced by the second author, which can be used to model many analysis problems on singular spaces. In order to investigate index theory on boundary groupoids, we introduce the notion of {\em a deformation from the pair groupoid}.Under the assumption that a deformation from the pair groupoid $M \times M$ exists for Lie groupoid $\mathcal{G}\rightrightarrows M$, we construct explicitly a deformation index map relating the analytic index on $\mathcal{G}$ and the index on the pair groupoid. We apply this map to boundary groupoids of the form $\mathcal{G} = M_0 \times M_0 \sqcup G \times M_1 \times M_1 \rightrightarrows M=M_0\sqcup M_1$, where $G$ is an exponential Lie group, to obtain index formulae for (fully) elliptic (pseudo)-differential operators on $\mathcal{G}$, with the aid of the index formula by M. J. Pflaum, H. Posthuma, and X. Tang. These results recover and generalize our previous results for renormalizable boundary groupoids via the method of renormalized trace.
Yu Qiao, Bing Kwan So
2023-04-26T07:33:56Z
http://arxiv.org/abs/2304.13345v1
# Deformation and \(K\)-theoretic index formulae on boundary groupoids ###### Abstract. Boundary groupoids were introduced by the second author, which can be used to model many analysis problems on singular spaces. In order to investigate index theory on boundary groupoids, we introduce the notion of _a deformation from the pair groupoid_. Under the assumption that a deformation from the pair groupoid \(M\times M\) exists for Lie groupoid \(\mathcal{G}\rightrightarrows M\), we construct explicitly a deformation index map relating the analytic index on \(\mathcal{G}\) and the index on the pair groupoid. We apply this map to boundary groupoids of the form \(\mathcal{G}=M_{0}\times M_{0}\sqcup G\times M_{1}\times M_{1}\rightrightarrows M =M_{0}\sqcup M_{1}\), where \(G\) is an exponential Lie group, to obtain index formulae for (fully) elliptic (pseudo)-differential operators on \(\mathcal{G}\), with the aid of the index formula by M. J. Pflaum, H. Posthuma, and X. Tang. These results recover and generalize our previous results for renormalizable boundary groupoids via the method of renormalized trace. _Mathematics Subject Classification_ (2020): 58J20, 46L80, 19K56, 58H15 _Keywords_: \(K\)-theoretic index, index map, deformation from the pair groupoid, Lie algebroid, boundary groupoid, submanifold groupoid ###### Contents * 1 Introduction * 1.1 Structure of the paper * 2 Preliminaries * 2.1 Lie groupoids, Lie algebroids, Pseudodifferential operators on Lie groupoids, and groupoid \(C^{*}\)-algebras * 2.2 Invariant submanifolds and composition series * 2.3 Boundary groupoids and submanifold groupoids * 2.4 The tangent groupoid and the adiabatic groupoid * 3 The deformation groupoid and the deformation index map * 3.1 Construction of deformations from the pair groupoid * 3.2 Obstruction to the existence of deformations from the pair groupoid * 3.3 The deformation index map * 4 Fredholm and \(K\)-theoretic index of (fully) elliptic operators on boundary groupoids * 4.1 The odd co-dimension case * 4.2 The even co-dimension case * 4.3 The Fredholm index for fully elliptic operators ## 1. Introduction The Atiyah-Singer index theorem is one of the greatest mathematics achievements in the twentieth century, which states that the analytic index of an elliptic differential operator is equal to its topological counterpart. There have been a plenty of results generalizing the Atiyah-Singer index theorem to other pseudo-differential calculi, constructed for different purposes [1, 3, 8, 21, 24, 31, 37]. These pseudo-differential calculi can be realized as groupoid pseudo-differential calculi on certain groupoid \(\mathcal{G}\rightrightarrows M\) with \(M\) compact. Then the subalgebras of operators of order zero and \(-\infty\) can be completed to \(C^{*}\)-algebras \(\mathfrak{U}(\mathcal{G})\) and \(C^{*}(\mathcal{G})\), respectively, which yields the short exact sequence \[0\to C^{*}(\mathcal{G})\to\mathfrak{U}(\mathcal{G})\to\mathfrak{U}/C^{*}( \mathcal{G})\to 0. \tag{1}\] For any elliptic operator of order zero, applying \(K\)-theory functor, we can define its analytic index as the image in \(K_{0}(C^{*}(\mathcal{G}))\) under the boundary map. For continuous family groupoids, the analytic index is similarly defined by Lauter, Monthubert, and Nistor in [26]. In the 1990s, Connes gives a "Lie groupoid proof" of the Atiyah-Singer index theorem in his famous book [18, Section 2]. As the starting point, given a closed manifold \(M\), Connes constructed the so-called _tangent groupoid_, which is a kind of _deformation_ of the pair groupoid \(M\times M\). This construction induces a map (which we shall denote by \(\operatorname{ind}_{\mathcal{T}(M\times M)}\) below) from the \(K_{0}\)-group of the tangent bundle to \[K_{0}(C^{*}(M\times M))\cong K_{0}(\mathcal{K}).\] Applying a mapping cone arguments as described in, for example, [1, Section 3] and [13, Section 4.1], one proves that the index map corresponding to (1) is equal to \(\operatorname{ind}_{\mathcal{T}(M\times M)}\) composed with the principal symbol. Next, Connes constructed a topological index map, by embedding \(M\) into \(\mathbb{R}^{N}\) for some large enough \(N\), and considering Thom isomorphism and Morita equivalence (between groupoids). Then he showed that these two index maps coincide. Connes' proof of the Atiyah-Singer index theorem triggers a large number of subsequent works in index theory through Lie groupoids. For instance, based on the analysis and index problems on manifolds with corners, many groupoids were constructed [1, 10, 12, 21, 28, 32, 36]. These constructions and index theorems depend heavily on the existence of boundary defining functions and embedding the manifold under question to a cube instead of \(\mathbb{R}^{N}\). Meanwhile, Androulidakis and Skandalis associate the holonomy groupoid to a singular foliation and investigated corresponding properties [4], which opens the door to explore singular foliations via Lie groupoids. Alternatively, one can study the simplified index problem on Lie groupoids by pairing with cohomology [11, 17, 38, 39, 40]. To study the Fredholm index of fully elliptic operators on manifolds with boundary, Carrillo-Rouse, Lescure, and Monthubert replace the tangent bundle in the adiabatic groupoid construction by some "non-commutative tangent bundle" [11]. In that case, it is (the \(C^{*}\)-algebra of) the continuous family groupoid \[(M_{1}\times M_{1}\times\mathbb{R})\times(0,1)\sqcup TM\times\{0\}\rightrightarrows M \times[0,1]\subset\mathcal{T}(\mathcal{G}).\] Similar arguments are utilized by Debord, Lescure, and Nistor for the case of conical pseudo-manifolds [20]. In this paper, we consider index formulae for pseduo-differential on boundary groupoids of the form (with two orbits) \[\mathcal{G}:=M_{0}\times M_{0}\sqcup G\times M_{1}\times M_{1}\rightrightarrows M, \tag{2}\] where \(M_{0}=M\setminus M_{1}\) and the isotropy subgroup \(G\) is an exponential Lie group of dimension equal to the codimension of the manifold \(M_{1}\) in \(M\). These Lie groupoids are often holonomy groupoids integrating some singular foliations [2, 3, 5, 6, 19]. In this case, it is not clear whether an embedding analogous to the manifold with corners exists. Fortunately, the \(K\)-theory of these Lie groupoid \(C^{*}\)-algebras is computed in [13], namely \[K_{0}(C^{*}(\mathcal{G}))\cong \mathbb{Z}, K_{1}(C^{*}(\mathcal{G}))\cong \mathbb{Z} \text{if $M_{1}$ of odd codimension $\geq 3$},\] \[K_{0}(C^{*}(\mathcal{G}))\cong \mathbb{Z}\oplus\mathbb{Z}, K_{1}(C^{*}(\mathcal{G}))\cong \{0\} \text{if $M_{1}$ of even codimension}.\] Hence in order to derive an index formula for elliptic operators (or just their principal symbols), it suffices to produce one integer in the odd codimension case and two integers in the even codimension case, which would completely describe its index in \(K_{0}(C^{*}(\mathcal{G}))\). Moreover, the pushforward induced by _extension map_ \[K_{0}(C^{*}(M_{0}\times M_{0}))\xrightarrow{e_{\mathcal{G},M_{0}}}K_{0}(C^{* }(\mathcal{G}))\] is an isomorphism in the odd case and is injective in the even case. This implies the Fredholm index of a fully elliptic operator on \(\mathcal{G}\) is determined by its \(K_{0}(C^{*}(\mathcal{G}))\) index. In the special case when a renormalized trace can be defined, we computed the index [41] using arguments similar to that of [34]. We found that in the odd case with codimension \(\geq 3\), the \(\eta\)-term of the renormalized index formula vanishes, hence both the Fredholm and \(K_{0}(C^{*}(\mathcal{G}))\) index is just given by the Atiyah-Singer integral. Moreover, one could expect a deeper description of the relationship between the isomorphism \(K(C^{*}(M\times M))\cong K(C^{*}(\mathcal{G}))\) and the vanishing of eta term. In this paper, we take a different approach. We shall introduce the notion of a _deformation from the pair groupoid_ (See Definition 3.1), where we deform the pair groupoid \(M\times M\) to our desired groupoid \(\mathcal{G}\rightrightarrows M\). Here, we would like to point out two major differences between our definition and that for the tangent groupoid and the adiabatic groupoid, even for the construction of the deformation to the normal cone [22, 23, 24]: 1. For the tangent groupoid, the fiber at \(t=0\) is the tangent bundle of \(M\); whereas, for a deformation from the pair groupoid, the fiber at \(t=0\) is the groupoid \(\mathcal{G}\), usually _not_ a vector bundle. 2. Given a closed manifold \(M\), there always is the associated tangent groupoid. But a deformation from the pair groupoid for \(\mathcal{G}\rightrightarrows M\) may not exist. (Thus, we exploit a little obstruction theory for the existence of a deformation from the pair groupoid.) Hence, unlike the tangent groupoid and adiabatic groupoid, our idea seems to go backwards. That is, given a Lie groupoid \(\mathcal{G}\rightrightarrows M\), we ask if \(\mathcal{G}\rightrightarrows M\) can be obtained by deforming the pair groupoid \(M\times M\), in other words, we are looking for a bigger Lie groupoid \(\mathcal{D}\) which realizes such deformation. For a Lie groupoid \(\mathcal{G}\rightrightarrows M\), once a deformation \(\mathcal{D}\) from the pair groupoid \(M\times M\) exists, we are able to construct an index map \[\operatorname{ind}_{\mathcal{D}}:K_{0}(C^{*}(\mathcal{G}))\to K_{0}(C^{*}(M \times M\rightrightarrows M))\cong\mathbb{Z},\] which satisfies the following property. **Theorem 1.1**.: _With the notion as above, the composition_ \[K_{0}(C^{*}(M_{0}\times M_{0}))\cong\mathbb{Z}\xrightarrow{e_{\mathcal{G},M_{ 0}}}K_{0}(C^{*}(\mathcal{G}))\xrightarrow{\operatorname{ind}_{\mathcal{D}}}K _{0}(C^{*}(M\times M))\cong\mathbb{Z}\] _is an isomorphism._ With this theorem at hand, we establish the following main theorem. **Theorem 1.2** (Theorem 3.9).: _Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid and \(A\) the associated Lie algebroid of \(\mathcal{G}\). Suppose that a deformation \(\mathcal{D}\) from the pair \(M\times M\) exists. Then one has the commutative diagram_ \[\begin{array}{ccc}K_{0}(C^{*}(A))&&\operatorname{ind}_{\mathcal{T}(\mathcal{G })}&\succ K_{0}(C^{*}(\mathcal{G}))\\ \\ \underset{\mathbb{V}}{\cong}&&\operatorname{ind}_{\mathcal{D}}\\ K_{0}(C^{*}(TM))&&\operatorname{ind}_{\mathcal{T}(M\times M)}&\succ K_{0}(C^ {*}(M\times M)),\end{array} \tag{3}\] _where the top map is just the analytic index map constructed via the adiabatic groupoid and the bottom map is the Atiyah-Singer index._ The above theorems can be used to simplify index problems on boundary groupoids to those on the pair groupoid, which enables us to apply the pairing considered in [40] to obtain our index formulae. **Theorem 1.3** (Theorems 4.3 and 4.6).: _Suppose that \(\mathcal{G}\rightrightarrows M\) is a boundary groupoid of the form (2), and a deformation from the pair groupoid exists for \(\mathcal{G}\). Let \(\Psi\) be an elliptic pseudo-differential operator on \(\mathcal{G}\). One has the index formulae:_ 1. _if_ \(\dim G\geq 3\) _is odd, then_ \[\operatorname{ind}(\Psi)=\operatorname{ind}_{\mathcal{T}(M\times M)}(\partial[ \sigma(\Psi)])=\int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge\operatorname{ch}( \sigma[\Psi]),\Omega_{\pi^{!}TM}\rangle;\] 2. _if_ \(\dim G\) _is even, then_ \[\operatorname{ind}(\Psi)= \int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge\operatorname{ch}(\sigma [\Psi]),\Omega_{\pi^{!}TM}\rangle\] \[\bigoplus\int_{T^{*}M_{1}\times\mathfrak{g}}\langle\hat{A}(T^{*}M _{1}\oplus\mathfrak{g})\wedge\operatorname{ch}(\sigma[\Psi]|_{M_{1}}),\Omega^ {\prime}_{\pi^{!}TM_{1}}\rangle.\] ### Structure of the paper Section 2 is devoted to reviewing basic definitions and facts related to Lie groupoids, such as Lie algebroids, boundary groupoids, submanifold groupoids, the tangent groupoid. In Section 3, given a Lie groupoid \(\mathcal{G}\rightrightarrows M\), we define the notion of _a deformation from the pair groupoid_ to \(\mathcal{G}\), show that such deformation exists for submanifold groupoids, and briefly discuss obstruction to existence of such deformation groupoids. Then, under the assumption that a deformation \(\mathcal{D}\) from the pair groupoid to \(\mathcal{G}\) exists, we construct explicitly an index map \[\operatorname{ind}_{\mathcal{D}}:K_{0}(C^{*}(\mathcal{G}))\to K_{0}(C^{*}(M \times M))\cong\mathbb{Z},\] which is similar to that of the tangent (or adiabatic) groupoid, and show that it is compatible with the analytic index map, hence implies Theorem 1.2. Finally in Section 4, after reviewing the Atiyah-Singer formula appearing in [40], we combine these results with Theorem 1.2 to establish index formulae for (fully) elliptic (pseudo)-differential operators on boundary groupoids of the form \[\mathcal{G}:=M_{0}\times M_{0}\sqcup G\times M_{1}\times M_{1}\rightrightarrows M,\] where \(G\) is an exponential Lie group. These index formulae are essentially the Atiyah-Singer formula, hence we give a \(K\)-theoretic proof of the results in [41] without renormalized trace. **Acknowledgment.** We would like to thank Prof. Xiang Tang for explaining his paper [40] to us. Qiao was partially supported by the NSFC (grant nos. 11971282, 12271323). ## 2. Preliminaries Lie groupoids, Lie algebroids, Pseudodifferential operators on Lie groupoids, and groupoid \(C^{*}\)-algebras We review first basic knowledge of Lie groupoids, Lie algebroids, pseudodifferential calculus on Lie groupoids, and groupoid \(C^{*}\)-algebras [3, 26, 27, 29, 30, 33, 37, 42, 44]. **Definition 2.1**.: A Lie groupoid \(\mathcal{G}\rightrightarrows M\) consists of the following data: 1. manifolds \(M\), called the space of units, and \(\mathcal{G}\); 2. a unit map (inclusion) \(\mathbf{u}:M\to G\); 3. submersions \(\mathbf{s},\mathbf{t}:\mathcal{G}\to M\), called the source and target maps respectively, satisfying \[\mathbf{s}\circ\mathbf{u}=\mathrm{id}_{M}=\mathbf{t}\circ\mathbf{u};\] 4. a multiplication map \(\mathbf{m}:\mathcal{G}^{(2)}:=\{(g,h)\in\mathcal{G}\times\mathcal{G}\,:\, \mathbf{s}(g)=\mathbf{t}(h)\}\to\mathcal{G}\), \((g,h)\mapsto gh\), which is associative and satisfies \[\mathbf{s}(gh)=\mathbf{s}(h),\quad\mathbf{t}(gh)=\mathbf{t}(g),\quad g( \mathbf{u}\circ\mathbf{s}(g))=g=(\mathbf{u}\circ\mathbf{t}(g))g;\] 5. an inverse diffeomorphism \(\mathbf{i}:\mathcal{G}\to\mathcal{G}\), \(g\mapsto g=g^{-1}\), such that \(\mathbf{s}(g^{-1})=\mathbf{t}(g)\), \(\mathbf{t}(g^{-1})=\mathbf{s}(g)\), and \[gg^{-1}=\mathbf{u}(\mathbf{t}(g)),\quad g^{-1}g=\mathbf{u}(\mathbf{s}(g)).\] _Remark 2.2_.: In this paper, all groupoids are assumed to be Hausdorff. **Definition 2.3**.: A _homomorphism_ between Lie groupoids \(\mathcal{G}\rightrightarrows M_{1}\) and \(\mathcal{H}\rightrightarrows M_{2}\) is by definition a functor \(\phi:\mathcal{G}\to\mathcal{H}\) which is smooth both on the unit space \(M_{1}\) and on \(\mathcal{G}\). Two Lie groupoids \(\mathcal{G}\) and \(\mathcal{H}\) are said to be _isomorphic_ if there are homomorphisms \(\phi:\mathcal{G}\to\mathcal{H}\) and \(\psi:\mathcal{H}\to\mathcal{G}\) such that \(\psi\circ\phi\) and \(\phi\circ\psi\) are identity homomorphisms on \(\mathcal{G}\) and \(\mathcal{H}\) respectively. Lie groupoids are closely related with Lie algebroids. Here we recall the definition. **Definition 2.4**.: A _Lie algebroid_\(A\) over a manifold \(M\) is a vector bundle \(A\) over \(M\), together with a Lie algebra structure on the space \(\Gamma(A)\) of the smooth sections of \(A\) and a bundle map \(\nu:A\to TM\), extended to a map between sections of theses bundles, such that 1. \(\nu([X,Y])=[\nu(X),\nu(Y)]\); 2. \([X,fY]=f[X,Y]+(\nu(X)f)Y\), for all smooth sections \(X\) and \(Y\) of \(A\) and any smooth function \(f\) on \(M\). The map \(\nu\) is called the _anchor_. Give a Lie groupoid \(\mathcal{G}\) with units \(M\), we can associate a Lie algebroid \(A(\mathcal{G})\) to \(\mathcal{G}\) as follows. (For more details, see [29, 30].) The \(\mathbf{s}\)-vertical subbundle of \(T\mathcal{G}\) for \(\mathbf{s}:\mathcal{G}\to M\) is denoted by \(T^{\mathbf{s}}(\mathcal{G})\) and called simply the \(\mathbf{s}\)-vertical bundle for \(\mathcal{G}\). It is an involutive distribution on \(\mathcal{G}\) whose leaves are the components of the \(\mathbf{s}\)-fibers of \(\mathcal{G}\). (Here involutive distribution means that \(T^{\mathbf{s}}(\mathcal{G})\) is closed under the Lie bracket, i.e. if \(X,Y\in\mathfrak{X}(\mathcal{G})\) are sections of \(T^{\mathbf{s}}(\mathcal{G})\), then the vector field \([X,Y]\) is also a section of \(T^{\mathbf{s}}(\mathcal{G})\).) Hence we obtain \[T^{\mathbf{s}}\mathcal{G}=\mathrm{Ker}\,\mathbf{s}_{*}=\bigcup_{x\in M}T \mathcal{G}_{x}\subset T\mathcal{G}.\] The _Lie algebroid_ of \(\mathcal{G}\), denoted by \(A(\mathcal{G})\) (or simply \(A\) sometimes), is defined to be \(T^{\mathbf{s}}(\mathcal{G})|_{M}\), the restriction of the \(\mathbf{s}\)-vertical tangent bundle to the set of units \(M\). In this case, we say that \(\mathcal{G}\)_integrates_\(A(\mathcal{G})\). _Example 2.5_.: The Lie algebroid of the pair groupoid \(M\times M\) is the tangent bundle \(TM\) with the usual Lie bracket on vector fields and the anchor map is the identity. Two Lie groupoids may share the same Lie algebroid. For instance, both the pair groupoid \(M\times M\) and the fundamental groupoid associated to \(M\) integrate to the tangent Lie algebroid \(TM\). Let \(E\to M\) be a vector bundle. Recall [37] that an \(m\)-th order pseudo-differential operator on \(\mathcal{G}\) is a right invariant, smooth family \(P=\{P_{x}\}_{x\in M}\), where each \(P_{x}\) is an \(m\)-th order classical pseudo-differential operator on sections of \(\mathbf{t}^{*}E\to\mathbf{s}^{-1}(x)\). We denote by \(\Psi^{m}(\mathcal{G},E)\) (resp. \(D^{m}(\mathcal{G},E)\)) the algebra of uniformly supported, order \(m\) classical pseudo-differential operators (resp. differential operators). Recall [26] that one defines the strong norm for \(P\in\Psi^{0}(\mathcal{G},E)\) \[\|P\|:=\sup_{\rho}\|\rho(P)\|,\] where \(\rho\) ranges over all bounded \(*\)-representations of \(\Psi^{0}(\mathcal{G},E)\) satisfying \[\|\rho(P)\|\leq\sup_{x\in M}\Big{\{}\int_{\mathbf{s}^{-1}(x)}|\kappa_{P}(g)|\, \mu_{x},\int_{\mathbf{s}^{-1}(x)}|\kappa_{P}(g^{-1})|\,\mu_{x}\Big{\}},\] whenever \(P\in\Psi^{-\dim M-1}(\mathcal{G},E)\) with (continuous) kernel \(\kappa_{P}\). **Definition 2.6**.: The \(C^{*}\)-algebras \(\mathfrak{U}(\mathcal{G})\) and \(C^{*}(\mathcal{G})\) are defined to be the completion of \(\Psi^{0}(\mathcal{G},E)\) and \(\Psi^{-\infty}(\mathcal{G},E)\) respectively with respective to the strong norm \(\|\cdot\|\). One also defines the reduced \(C^{*}\)-algebras \(\mathfrak{U}_{r}(\mathcal{G})\) and \(C^{*}_{r}(\mathcal{G})\) by completing \(\Psi^{0}(\mathcal{G},E)\) and \(\Psi^{-\infty}(\mathcal{G},E))\) respectively with respect to the reduced norm \[\|P\|_{r}:=\sup_{x\in M}\big{\{}\|P_{x}\|_{L^{2}(\mathbf{s}^{-1}(x))}\big{\}}.\] Recall that if the strong and reduced norm coincide, then \(\mathcal{G}\) is called _(metrically) amenable_, which is the case for the groupoids we shall consider. _Example 2.7_.: Let \(M\) be a smooth manifold. Then the tangent bundle \(TM\) has a Lie groupoid structure, by regarding its fibers as a bundle of Lie groups. Its reduced groupoid \(C^{*}\)-algebra \(C^{*}_{r}(TM)\) can be canonically identified with \(C_{0}(T^{*}M)\) via Fourier transform, where \(T^{*}M\) is the cotangent bundle of \(M\). This example can be generalized to vector bundles. ### Invariant submanifolds and composition series Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid. **Definition 2.8**.: Let \(S\) be any subset of \(M\). We denote by \[\mathcal{G}_{S}:=\mathbf{s}^{-1}(S)\cap\mathbf{t}^{-1}(S)\] the _reduction_ of \(\mathcal{G}\) to \(S\). The reduction \(\mathcal{G}_{S}\) is a sub-groupoid of \(\mathcal{G}\). In particular, if \(S=\{x\}\), then \(\mathcal{G}_{x}:=\mathcal{G}_{S}\) is called the _isotropy group_ at \(x\). If \(S\subseteq M\) is an embedded submanifold such that \(\mathbf{s}^{-1}(S)=\mathbf{t}^{-1}(S)\), we say that \(S\subset M\) is an _invariant submanifold_. **Definition 2.9**.: Given a closed invariant submanifold \(S\) of \(\mathcal{G}\). For any groupoid pseudo-differential operator \(P=\{P_{x}\}_{x\in M}\in\Psi^{m}(\mathcal{G},E)\), we define the restriction \[P|_{\mathcal{G}_{S}}(P):=\{P_{x}\}_{x\in\mathcal{G}_{S}}\in\Psi^{m}(\mathcal{ G}_{S},E).\] Restriction extends to a map from \(\mathfrak{U}(\mathcal{G})\) to \(\mathfrak{U}(\mathcal{G}_{M^{\prime}})\) and also from \(C^{*}(\mathcal{G})\) to \(C^{*}(\mathcal{G}_{M^{\prime}})\). We denote both such restriction maps, and also the induced \(K\)-group homomorphisms, by \(r_{\mathcal{G},S}\). **Notations 2.10**.: _Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid, \(U\) be an open subset of \(M\). Then \(\mathcal{G}_{U}:=\mathbf{s}^{-1}(U)\cap\mathbf{t}^{-1}(U)\rightrightarrows U\) is an open sub-groupoid of \(\mathcal{G}\). Any elements in \(C^{*}(\mathcal{G}_{U})\) extends to \(C^{*}(\mathcal{G})\) by \(0\). We denote such extension map by \(\varepsilon_{\mathcal{G},U}\). It is a homomorphism of \(C^{*}\) algebras and hence induces a map from \(K^{\bullet}(C^{*}(\mathcal{G}_{U}))\) to \(K^{\bullet}(C^{*}(\mathcal{G}))\), which we shall still denote by \(\varepsilon_{\mathcal{G},U}\)._ Now suppose we are given a groupoid \(\mathcal{G}\rightrightarrows M\) (\(M\) not necessarily compact), and closed invariant submanifolds \[M=\bar{M_{0}}\supset\bar{M_{1}}\supset\cdots\supset\bar{M_{r}}.\] For simplicity we shall denote \(\bar{\mathcal{G}}_{i}:=\mathcal{G}_{\bar{M_{i}}}\). Recall that \(SA^{\prime}\) denotes the sphere sub-bundle of the dual of the Lie algebroid \(A(\mathcal{G})\) of \(\mathcal{G}\). **Definition 2.11**.: Let \(\sigma:\Psi^{m}(\mathcal{G})\to C^{\infty}(SA^{\prime})\) denotes the principal symbol map. For each \(i=1,\cdots,r\), define the _joint symbol maps_ \[\mathbf{j}_{i}:\Psi^{m}(\mathcal{G})\to C^{\infty}(SA^{\prime})\oplus \Psi^{m}(\bar{\mathcal{G}}_{i}),\quad\mathbf{j}_{i}(P):=(\sigma(P),P|_{\mathcal{ G}_{i}}). \tag{4}\] The map \(\mathbf{j}_{i}\) extends to a homomorphism from \(\mathfrak{U}(\mathcal{G})\) to \(C_{0}(SA^{*})\oplus\mathfrak{U}(\bar{\mathcal{G}}_{i})\). We say that \(P\in\Psi^{m}(\mathcal{G})\) is _elliptic_ if \(\sigma(P)\) is invertible, and it is called _fully elliptic_ if \(\mathbf{j}_{1}(P)\) is invertible (which implies \(P\) is elliptic). **Definition 2.12**.: Denote by \(\mathcal{J}_{0}:=\overline{\Psi^{-1}(\mathcal{G})}\subset\mathfrak{U}( \mathcal{G})\), and \(\mathcal{J}_{i}\subset\mathfrak{U}(\mathcal{G}),i=1,\cdots,r\) the null space of \(\mathbf{j}_{r-i+1}\). By construction, it is clear that \[\mathcal{J}_{0}\supset\mathcal{J}_{1}\supset\cdots\supset\mathcal{J}_{r}.\] Also, any uniformly supported kernels in \(\Psi^{-\infty}(\mathcal{G}_{\bar{M_{i}}\setminus\bar{M_{j}}},E|_{\bar{M_{i} }\setminus\bar{M_{j}}})\), can be extended to a kernel in \(\Psi^{-\infty}(\mathcal{G}_{\bar{M_{i}}},E|_{\bar{M_{i}}})\) by zero. This induces a \(*\)-algebra homomorphism from \(C^{*}(\mathcal{G}_{\bar{M_{i}}\setminus\bar{M_{j}}})\) to \(C^{*}(\mathcal{G}_{\bar{M_{i}}})\). We shall use the following key fact. **Lemma 2.13**.: _[_26_, Lemma 2 and Theorem 3]_ _One has short exact sequences_ \[0\to\mathcal{J}_{i+1}\to\mathcal{J}_{i}\to C^{*}( \mathcal{G}_{\bar{M_{i}}\setminus\bar{M_{i+1}}})\to 0, \tag{6}\] \[0\to C^{*}(\mathcal{G}_{\bar{M_{i}}\setminus\bar{M_{j}}}) \to C^{*}(\bar{\mathcal{G}}_{i})\to C^{*}(\mathcal{\bar{G}}_{j}) \to 0,\quad\forall\,j>i. \tag{5}\] ### Boundary groupoids and submanifold groupoids In this paper we are interested in some more specific classes of groupoids. To begin with, let us recall the definition of boundary groupoids in [44, 43]. **Definition 2.14**.: Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid with \(M\) compact. We say that \(\mathcal{G}\) is a boundary groupoid if: 1. the singular foliation defined by the anchor map \(\nu:A\to TM\) has finite number of leaves \(M_{0},M_{1},\cdots,M_{r}\subset M\) (which are invariant submanifolds), such that \(\dim M=\dim M_{0}>\dim M_{1}>\cdots>\dim M_{r}\); 2. For all \(k=0,1,\cdots,r\), \(\bar{M_{k}}:=M_{k}\cup\cdots\cup M_{r}\) are closed submanifolds of \(M\); 3. For \(k=0\), \(\mathcal{G}_{0}:=\mathcal{G}_{M_{0}}\) is the pair groupoid, and for \(k=1,2,\cdots,r\), we have \(\mathcal{G}_{k}:=\mathcal{G}_{M_{k}}\cong G_{k}\times M_{k}\times M_{k}\) for some Lie group \(G_{k}\); 4. For each \(k=0,1,\cdots,r\), there exists an unique sub-bundle \(\bar{A}_{k}\subset A|_{\bar{M_{k}}}\) such that \(\bar{A}_{k}|_{M_{k}}=\ker(\nu|_{M_{k}})\) (\(=\mathfrak{g}_{k}\times M_{k}\)). Boundary groupoids are closely related to Fredholm groupoids and blowup groupoids. Roughly speaking, Fredholm groupoids are those on which Fredholmness of a pseudodifferential operator is completely characterized by its ellipticity and invertibility at the boundary. For the definition and basic properties of Fredholm groupoids, one may consult [14, 15, 16]. The basic result relevant to our discussion is that boundary groupoids are often amenable and Fredholm groupoids. **Lemma 2.15**.: _(See [27, Lemma 7]) For any boundary groupoid of the form \(\mathcal{G}=(M_{0}\times M_{0})\cup(\mathbb{R}^{q}\times M_{1}\times M_{1})\),_ \[C^{*}(\mathcal{G})\cong C^{*}_{r}(\mathcal{G}).\] _In other words, \(\mathcal{G}\) is (metrically) amenable._ Moreover, since the additive group \(\mathbb{R}^{q}\) is amenable, the groupoid of the form \(\mathbb{R}^{q}\times M_{1}\times M_{1}\) is (topologically) amenable. By [16, Theorem 4.3], we have the following proposition. **Proposition 2.16**.: _The groupoid \(\mathcal{G}=(M_{0}\times M_{0})\cup(\mathbb{R}^{q}\times M_{1}\times M_{1})\) is a Fredholm groupoid._ Given a boundary groupoid, one naturally considers the sequence of invariant submanifolds \[M\supset\bar{M}_{1}\supset\dots\supset\bar{M}_{r},\] where \(\bar{M}_{i}\) is given to be (2) of Definition 2.14. We have the short exact sequence \[0\to C^{*}(\mathcal{G}_{\bar{M}_{i}\setminus\bar{M}_{j}})\to C^{*}(\bar{ \mathcal{G}}_{i})\to C^{*}(\bar{\mathcal{G}}_{j})\to 0,\quad\forall\,j>i, \tag{7}\] which induces the \(K\)-theory six-terms exact sequences: \[\begin{CD}K_{1}(C^{*}(\mathcal{G}_{M_{i}}))@>{}>{}>K_{1}(C^{*}(\bar{ \mathcal{G}}_{i}))@>{}>{}>K_{1}(C^{*}(\bar{\mathcal{G}}_{i+1}))\\ @V{}V{}V@V{}V{}V\\ K_{0}(C^{*}(\bar{\mathcal{G}}_{i+1}))@<{}<{}<K_{0}(C^{*}(\bar{\mathcal{G}}_{i})) @<{}<{}<K_{0}(C^{*}(\mathcal{G}_{M_{i}}))\end{CD} \tag{8}\] If \(r=1\), and \(G_{1}\) is solvable, connected and simply connected (i.e. exponential), then the system (8) is greatly simplified - to begin with, there is only one exact sequence. Indeed, Carrillo-Rouse and the second author computed the \(K\)-theory of boundary groupoids with \(r=1\) in [13], namely: \[\begin{CD}K_{0}(C^{*}(\mathcal{G}))\cong\mathbb{Z},&K_{1}(C^{*}(\mathcal{G}) )\cong\mathbb{Z},\qquad\qquad\text{if $\dim G\geq 3$ odd};\\ K_{0}(C^{*}(\mathcal{G}))\cong\{0\},&K_{1}(C^{*}(\mathcal{G}))\cong\mathbb{ Z}\oplus\mathbb{Z},\qquad\qquad\text{if $\dim G$ even}.\end{CD}\] Next, we recall _submanifold groupoids_, which form a sub-class of boundary groupoids in [13]. _Example 2.17_ (Submanifold groupoids).: Suppose that \(M_{1}\) is a closed embedded sub-manifold of \(M\) of codimension \(q\geq 2\). Let \(f\in C^{\infty}(M)\) be any smooth function which vanishes on \(M_{1}\) and strictly positive over \(M_{0}=M\setminus M_{1}\) (if \(M\) is connected then \(M_{0}=M\setminus M_{1}\) is still connected and \(f\) must either be strictly positive or strictly negative on \(M_{0}\)). Then there exists a Lie algebroid structure over \[A_{f}:=TM\to M\] with anchor map \[(z,w)\mapsto(z,f(z)\cdot w),\quad\forall(z,w)\in TM,\] and Lie bracket on sections \[[X,Y]_{A_{f}}:=f[X,Y]+(X\cdot f)Y-(Y\cdot f)X.\] This Lie algebroid is almost injective (since the anchor is injective, in fact an isomorphism, over the open dense subset \(M_{0}=M\setminus M_{1}\)), and hence it integrates to a Lie groupoid that is a quasi-graphoid (see [19, Theorems 2 and 3]). By [35], \(A_{f}\) integrates to a Lie groupoid of the form \[\mathcal{G}=M_{0}\times M_{0}\sqcup\mathbb{R}^{q}\times M_{1}\times M_{1} \rightrightarrows M=M_{0}\sqcup M_{1}.\] This groupoid is called a submanifold groupoid. _Example 2.18_ (Renormalizable boundary groupoids).: For even more specific examples of submanifold groupoids, in [41], we considered \[f=r^{N}\] for some fixed even integers \(N\), where \(r:=d(M_{1},\cdot)\) is the Riemannian distance function from \(M_{1}\) with respect to some metric. The resulting groupoid is called a _renormalizable boundary groupoid_. In [41], we showed that the \(\eta\)-term for renormalizable groupoids vanishes in the index formula via the method of renormalized trace. ### The tangent groupoid and the adiabatic groupoid Let us recall that for a closed manifold \(M\), Connes [18] constructed the tangent groupoid, which is a Lie groupoid of the form \[\mathcal{T}(M\times M):=M\times M\times(0,1]\sqcup TM\times\{0\}\rightrightarrows M \times[0,1],\] where the differentiable structure of \(\mathcal{T}(M\times M)\) is defined by fixing some Riemannian metric on \(M\) and then using the following maps from some open subsets of \(TM\times[0,1]\) to \(\mathcal{T}(M\times M)\) \[\mathbf{x}(p,X,\epsilon):=(p,\exp_{p}(-\epsilon X),\epsilon),\text { for }\epsilon>0,\] \[\mathbf{x}(p,X,0):=(p,X,0),\] as charts. The tangent groupoid construction generalizes to any Lie groupoid \(\mathcal{G}\). One considers the adiabatic groupoid \[\mathcal{T}(\mathcal{G}):=\mathcal{G}\times(0,1]\sqcup A\times\{0\} \rightrightarrows M\times[0,1],\] where \(A\) is the Lie algebroid of \(\mathcal{G}\). Debord and Skandalis [24] further generalized the construction by replacing the set of units \(M\subset\mathcal{G}\) with arbitrary sub-groupoid \(\mathcal{H}\subset\mathcal{G}\), and considered gluing \(\mathcal{N}^{\mathcal{G}}_{\mathcal{H}}\), the normal bundle of \(\mathcal{H}\) in \(\mathcal{G}\), to \(\mathcal{G}\times\mathbb{R}\setminus\{0\}\). The resulting object is naturally a Lie groupoid, which they call the deformation to the normal cone. For simplicity we return to the case of the adiabatic groupoid \(\mathcal{T}(\mathcal{G})\). One naturally constructs the index map \[\operatorname{ind}_{\mathcal{T}(\mathcal{G})}:=r_{\mathcal{T}(\mathcal{G}),M \times\{1\}}\circ r_{\mathcal{T}(\mathcal{G}),M\times\{0\}}^{-1}, \tag{9}\] where \(r_{\mathcal{T}(\mathcal{G}),M\times\{1\}}:C^{*}(\mathcal{T}(\mathcal{G})) \to C^{*}(\mathcal{G})\) and \(r_{\mathcal{T}(\mathcal{G}),M\times\{0\}}:C^{*}(\mathcal{T}(\mathcal{G})) \to C^{*}(A)\) are respectively the restriction maps to the sub-groupoids of \(\mathcal{T}(\mathcal{G}))\) over \(M\times\{1\}\) and \(M\times\{0\}\). Let \(\Psi\) be any classical elliptic pseudo-differential operator on \(\mathcal{G}\rightrightarrows M\). Its principal symbol \(\sigma(\Psi)\) is an invertible element in \(C(SA^{\prime})\), where \(A^{\prime}\) is the dual bundle of \(A\) and \(SA^{\prime}\) denotes the sphere bundle of \(A^{\prime}\). We identify the algebra of Schwartz functions on \(A^{\prime}\) with the (fiber-wise) convolution algebra \(C^{*}(A)\) through Fourier transform to obtain \[K_{0}(C(A^{\prime}))\cong K_{0}(C^{*}(A)),\] and let \[\partial:K_{1}(SA^{\prime})\to K_{0}(C(A^{\prime}))\cong K_{0}(C^{*}(A))\] be the connecting map induced by the short exact sequence \[0\to C(A^{\prime})\to C(\bar{A}^{\prime})\to(SA^{\prime})\to 0.\] Then by [13], the analytic index of \(\Psi\) is given by \[\operatorname{ind}_{\mathcal{T}(\mathcal{G})}(\partial[\sigma(\Psi)])\in C^{*} (\mathcal{G}),\] which, if \(\varPsi\) is of order \(0\), coincides with the image of \([\varPsi]\in K_{1}(\mathfrak{U}(\mathcal{G}))\) under the connecting map induced by the short exact sequence \[0\to C^{*}(\mathcal{G})\to\mathfrak{U}(\mathcal{G})\to\mathfrak{U}/C^{*}( \mathcal{G})\to 0.\] ## 3. The deformation groupoid and the deformation index map Motivated by the tangent groupoid construction in the previous section, we introduce the following definition, which plays a central role in the paper. **Definition 3.1**.: Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid. _A deformation from the pair groupoid \(M\times M\) to \(\mathcal{G}\)_ is a Lie groupoid \(\mathcal{D}\rightrightarrows M\times[0,1]\) with (groupoid) isomorphisms \[\mathcal{D}_{M\times(0,1]} \cong M\times M\times(0,1],\] \[\mathcal{D}_{M\times\{0\}} \cong\mathcal{G}.\] _Example 3.2_.: Let \(M\) be a closed manifold. Regard \(TM\to M\) as a Lie groupoid, then Connes' tangent groupoid, \(\mathcal{T}(M\times M)\) gives a deformation from the pair groupoid to \(TM\). ### Construction of deformations from the pair groupoid The following fact is another motivation to introduce the above definition. **Proposition 3.3**.: _There exists a deformation from the pair groupoid for submanifold groupoids defined in Example 2.17._ Proof.: We use the same notation as in Example 2.17. To construct the groupoid \(\mathcal{D}\rightrightarrows M\times[0,1]\), we begin with defining the Lie algebroid of \(\mathcal{D}\). We set \(\tilde{A}_{f}\to M\times[0,1]\) to be the vector bundle pullback of \(A_{f}\). Sections of \(\tilde{A}_{f}\) are just vector fields on \(M\times[0,1]\) which are tangential to the \(M\) direction. Hence one can define the anchor map \[\nu((z,s),w)=((z,s),(f(z)+s)\cdot w),\quad\forall(z,w)\in TM,s\in[0,1],\] and Lie bracket on sections by \[[X,Y]_{\tilde{A}_{f}}:=(f+s)[X,Y]+(X\cdot f)Y-(Y\cdot f)X.\] The Lie algebroid \(\tilde{A}_{f}\) is isomorphic to \(A_{f}\) on \(M\times\{0\}\) and \(TM\) on \(M\times\{s\}\) for each \(s>0\), respectively. Using [19] and [35, Theorem 1], one observes that \(\tilde{A}_{f}\) integrates to a Lie groupoid of the form \[\mathcal{G}\sqcup(M\times M\times(0,1])\rightrightarrows M\times[0,1],\] which we denote by \(\mathcal{D}\). ### Obstruction to the existence of deformations from the pair groupoid In this subsection we point out some notable necessary condition for \(\mathcal{G}\) in order for a deformation from the pair groupoid to exist. **Proposition 3.4**.: _Suppose that \(\mathcal{G}\rightrightarrows M\) is a Lie groupoid such that a deformation from the pair groupoid exists. Denote by \(A\) the Lie algebroid of \(\mathcal{G}\). Then we have_ \[A\cong TM,\] _as vector bundles._ Proof.: Let \(\mathcal{D}\rightrightarrows M\times[0,1]\) be a deformation from the pair groupoid. Denote its Lie algebroid by \(\tilde{A}\to M\times[0,1]\). Then one has \[\tilde{A}\big{|}_{M\times\{0\}}\cong A,\quad\tilde{A}\big{|}_{M\times\{1\}} \cong TM.\] Hence an isomorphism can be constructed by, say, fixing a connection \(\tilde{\nabla}\) on \(\tilde{A}\) and then parallel transport along the family of curves \((p,t),t\in[0,1]\) for each \(p\in M\) The existence of a deformation from the pair groupoid also imposes necessary conditions on the structural vector fields of \(\mathcal{G}\) (i.e. the image of the anchor map). For simplicity, we consider the case when \(M_{1}\) is a single point \(\{p\}\). Suppose a deformation \(\mathcal{D}\) from the pair groupoid for \(\mathcal{G}\) exists. Denote the Lie algebroids of \(\mathcal{G}\) and \(\mathcal{D}\) by \(A\) and \(\tilde{A}\) respectively, and their anchor maps by \(\nu\) and respectively \(\tilde{\nu}\). Then it is obvious that \[\left.\begin{aligned} \tilde{A}\right|_{M\times(0,1]}\cong& TM \times(0,1],\\ \tilde{A}\big{|}_{M\times\{0\}}\cong& A.\end{aligned}\] Suppose that \(s\) is a nowhere vanishing local section of \(A\) around \(p\), such that \(\nu(s)(p)=0\). Then the section \(s\) extends to some nowhere vanishing local section \(\tilde{s}\) of \(\tilde{A}\) over an open neighborhood of \(\{p\}\times[0,1]\). Because \(\tilde{\nu}\) is injective except at \((p,0)\), it follows that \(\tilde{\nu}(\tilde{s})\) is a family of local vector fields on \(M\) parameterized by \([0,1]\), which is non-vanishing on \(M\times\{t\}\), \(1\geq t>0\) and has an isolated zero at \(p\) on \(M\times\{0\}\). This implies that the Hopf-Poincare index of \(\nu(s)=\tilde{\nu}(\tilde{s})\big{|}_{M\times\{0\}}\) equals zero. Hence, we are able to make the following counterexample which shows that a boundary groupoid may not possess a deformation from the pair groupoid. _Example 3.5_.: There exist Lie algebroids spanned by structural vector fields of non-zero degrees. For example, on \(\mathbb{R}^{2}\), the vector fields \[x\tfrac{\partial}{\partial x}+y\tfrac{\partial}{\partial y}\ \text{ and }\ y \tfrac{\partial}{\partial x}-x\tfrac{\partial}{\partial y}\] have isolated zero of degree \(1\), and commute each other. Moreover, their flows give an action of the abelian Lie group \(\mathbb{R}^{2}\) on itself. Thus the tangent Lie algebroid of the action groupoid \(\mathbb{R}^{2}\rtimes\mathbb{R}^{2}\) is spanned by these vector fields, which contradicts to the above necessary condition that the Hopf-Poincare index is equal to zero. _Remark 3.6_.: The above example exhibits that there are certain obstructions to the existence of such deformation for boundary groupoids. At present, besides Proposition 3.4, we do not know of any other necessary conditions or sufficient conditions to guarantee the existence of such deformations in general, which will be left as future work. ### The deformation index map In this subsection, we always assume that a deformation from the pair groupoid \[\mathcal{D}:=\mathcal{G}\sqcup(M\times M\times(0,1])\rightrightarrows M\times [0,1]\] exists for the given Lie groupoid \(\mathcal{G}\rightrightarrows M\). We shall use \(\mathcal{D}\) to construct an index, similar to [18, 45]. Observe that \(\mathcal{D}\) has closed saturated subgroupoids \(\mathcal{D}|_{M\times\{0\}}\cong\mathcal{G},\mathcal{D}|_{M\times\{1\}}\cong M\times M\). Because the sequence \[K^{\bullet}(C^{*}(\mathcal{D}|_{M\times(0,1]}))\cong K^{\bullet}(C^{*}(M \times M\times(0,1]))\cong 0\xrightarrow{\varepsilon_{\mathcal{D},M\times(0,1]}}K^{ \bullet}(C^{*}(\mathcal{D}))\] \[\xrightarrow{r_{\mathcal{D},M\times\{0\}}}K^{\bullet}(C^{*}( \mathcal{G}))\xrightarrow{\partial}K^{\bullet+1}(C^{*}(\mathcal{D}|_{M\times(0,1]}))\] is exact, therefore the map \(r_{\mathcal{D},M\times\{0\}}\) is invertible. Hence it is natural to introduce the following index map. **Definition 3.7**.: The deformation index map is defined to be \[\operatorname{ind}_{\mathcal{D}}:=r_{\mathcal{D},M\times\{1\}}\circ r_{ \mathcal{D},M\times\{0\}}^{-1}:K^{\bullet}(C^{*}(\mathcal{G}))\to K^{ \bullet}(C^{*}(M\times M)).\] From the definition, the following theorem is immediate, and implies Theorem 1.1. **Theorem 3.8**.: _One has_ \[\operatorname{ind}_{\mathcal{D}}\circ\varepsilon_{\mathcal{G},M_{0}}=\varepsilon _{M\times M,M_{0}}.\] Proof.: Observe that \[M_{0}\times M_{0}\times[0,1]=\mathcal{D}_{M_{0}\times[0,1]}\] is an open subgroupoid. For any \(u\in C^{*}(M_{0}\times M_{0})\), let \(\tilde{u}\in M_{0}\times M_{0}\times[0,1]\) be the pullback of \(s\) by the projection to the \(M_{0}\times M_{0}\) factor (i.e., \(\tilde{u}\) is just \(u\) on each \(M_{0}\times M_{0}\times\{s\}\)). Extend \(\tilde{u}\) to \(\mathcal{D}\) and get \(\varepsilon_{\mathcal{D},M_{0}\times[0,1]}(\tilde{u})\). When restricted to \(s=0\) and \(s=1\), it is clear that \(\varepsilon_{\mathcal{D},M_{0}\times[0,1]}(\tilde{u})\) equals \[\varepsilon_{\mathcal{G},M_{0}}(u)\ \text{ and }\ \varepsilon_{M\times M,M_{0}}(u),\] respectively. Passing to \(K\)-theory, one obtains \[r_{\mathcal{D},M\times\{0\}}\circ\varepsilon_{\mathcal{D},M_{0}\times[0,1]}([ \tilde{u}])=\varepsilon_{\mathcal{G},M_{0}}([u]),\] and \[r_{\mathcal{D},M\times\{1\}}\circ\varepsilon_{\mathcal{D},M_{0}\times[0,1]}([ \tilde{u}])=\varepsilon_{M\times M,M_{0}}([u]),\] for any class \([u]\in K^{\bullet}(M_{0}\times M_{0})\). Hence, we have \[\operatorname{ind}_{\mathcal{D}}\circ\varepsilon_{\mathcal{G},M_ {0}}([u])= r_{\mathcal{D},M\times\{1\}}\circ r_{\mathcal{D},M\times\{0\}}\circ r _{\mathcal{D},M\times\{0\}}\circ\varepsilon_{\mathcal{D},M_{0}\times[0,1]}([ \tilde{u}])\] \[= \varepsilon_{M\times M,M_{0}}([u]),\qed\] which completes the proof. We turn to the proof of Theorem 1.2, which is reformulated as follows. **Theorem 3.9**.: _Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid. Suppose there exists a deformation groupoid \(\mathcal{D}\) from the pair groupoid \(M\times M\). Then one has the commutative diagram_ \[\begin{array}{ccc}K_{0}(C^{*}(A))&\operatorname{ind}_{\mathcal{T}( \mathcal{G})}&\succ K_{0}(C^{*}(\mathcal{G}))\\ \\ \underset{\text{\tiny$\vee$}}{\text{\tiny$\vee$}}&\\ K_{0}(C^{*}(TM))&\operatorname{ind}_{\mathcal{T}(M\times M)}&\succ K_{0}(C^{* }(M\times M)),\end{array} \tag{10}\] _where the top map is just the analytic index map constructed via the adiabatic groupoid and the bottom map is the Atiyah-Singer index._ Proof.: Let \(\mathcal{T}(\mathcal{D})\rightrightarrows(M\times[0,1])\times[0,1]\) be the adiabatic groupoid of \(\mathcal{D}\). Recall that as a set \(\mathcal{T}(\mathcal{D})=\mathcal{D}\times(0,1]\sqcup\tilde{A}\). Hence \(\mathcal{T}(\mathcal{D})\) restricted to \((M\times\{0\})\times[0,1]\) and \((M\times\{1\})\times[0,1]\) are respectively \[\mathcal{T}(\mathcal{G})\rightrightarrows M\times[0,1]= \mathcal{G}\times(0,1]\sqcup A,\] \[\text{and }\mathcal{T}(M\times M)\rightrightarrows M\times[0,1]= M\times M\times(0,1]\sqcup TM,\] the adiabatic groupoid of \(\mathcal{G}\) (respectively the adiabatic groupoid of \(M\times M\)). One can further restrict \(\mathcal{T}(\mathcal{G})\) and \(\mathcal{T}(M\times M)\) to \(M\times\{0\}\) or \(M\times\{1\}\), resulting in the commutative diagram (11) \[\begin{array}{ccc}C^{*}(A)\!\prec\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Then we pass to the corresponding \(K\)-group maps. Observe that \(r_{\mathcal{T}(\mathcal{D}),(M\times\{0\})\times[0,1]}\) fits in the six-term exact sequence \[K^{\bullet}(C^{*}(\mathcal{T}(\mathcal{D})_{(M\times(0,1])\times[0,1])}) \xrightarrow{\frac{r_{\mathcal{T}(\mathcal{D}),(M\times(0,1])\times[0,1]}}{ \longrightarrow}}K^{\bullet}(C^{*}(\mathcal{T}(\mathcal{D})))\] and moreover we have \[\mathcal{T}(\mathcal{D})_{(M\times(0,1])\times[0,1]}\cong T(\mathcal{D}_{M \times(0,1]})\cong\mathcal{T}(M\times M)\times(0,1],\] whose convolution \(C^{*}\)-algebra is contractible. Therefore the map \(r_{\mathcal{T}(\mathcal{D}),(M\times\{0\})\times[0,1]}\) is invertible. Similarly, the maps \(r_{\mathcal{T}(\mathcal{G}),M\times\{0\}},r_{\tilde{A},M\times\{0\}},r_{ \mathcal{T}(\mathcal{D}),(M\times[0,1])\times\{0\}},\)\(r_{\mathcal{T}(M\times M),M\times\{0\}}\) and \(r_{\mathcal{D},M\times\{0\}}\) are all isomorphisms between corresponding \(K\)-groups. Recall (9) that \[\operatorname{ind}_{\mathcal{T}(\mathcal{G})}\mathrel{\mathop{:}}= r_{\mathcal{T}(\mathcal{G}),M\times\{1\}}\circ r_{\mathcal{T}( \mathcal{G}),M\times\{0\}}^{-1},\] \[\operatorname{ind}_{\mathcal{T}(M\times M)}\mathrel{\mathop{:}}= r_{\mathcal{T}(M\times M),M\times\{1\}}\circ r_{\mathcal{T}(M\times M),M \times\{0\}}^{-1}\] are just Connes' analytic index maps. Lastly, using the same pull-back arguments as the proof of Theorem 3.8, one sees that the \(K\)-theory map \[r_{\tilde{A},M\times\{1\}}\circ r_{\tilde{A},M\times\{0\}}^{-1}\] is just equivalent to the isomorphism in Proposition 3.4. Hence one ends up with the commutative diagram \[K_{0}(C^{*}(A)) \overset{\operatorname{ind}_{\mathcal{T}(\mathcal{G})}}{\succ}K_ {0}(C^{*}(\mathcal{G}))\] \[\overset{\cong}{\underset{\mathbb{V}}{\overset{\operatorname{ind}_ {\mathcal{T}}}{\vee}}} \overset{\operatorname{ind}_{\mathcal{T}(M\times M)}}{\longrightarrow}K_ {0}(C^{*}(M\times M)), \tag{12}\] which completes the proof. ## 4. Fredholm and \(K\)-theoretic index of (fully) elliptic operators on boundary groupoids In this section, we combine the map in Theorem 1.2 and the Atiyah-Singer index formula to compute the index. Before doing that, we briefly recall the Atiyah-Singer index formula, in particular, we shall use the version appearing in [40, Theorem 5.1]. Let \(\mathcal{G}\rightrightarrows M\) be a Lie groupoid and \(A\) the associated Lie algebroid of \(\mathcal{G}\). Define the line bundle \[L\mathrel{\mathop{:}}=\wedge^{top}T^{*}M\otimes\wedge^{top}A.\] We suppose further that \(\mathcal{G}\) is unimodular, i.e., there exists an invariant nowhere vanishing section \(\Omega\) of \(L\). Then the characteristic map \(\chi_{\Omega}\) defines a map from the groupoid cohomology to cyclic cohomology by [40, Equation (5)] \[\chi_{\Omega}(\varphi_{1}\otimes\cdots\varphi_{k})(a_{0}\otimes \cdots\otimes a_{k})\] \[\mathrel{\mathop{:}}= \int_{M}\Big{(}\int_{g_{0}\cdots g_{k}=1_{x}}a_{0}(g_{0})\varphi _{1}(g_{1})a_{1}(g_{1})\cdots\varphi_{k}(g_{k})a_{k}(g_{k})\Big{)}\Omega(x)\] (on the level of cochains). Then one combines \(\chi_{\Omega}\) with the canonical pairing between cyclic homology and cohomology, the Connes-Chern character map, which maps from \(K_{0}(\Psi_{c}^{-\infty}(\mathcal{G}))\) to the cyclic homology of \(\Psi_{c}^{-\infty}(\mathcal{G})\), and the van Est map, to obtain a pairing \[\langle[\varphi],\alpha\rangle_{\Omega}\in\mathbb{C}\] for any \([\varphi]\in K_{0}(\Psi_{c}^{-\infty}(\mathcal{G})),\alpha\in H^{\bullet}(A, \mathbb{C})\). Then the main result of [40, Theorem 5.1] states that such pairing can be computed by the following formula. **Lemma 4.1**.: _For any elliptic pseudo-differential operator \(\varPsi\) and \(\alpha\in H^{2k}(A,\mathbb{C})\), we have_ \[\langle\operatorname{ind}(\varPsi),\alpha\rangle_{\Omega}=(2\pi\sqrt{-1})^{-k }\int_{A^{\prime}}\langle\pi^{*}\alpha\wedge\hat{A}(A^{\prime})\wedge \operatorname{ch}(\sigma[\varPsi]),\Omega_{\pi^{!}A}\rangle,\] _where \(\hat{A}(A^{\prime})\in H^{\bullet}(TM)\) is the \(A\)-hat genus, \(\operatorname{ch}:K_{0}(C(A^{\prime}))\to H^{even}(TM)\) is the Chern Character (which can be defined by, say, the Chern-Weil construction), and \(\pi^{!}A\) is the pull-back Lie algebroid of \(A\) along the projection \(\pi:A^{\prime}\to M\)._ Let us consider the particular case when \(\mathcal{G}=M\times M\) is the pair groupoid, hence \(\mathcal{A}=TM\), and \(L\) is trivial. One obtains an invariant nowhere vanishing section \(\Omega\) of \(L\) by considering, \[\Omega:=Cdx^{1}\wedge\cdots\wedge dx^{\dim M}\otimes\tfrac{\partial}{\partial x ^{i}}\wedge\cdots\wedge\tfrac{\partial}{\partial x^{\dim M}} \tag{13}\] locally, for any constant \(C\neq 0\). Also, the constant function \(1\) obviously defines a class \([1]\in H^{0}(A,\mathbb{C})\). Applying Lemma 4.1, one gets \[\langle\operatorname{ind}(\varPsi),1\rangle_{\Omega}=\int_{T^{*}M}\langle\hat {A}(T^{*}M)\wedge\operatorname{ch}(\sigma[\varPsi]),\Omega_{\pi^{!}TM}\rangle. \tag{14}\] Since \(\operatorname{ind}(\varPsi)\) is an integer, by choosing an appropriate normalization \(C\) for \(\Omega\), we have \[\operatorname{ind}(\varPsi)=\langle\operatorname{ind}(\varPsi),1\rangle_{ \Omega},\] and the identity (14) simplifies to \[\operatorname{ind}(\varPsi)=\int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge \operatorname{ch}(\sigma[\varPsi]),\Omega_{\pi^{!}TM}\rangle. \tag{15}\] _Remark 4.2_.: In order to apply Lemma 4.1, one needs a non-trivial \(\mathcal{G}\)-invariant Lie algebroid class. By [25] the construction of such class is non-trivial, because the Lie algebroid associated to a boundary groupoid is in general not unimodular. However, in the following subsections we shall use a deformation from the pair groupoid to simplify the problem to that of a regular groupoid, which is clearly unimodular, so that the Lemma 4.1 can be applied. In the sequel, we focus on boundary groupoids of the form \[\mathcal{G}:=M_{0}\times M_{0}\sqcup G\times M_{1}\times M_{1}\rightrightarrows M =M_{0}\sqcup M_{1}, \tag{16}\] where \(G\) is an exponential Lie group, and assume that a deformation from the pair groupoid exists for \(\mathcal{G}\). ### The odd co-dimension case Recall that if the isotropy subgroup \(G\) is exponential and of odd dimension \(\geq 3\), then \[K_{0}(M_{0}\times M_{0})\cong\mathbb{Z}\xrightarrow{\varepsilon_{\mathcal{G},M_{0}}}C^{*}(\mathcal{G})\cong\mathbb{Z} \tag{17}\] is an isomorphism. Identifying \(K_{0}(M_{0}\times M_{0})\cong K_{0}(M\times M)\), one sees from Theorem 1.1 that \(\operatorname{ind}_{\mathcal{D}}\) is the inverse of \(\varepsilon_{\mathcal{G},M_{0}}\). Given any elliptic pseudodifferential operator \(\varPsi\), in order to compute the integer \[\operatorname{ind}(\varPsi)=\operatorname{ind}_{\mathcal{T}(\mathcal{G})}( \partial[\sigma(\varPsi)]),\] we apply Theorem 1.2 to obtain \[\operatorname{ind}(\varPsi)=\operatorname{ind}_{\mathcal{T}(\mathcal{G})}( \partial[\sigma(\varPsi)])=\operatorname{ind}_{\mathcal{T}(M\times M)}( \partial[\sigma(\varPsi)]), \tag{18}\] where we regard \(\partial[\sigma]\in K_{0}(C^{*}(TM))\) on the rightmost expression. Observe that \(\operatorname{ind}_{\mathcal{T}(M\times M)}(\partial[\sigma])\) is just the analytic index of the pair groupoid, Hence we apply Lemma 4.1 (with the normalization of (15)) to conclude the following theorem. **Theorem 4.3**.: _Suppose that \(\mathcal{G}\rightrightarrows M\) is a boundary groupoid of the form (16) with \(\dim G\geqslant 3\) odd, and a deformation from the pair groupoid exists for \(\mathcal{G}\). Let \(\varPsi\) be an elliptic pseudo-differential operator on \(\mathcal{G}\). One has the index formula_ \[\operatorname{ind}(\varPsi)=\operatorname{ind}_{\mathcal{T}(M\times M)}( \partial[\sigma(\varPsi)])=\int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge \operatorname{ch}(\sigma[\varPsi]),\Omega_{\pi^{!}TM}\rangle. \tag{19}\] _Remark 4.4_.: By Proposition 3.3, we see that a deformation from the pair groupoid always exists for submanifold groupoids. Because renormalizable boundary groupoids form a special class of submanifold groupoids, Theorem 4.3 applies and implies that the index is given only by the Atiyah-Singer term. Comparing with the results of [8, 9] (for higher co-dimension) and also [41], one sees that the \(\eta\)-term vanishes for elliptic pseudo-differential operators on \(\mathcal{G}\). Therefore we give a completely new proof that generalizes these previous results. _Example 4.5_.: Recall [27, Section 6] that for any Lie groupoid the vertical de Rham operator \(d\) is an (invariant) groupoid differential operator. Its vector representation computes its Lie algebroid cohomology. Fixing a Riemannian metric on \(A\), one can then define the formal adjoint \(d^{*}\) of \(d\), which leads one to construct the Euler operator \(d+d^{*}\), and also the signature operator. More generally, one can construct generalized Dirac operators. For illustration here, we only consider the case when the Lie algebroid \(A\) is spin, and \(\mathcal{G}\) is of the form (16) with \(M_{1}\) of odd codimension, where a deformation from the pair groupoid exists. Let \(S_{A}\) be the spinor bundle of \(A\). Following [27, Section 6], one constructs the (groupoid) Dirac operator \(D_{A}\) associated with the Levi-Civita connection. One the other hand, Lemma 3.4 implies that \(M\) is also a spin manifold, and the spinor bundle \(S_{TM}\) is isomorphic to \(S_{A}\). Hence it is standard to construct the (groupoid) Dirac operator \(D\) associated with the Levi-Civita connection. One sees from its explicit construction that the principal symbol of \(D_{A}\) is equal to that of \(D\) (however \(D_{A}\) and \(D\) are very different as groupoid differential operators). Hence by Theorem 3.9, the index of \(D_{A}\) is the same as the index of \(D\). Theorem 4.3 then gives an explicit formula for \(\operatorname{ind}(D_{A})\). Lastly recall [7, Section 3] that one can generalize the construction of the Dirac operator by tensoring \(S_{A}\) with a vector bundle \(W\), and in this case the Chern form \(\operatorname{ch}(\sigma[D_{A}])\) can be written explicitly using the twisting curvature. ### The even co-dimension case On the other hand, if \(G\) is of even dimension, one has short exact sequence \[0\to K_{0}(M_{0}\times M_{0})\cong\mathbb{Z}\xrightarrow{\varepsilon_{ \mathcal{G},M_{0}}}K_{0}(C^{*}(\mathcal{G}))\xrightarrow{r_{\mathcal{G},M_{1 }}}K_{0}(C^{*}(\mathcal{G}_{M_{1}}))\cong\mathbb{Z}\to 0 \tag{20}\] and Theorem 1.1 canonically identifies \[K_{0}(C^{*}(\mathcal{G}))\cong\mathbb{Z}\oplus\mathbb{Z}\] via \(\operatorname{ind}_{\mathcal{D}}\oplus r_{\mathcal{G},M_{1}}\). For the \(\operatorname{ind}_{\mathcal{D}}\) component, the arguments for the odd case apply without any change, and the result is the same Atiyah-Singer integral (19). To compute \[r_{\mathcal{G},M_{1}}\circ\operatorname{ind}_{\mathcal{T}(\mathcal{G})}( \partial[\sigma]),\] we observe that restriction of \(\mathcal{T}(\mathcal{G})\) to \(M_{1}\times[0,1]\) is just \(\mathcal{T}(\mathcal{G}_{1})\) which is the adiabatic groupoid of \(\mathcal{G}_{1}=G\times M_{1}\times M_{1}\), hence we obtain the commutative diagram \[C^{*}(A)\lhd\quad\begin{matrix}r_{\mathcal{T}(\mathcal{G}),M\times\{0\}}&C^{* }(\mathcal{T}(\mathcal{G}))\end{matrix}\quad\begin{matrix}r_{\mathcal{T}( \mathcal{G}),M\times\{1\}}&\lhd C^{*}(\mathcal{G})\\ \rhd C^{*}(\mathcal{G})\end{matrix} \tag{21}\] \[\begin{matrix}r_{A,M_{1}}&\rhd r_{\mathcal{T}(\mathcal{G}),M_{1}\times[0,1]}& r_{\mathcal{G},M_{1}}\\ \rhd V&\rhd V\\ C^{*}(A|_{M_{1}})\lhd C^{*}(\mathcal{T}(\mathcal{G}_{1}))&\rhd C^{*}( \mathcal{G}_{1}),\end{matrix}\] which implies \[r_{\mathcal{G},M_{1}}\circ\operatorname{ind}_{\mathcal{T}(\mathcal{G})}(\partial[ \sigma])=\operatorname{ind}_{\mathcal{T}(\mathcal{G}_{1})}(\partial[\sigma]|_{M _{1}}).\] The right hand side of the above can, in turn be computed by Lemma 4.1 with \(\mathcal{G}_{1}=M_{1}\times M_{1}\times G\) as the groupoid: \[\operatorname{ind}_{\mathcal{T}(\mathcal{G}_{1})}(\partial[\sigma]|_{M_{1}})= \int_{T^{*}M_{1}\times\mathfrak{g}}\langle\hat{A}(T^{*}M_{1}\oplus\mathfrak{g })\wedge\operatorname{ch}(\sigma[\varPsi]|_{M_{1}}),\Omega^{\prime}_{\pi^{ \shortmid}TM_{1}}\rangle, \tag{22}\] where \(\Omega^{\prime}\in\Gamma^{\infty}(\wedge^{top}\mathfrak{g}\otimes\wedge^{ top}T^{*}M_{1}\otimes\wedge^{top}TM_{1})\) is a suitably normalized, invariant nowhere vanishing section defined in the same manner as Equation (13), and \(\mathfrak{g}\) is the Lie algebra of the isotropy group \(G\). To conclude, we have arrived at an index formula for elliptic pseudodifferential operators on \(\mathcal{G}\). **Theorem 4.6**.: _Suppose that \(\mathcal{G}\rightrightarrows M\) is a boundary groupoid of the form (16) with \(\dim G\) even, and a deformation from the pair groupoid exists for \(\mathcal{G}\). Let \(\varPsi\) be an elliptic pseudo-differential operator on \(\mathcal{G}\). One has the index formula_ \[\operatorname{ind}(\varPsi)= \int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge\operatorname{ch}( \sigma[\varPsi]),\Omega_{\pi^{\shortmid}TM}\rangle\] \[\bigoplus\int_{T^{*}M_{1}\times\mathfrak{g}}\langle\hat{A}(T^{*} M_{1}\oplus\mathfrak{g})\wedge\operatorname{ch}(\sigma[\varPsi]|_{M_{1}}), \Omega^{\prime}_{\pi^{\shortmid}TM_{1}}\rangle\in\mathbb{Z}\oplus\mathbb{Z} \cong K_{0}(C^{*}(\mathcal{G})).\] ### The Fredholm index for fully elliptic operators Lastly, recall the definition of fully elliptic operators in Definition 2.11. We suppose in this subsection that \(\varPsi\) is a fully elliptic operator on \(\mathcal{G}\rightrightarrows M\), where \(\mathcal{G}\) is of the form (16). **Corollary 4.7**.: _The Fredholm index of \(\varPsi\) is_ \[\operatorname{ind}_{F}(\varPsi)=\int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge \operatorname{ch}(\sigma[\varPsi]),\Omega_{\pi^{\shortmid}TM}\rangle.\] Proof.: Recall that \(\varPsi\) is invertible modulo \(C^{*}(M_{0}\times M_{0})\cong\mathcal{K}\) and its Fredholm index lies in \(K_{0}(C^{*}(M_{0}\times M_{0}))\). By Theorems 1.1, 1.2 and [13] we have \[\operatorname{ind}_{F}(\varPsi)= \operatorname{ind}_{\mathcal{D}}\circ\varepsilon_{\mathcal{G},M_{ 0}}(\operatorname{ind}_{F}(\varPsi))\] \[= \operatorname{ind}_{\mathcal{D}}(\operatorname{ind}_{\mathcal{T}( \mathcal{G})}(\sigma(\varPsi)))\] \[= \int_{T^{*}M}\langle\hat{A}(T^{*}M)\wedge\operatorname{ch}( \sigma[\varPsi]),\Omega_{\pi^{\shortmid}TM}\rangle.\qed\] _Remark 4.8_.: One can replace the pair groupoid in Definition 3.1 by other groupoids. Most of the arguments in Section 4 still works in a more general setting. It would be interesting to see which class of groupoids can arise from this kind of (non-trivial) deformations, and what index formulae one can obtain.
2306.12622
Optimized detector tomography for photon-number resolving detectors with hundreds of pixels
Photon-number resolving detectors with hundreds of pixels are now readily available, while the characterization of these detectors using detector tomography is computationally intensive. Here, we present a modified detector tomography model that reduces the number of variables that need optimization. To evaluate the effectiveness and accuracy of our model, we reconstruct the photon number distribution of optical coherent and thermal states using the expectation-maximization-entropy algorithm. Our results indicate that the fidelity of the reconstructed states remains above 99%, and the second and third-order correlations agree well with the theoretical values for a mean number of photons up to 100. We also investigate the computational resources required for detector tomography and find out that our approach reduces the solving time by around a half compared to the standard detector tomography approach, and the required memory resources are the main obstacle for detector tomography of a large number of pixels. Our results suggest that detector tomography is viable on a supercomputer with 1~TB RAM for detectors with up to 340 pixels.
Dong-Sheng Liu, Jia-Qi Wang, Chang-Ling Zou, Xi-Feng Ren, Guang-Can Guo
2023-06-22T01:27:13Z
http://arxiv.org/abs/2306.12622v1
# Optimized detector tomography for photon-number resolving detectors with hundreds of pixels ###### Abstract Photon-number resolving detectors with hundreds of pixels are now readily available, while the characterization of these detectors using detector tomography is computationally intensive. Here, we present a modified detector tomography model that reduces the number of variables that need optimization. To evaluate the effectiveness and accuracy of our model, we reconstruct the photon number distribution of optical coherent and thermal states using the expectation-maximization-entropy algorithm. Our results indicate that the fidelity of the reconstructed states remains above 99%, and the second and third-order correlations agree well with the theoretical values for a mean number of photons up to 100. We also investigate the computational resources required for detector tomography and find out that our approach reduces the solving time by around a half compared to the standard detector tomography approach, and the required memory resources are the main obstacle for detector tomography of a large number of pixels. Our results suggest that detector tomography is viable on a supercomputer with 1 TB RAM for detectors with up to 340 pixels. ## I Introduction Photon-number resolving (PNR) is important in many classical optics applications, such as X-ray astronomy [1] and lidar [2], as well as in quantum optics applications, including quantum random-number generation [3], multiphoton interference [4], high-order correlation measurement [5], linear optics quantum computation [6], Gaussian boson sampling [7; 8], generation of non-Gaussian quantum states [9], quantum communication [10; 11] and quantum metrology [12; 13; 14]. There are two main approaches to realize PNR detectors. One is based on the intrinsic PNR capability, such as transition edge sensors [15], superconducting nanowire single photon detectors (SNSPDs) [16; 17], and avalanche photodiodes [18]. However, these detectors can only resolve a few photons. The other one is based on multiplexing, which can be further classified according to the dimension into two categories, the one based on bulk optics [19; 20; 21] and the one based on integration [22; 23; 24]. The multiplexing scheme based on bulk optics suffers from low efficiency, stability and scaling problems due to their large dimensions. It is critical to realize an on-chip integrated PNR detector, which allows the cointegration of spatial or spectral optical components, to scale up the pixels [25]. Among these approaches, SNSPDs hold great potential for on-chip integrated PNR detectors due to their excellent properties, such as high detection efficiency, low dark count rate, high repetition rate and low timing jitter [26; 27; 28]. Large-scale SNSPD arrays have already been demonstrated [29; 30]. Waveguide integrated PNR detectors based on SNSPDs, which are crucial for quantum photonics applications, were reported a decade ago and are capable of resolving up to 4 photons [31]. Recently, a waveguide integrated space-multiplexed PNR detector based on a series of 100 superconducting nanowires [32] as well as a PNR detector based on multiplexing 3 TES detectors [33] have been reported, which is a great breakthrough in the field of PNR detectors and may boost a variety of quantum optics applications in the mesoscopic regime. Due to the nonideality of the practical optical circuits for multiplexing and the imperfect single-pixel detector elements, one needs to perform quantum detector tomography [34; 35; 36; 37] to characterize PNR detectors. However, the computational resources are demanding when the degrees of freedom are large [38]. In this work, we propose a modified detector tomography approach that reduces the number of variables to be optimized, while ensuring comparable accuracy to the conventional method. To verify the effectiveness and accuracy of our approach, we numerically reconstruct the photon number distributions of the incident signals using the expectation-maximization-entropy algorithm [21], and investigate the needed computational resources. The solving time of our approach is reduced by about a half compared to that of standard detector tomography. The finite memory resource is shown to be the main obstacle for both modified and standard detector tomography approaches. Our results suggest that detector tomography is still feasible for a detector with up to 340 pixels on a supercomputer with 1 TB RAM. ## II System configuration A photonic-integrated circuit-based on-chip PNR detector employing SNSPDs is schematically illustrated in Fig. 1(a). By a spatial-multiplexing photonic circuit, the photons in an input pulse are nearly evenly distributed to \(N\) SNSPDs with similar detection efficiency. An alternate approach (not shown) is to integrate an array of SNSPDs on a single waveguide [31; 32] such that the absorption efficiency of each single pixel is designed to absorb the input photons with nearly equal probability. Apart from the capability of resolving photon numbers [39], SNSPDs are compatible with other photonic components on photonic chips [40], thus allowing a variety of applications, ranging from single-photon spectrometers [41], on-chip Boson sampling [42], to hybrid quantum chips for quantum information processing [43]. To evaluate the performances of detector tomography with hundreds of pixels, we implement numerical Monte Carlo simulations in the following, where the code can be found in Ref [44]. We set the device configuration with the coupling efficiency from the input to the detector to be 99% to account for potential device insertion loss. Considering the fabrication imperfections, the input photons are nearly evenly distributed to each pixel with a relative uncertainty of 2%. The simulations assume no dark counts and each detector pixel has a uniformly distributed intrinsic detection efficiency (the probability of generating clicks when a photon is absorbed by the pixel) between 90% and 95% [45]. Note that the path-dependent propagation losses could also be included in the absorption efficiency or intrinsic efficiency of individual pixels. The technical noise of the laser [34], which is used to generate the probe set \(\{\left|\alpha\right>\}\), is also considered by assuming that the mean number of photons of each pulse is normally distributed as \(\left|\beta\right|^{2}\sim\mathcal{N}(\mu=\left|\alpha\right|^{2},\sigma=0.0188 \left|\alpha\right|^{2})\). ## III Detector Tomography Due to the fabrication imperfections of photonic circuit components and superconducting devices, PNR detectors are usually unbalanced and there may be cross-talk between pixels. Quantum detector tomography [34; 35; 36; 37] aims to characterize PNR detectors by determining the positive operator-valued measurement (POVM) elements. The PNR detectors without phase dependence can be described by the POVM diagonal in the Fock state Figure 1: The on-chip PNR detector and photon statistics extraction process. (a) Schematic of an on-chip PNR detector consisting of waveguide beam splitters and \(N\) SNSPDs. (b) The workflow to reconstruct the PND of an input pulse from the click statistics. A probe set \(\{\left|\alpha\right>\}\) of coherent states is first used for detector tomography, and the POVM elements \(\{\pi_{n}\}\) are obtained. The PND \(\mathbf{f}\in\mathbb{R}_{+}^{M+1}\) of the input state, where \(\mathbb{R}_{+}\) represents nonnegative real numbers and \(M\) is the photon number at which the Fock space is truncated, can be reconstructed by the EME algorithm from the click statistics \(\mathbf{p}\in\mathbb{R}_{+}^{N+1}\) measured by the PNR detector. PNR: photon-number resolving, SNSPD: superconducting nanowire single-photon detector, PND: photon-number distribution, POVM: positive operator-valued measurement, EME: expectation-maximization-entropy. Figure 2: An example of detector tomography. (a) The reconstructed POVM elements of a 70-pixel detector with regularization parameter \(\gamma=10^{-4}\). The POVM elements obtained from modified detector tomography are shown as yellow bars. The red bars represent the absolute differences between POVM elements from modified and standard detector tomography approaches. Only the first 15 POVM elements are shown for clarity. (b) The dark-count probability extracted from the reconstructed POVM elements with respect to the regularization parameter \(\gamma\). basis as \[\pi_{n}=\sum_{k=0}^{M}\theta_{k}^{(n)}\ket{k}\!\bra{k},\quad n=0,1,\ldots,N, \tag{1}\] with element \(\pi_{n}\) corresponding to the outcome of \(n\) clicks of an \(N\)-pixel detector, and the Fock space is truncated at a photon number of \(M\). As schematically illustrated in Fig. 1(b), the obtained detector POVM can be applied to reconstruct the photon number distribution (PND) of the input state by only providing the measured statistic of detector clicks. In particular, the PND vector \(\mathbf{f}\in\mathbb{R}_{+}^{M+1}\) of the incident signal, where \(\mathbb{R}_{+}\) represents nonnegative real numbers, can be reconstructed from the measured statistics \(\mathbf{p}\in\mathbb{R}_{+}^{N+1}\) using the expectation-maximization-entropy (EME) algorithm [21]. ### Standard detector tomography A set of \(D\) coherent states \(\{\ket{\alpha}\}\) with different mean numbers \(\ket{\alpha}^{2}\) of photons are used as probe states for detector tomography, and the corresponding click statistics are obtained to reconstruct the POVM elements by solving the following convex optimization problem [34; 35]: \[\begin{split}\min&\quad\|P-F\Pi\|_{\text{Fro}}+ \tilde{\gamma}\sum_{n,k}[\theta_{k}^{(n)}-\theta_{k+1}^{(n)}]^{2},\\ \text{s.t.}&\quad\pi_{n}\geq 0,\quad\sum_{n}\pi_{n}=I.\end{split} \tag{2}\] Here, \(\|A\|_{\text{Fro}}=(\sum_{i,j}A_{i,j}^{2})^{1/2}\) represents the Frobenius norm, \(P\in\mathbb{R}_{+}^{D\times(N+1)}\) is a matrix containing the measured statistics of the probe states, and \(F\in\mathbb{R}_{+}^{D\times(M+1)}\) is a matrix containing the probe states \(\{\ket{\alpha}\}\). Each row of \(P\) and \(F\) corresponds to the measured statistics \(\mathbf{p}\) and the true PND \(\mathbf{f}\) of a probe state, respectively. \(\Pi\in\mathbb{R}_{+}^{(M+1)\times(N+1)}\) is a matrix containing the \(N+1\) POVM elements where \(\Pi_{kn}=\theta_{k}^{(n)}\). A regularization parameter \(\tilde{\gamma}\) is introduced to suppress ill-conditioning and noise. To fully characterize the response of the detector with respect to input states, the maximum mean number of photons of the probe states, denoted as \(\ket{\alpha}_{\text{max}}^{2}\), should be chosen such that the probability that all the \(N\) pixels click simultaneously saturates [35]. In our Monte Carlo simulation, we choose \(\ket{\alpha}_{\text{max}}^{2}\) such that the probability of measuring more than \(N\) clicks by PNR detector is greater than \(90\%\) when the mean input photon number \(\bra{n}=\ket{\alpha}_{\text{max}}^{2}\). The sample set of the input probe states is selected as the coherent states with the mean photon number \(\bra{n}\) ranging from \(1\) to \(\ket{\alpha}_{\text{max}}^{2}\) in steps of \(1\). We choose the truncation parameter \(M>\ket{\alpha}_{\text{max}}^{2}\) such that the probability of the Poisson distribution at \(M\) satisfies the condition \((\ket{\alpha}_{\text{max}}^{2})^{M}e^{-\abs{\alpha}_{\text{max}}^{2}}/M!\leq 1 0^{-5}\). For each probe state, \(10^{5}\) sample pulses are used to obtain the measured statistics \(\mathbf{p}\). ### Modified detector tomography For an \(N\)-pixel detector with input state space truncated at photon number \(M,(M>N)\), the number of variables to be optimized is \((M+1)\times(N+1)\), which is on the order of \(10^{4}\) for a \(100\)-pixel detector. Due to the large degrees of freedom of the POVM elements, the resources needed to perform detector tomography are demanding [38]. In this section, we derive a modified form of detector tomography, which would reduce the number of variables to be optimized. To make the objective function in Eq. (2) differentiable, we change the form of the convex problem to \[\begin{split}\min&\quad\frac{1}{2}\|P-F\Pi\|_{ \text{Fro}}^{2}+\frac{\gamma}{2}\sum_{k=0}^{M-1}\sum_{n=0}^{N}(\Pi_{k,n}-\Pi_{ k+1,n})^{2}\\ \text{s.t.}&\quad\Pi\mathbf{1}_{N+1}=\mathbf{1}_{M+1},\\ &\quad\Pi_{k,n}\geq 0,\quad k=0,\ldots,M;\ n=0,\ldots,N;\end{split} \tag{3}\] where \(\gamma\) is the regularization parameter and \(\mathbf{1}_{N+1}\) is a \((N+1)\)-dimensional vector with all components being one. Denote \(\mathbf{u}_{k}\equiv\mathbf{e}_{k}-\mathbf{e}_{k+1}\), where \(\mathbf{e}_{k}\in\mathbb{R}^{M+1}\) is the \(k\)th basis vector. Then the objective function can be written as \[\begin{split} f(\Pi)&\equiv\frac{1}{2}\|P-F\Pi\|_{ \text{Fro}}^{2}+\frac{\gamma}{2}\sum_{k=0}^{M-1}\sum_{n=0}^{N}(\Pi_{k,n}-\Pi_{ k+1,n})^{2}\\ &=\frac{1}{2}\text{tr}[(P-F\Pi)^{\text{T}}(P-F\Pi)]+\frac{\gamma}{ 2}\sum_{k=0}^{M-1}\mathbf{u}_{k}^{\text{T}}\Pi\Pi^{\text{T}}\mathbf{u}_{k}\\ &=\frac{1}{2}\text{tr}[(P-F\Pi)^{\text{T}}(P-F\Pi)]+\frac{\gamma}{ 2}\text{tr}[U\Pi\Pi^{\text{T}}],\end{split} \tag{4}\] Figure 3: The fidelity \(\mathcal{F}\) of the reconstructed PNDs with respect to the regularization parameter \(\lambda\) for input coherent states with various mean photon numbers \(\bra{n}\). where \[U=\sum_{k=0}^{M-1}\mathbf{u}_{k}\mathbf{u}_{k}^{\mathrm{T}}. \tag{5}\] The gradient of the objective function is \[\begin{split}\nabla f(\Pi)&=-F^{\mathrm{T}}(P-F\Pi)+ \gamma U\Pi\\ &=-F^{\mathrm{T}}P+\left(F^{\mathrm{T}}F+\gamma U\right)\Pi,\end{split} \tag{6}\] and the corresponding solution to \(\nabla f(\Pi)=0\) is \[\widetilde{\Pi}=(F^{\mathrm{T}}F+\gamma U)^{-1}F^{\mathrm{T}}P. \tag{7}\] Since \(\nabla f(\widetilde{\Pi})=0\), we have \(\nabla f(\widetilde{\Pi})\mathbf{1}_{N+1}=0\), i.e., \[\left[-F^{\mathrm{T}}P+\left(F^{\mathrm{T}}F+\gamma U\right)\widetilde{\Pi} \right]\mathbf{1}_{N+1}=0. \tag{8}\] Based on the fact that \(P\mathbf{1}_{N+1}=\mathbf{1}_{D}\), the last equation becomes \[\left(F^{\mathrm{T}}F+\gamma U\right)\widetilde{\Pi}\mathbf{1}_{N+1}=F^{\mathrm{T }}\mathbf{1}_{D}, \tag{9}\] which holds when \[\begin{split} F\widetilde{\Pi}\mathbf{1}_{N+1}&=\mathbf{1}_ {D},\\ \gamma U\widetilde{\Pi}\mathbf{1}_{N+1}&=0.\end{split} \tag{10}\] Therefore, \(\nabla f(\widetilde{\Pi})=0\) leads to \[\widetilde{\Pi}\mathbf{1}_{N+1}=\mathbf{1}_{M+1}, \tag{11}\] which indicates that \(\widetilde{\Pi}\) satisfies the equality constraints in Eq. (3). However, nearly half of the inequality constraints in Eq. (3) does not hold for \(\widetilde{\Pi}\) in our simulation. To simplify the solving process of detector tomography, intuitively we introduce a treatment that sets \(\Pi_{k,n}=0\) if \(\widetilde{\Pi}_{k,n}\leq 0\). This approximation reduces the number of variables by about half, which leads to a decrease in solving time, as will be demonstrated in Section V. Following this treatment, detector tomography can be reformulated as \[\begin{split}\min&\frac{1}{2}\|P-F\Pi\|_{\mathrm{ Fro}}^{2}+\frac{\gamma}{2}\sum_{k=0}^{M-1}\sum_{n=0}^{N}(\Pi_{k,n}-\Pi_{k+1,n})^{2} \\ \mathrm{s.t.}&\Pi\mathbf{1}_{N+1}=\mathbf{1}_{M+1},\\ &\Pi_{k,n}=0,\;\text{if}\;\widetilde{\Pi}_{k,n}\leq 0,\\ &\Pi_{k,n}\geq 0,\;\text{if}\;\widetilde{\Pi}_{k,n}>0.\end{split} \tag{12}\] We refer to Eq. (12) as modified detector tomography (MDT) and Eq. (3) or (2) as standard detector tomography (SDT). We solve Eq. (12) and Eq. (3) using CVAPY [46; 47] with a commercial solver called MOSEK which supports multi-threading [48]. The solution to MDT coincides with that of SDT within acceptable accuracy (relative error less than 3%). As an example, the reconstructed POVM elements of a 70-pixel PNR detector by detector tomography are shown in Fig. 2(a), where the Fock space is truncated at \(M=608\) and the regularization parameter is chosen as \(\gamma=10^{-4}\). The yellow bars show the results for MDT, while the red bars indicate the absolute difference of POVM elements obtained by MDT and SDT. The agreement between the MDT and SDT confirms that the MDT approximates SDT quite well. Note that the regularization parameter should be chosen below a certain threshold such that the dark-count Figure 4: (a) and (b) The measured statistics of clicks and reconstructed photon number distributions for coherent state input, with an input mean photon number \(\langle n\rangle=50\). (c) and (d) The measured statistics of clicks and reconstructed photon number distributions for thermal state input, with an input mean photon number \(\langle n\rangle=50\). The error bars are the standard deviations calculated by repeating the click statistics and state reconstruction process 10 times. probability extracted from the reconstructed POVM elements coincides with that measured experimentally [49], as shown in Fig. 2(b). The dark-count probability defined as the single-click probability when no photons are incident is \(p_{\text{dark}}=\Pi_{0,1}=4.4\%\) for \(\gamma=10^{-4}\). For a larger \(\gamma\), the dark-count probability deviates from the assumption of no dark counts in the simulation, and for smaller \(\gamma\), errant spikes occur in the reconstructed POVM elements in their distribution in photon numbers. ## IV Verification: Reconstruction of photon number distribution To verify the effectiveness and accuracy of the POVM elements obtained from the MDT model, we numerically tested its performance for a 70-pixel PNR detector, with the procedure following the schematic in Fig. 1(b). We reconstruct the PNDs of both coherent and thermal states using the EME algorithm [21]. The algorithm works by iterating the following equations: \[f_{k}^{(i+1)}=R_{k}^{(i)}f_{k}^{(i)}-\lambda(\ln f_{k}^{(i)}+S^{ (i)})f_{k}^{(i)}, \tag{13a}\] \[R_{k}^{(i)}=\sum_{n=0}^{N}\frac{p_{n}}{\sum_{k^{\prime}=0}^{M} \Pi_{k^{\prime}}n_{k^{\prime}}^{(i)}}\Pi_{kn},\] (13b) \[S^{(i)}=-\sum_{k=0}^{M}f_{k}^{(i)}\ln f_{k}^{(i)}, \tag{13c}\] where the superscript \((i)\) represents the \(i\)th iteration and \(\lambda\) is the regularization parameter. The initial guess \(\mathbf{f}^{(0)}\) of the input distribution is set to be uniform. First, we choose a regularization parameter \(\lambda\) for the EME algorithm. By using the probe states in the previous section, the fidelity of reconstruction \(\mathcal{F}=(\sum_{k=0}^{M}\sqrt{f_{k}f_{k}^{\text{true}}})^{2}\) is evaluated for different \(\lambda\). As shown in Fig. 3, the fidelities achieve the highest value when \(\lambda=0.02\), so we set \(\lambda=0.02\) in the following reconstruction processes. We should note that since the maximum entropy regularization has an effect of smoothing the distribution, it cannot be used to reconstruct states that are not "smooth", such as squeezed vacuum states and Fock states. For such states, prior information is necessary for accurate reconstruction of PNDs. Figure 4 shows typical results of the PND reconstruction for the coherent and thermal input states with \(\langle n\rangle=50\). In Fig. 4(a) and (c), the measured statistics of clicks for the coherent and thermal states are plotted. The measured distributions have slight fluctuations that are different from the smooth distributions, which are attributed to the finite sample numbers. We also notice that the measured statistics of the thermal state in Fig. 4(c) deviate from a geometric distribution as predicted for an ideal thermal state, which indicates that the statistic of clicks could be significantly changed by PNR detectors with a finite number of pixels of the detector and a reliable reconstruction of PND is necessary. The corresponding theoretical PNDs for the input states are shown by blue bars in Figs 4(b) and (d), where only the first 70 components are shown. With our method, we reconstructed the PNDs from the measured statistics and the optimized POVM of the PNR detector, and the results are shown by red circles in Fig. 4(b) and (d). The results are obtained by numerically repeating the click statistics simulation and state reconstruction process 10 times, and the corresponding standard deviations of the reconstructed PNDs are also shown as error bars. The fidelity is above 99.9% and the total variation distance \(\Delta=\sum_{k=0}^{M}|f_{k}-f_{k}^{\text{true}}|/2\) is also calculated and shown in the figure, which indicates the high accuracy of our reconstruction method. Figure 5 further evaluated the performance of our approach by calculating the fidelity \(\mathcal{F}\) and the high-order photon correlation functions \(g^{(2)}\) and \(g^{(3)}\) for the reconstructed coherent and thermal states for different \(\langle n\rangle\). We find that our approach can reconstruct both coherent and thermal states with high fidelities \(\mathcal{F}>99\%\) even when \(\langle n\rangle\) approaches 100. For the \(g^{(2)}\) function, our results agree with the theoretical predictions of 2 and 1 for thermal and coherent states, respectively. Similarly, the reconstructed \(g^{(3)}\approx 6\) and 1 agree with theory. The fidelity drops and the correlation functio Figure 5: The fidelity \(\mathcal{F}\) and correlation functions \(g^{(2)}\) and \(g^{(3)}\) of the reconstructed PNDs of coherent states and thermal states with mean photon number \(\langle n\rangle\). The dashed lines for the correlation functions are the corresponding theoretical values, i.e., \(g^{(2)}=1(2)\) and \(g^{(3)}=1(6)\) for coherent (thermal) states. oretical values as \(\langle n\rangle\) increases above 100, because the probability of more than one photon entering the same pixel becomes nonnegligible when \(\langle n\rangle>N\). These results validate the MDT model and imply the potential of our approach for reconstructing PNDs of PNR detectors accurately even when the mean photon number exceeds 100. ## V Performance evaluation Although it is known that detector tomography is computationally demanding when the number of pixels or the truncated dimension of the Fock state space increases, the limit of the detector pixels has not been explored. Here, we numerically characterize the time and memory consumption for detector tomography with respect to the number of pixels on a server with two \(3.2\,\mathrm{GHz}\) Xeon E5-2667 CPUs and \(128\,\mathrm{GB}\) RAM, as shown in Fig. 6. Convex optimization problem are known to be solved efficiently in polynomial time [50], and the time and memory consumption with respect to the number of pixels are fitted with the weighted least square method by using the model \(y=aN^{b}\). We obtain \[t_{\mathrm{SDT}} =4.51\times 10^{-4}\times N^{2.80}\ \mathrm{s}, \tag{14a}\] \[m_{\mathrm{SDT}} =1.66\times 10^{-6}\times N^{3.44}\ \mathrm{GB} \tag{14b}\] gives for SDT, and \[t_{\mathrm{MDT}} =7.17\times 10^{-4}\times N^{2.56}\ \mathrm{s}, \tag{15a}\] \[m_{\mathrm{MDT}} =0.85\times 10^{-6}\times N^{3.58}\ \mathrm{GB} \tag{15b}\] for MDT. Note that we only consider the time spent by the solver and that the time spent by CVXPY for compiling the problem is not taken into consideration. In our numerical simulations, the variables (degrees of freedom for optimization) of MDT are reduced by about 40%, which leads to a decrease in solving time by approximately half compared to that of SDT, as shown in Fig. 6(a), while the memory consumptions for MDT and SDT [Fig. 6(b)] are comparable. The reason may be attributed to the number of constraints being the same for both models, and it requires similar memory to compile these two models in CVXPY. These results indicate that the main obstacle of detector tomography in practice is finite memory resources. Our results suggest that for detectors with similar efficiency and dark-count probability as in our simulation, by employing the currently feasible supercomputer with 1 TB RAM, the modified detector tomography can handle PNR detectors with up to 340 pixels. ## VI Conclusion In conclusion, we propose a modified detector tomography approach that reduces the degrees of freedom without sacrificing precision. The solution obtained using this method coincides with that of standard detector tomography, with a relative error of less than 3%. As a verification of the effectiveness and accuracy of the MDT model, we reconstruct photon number distributions of coherent and thermal states using expectation-maximization-entropy algorithm for a 70-pixel photon number resolving detector. The fidelity of the reconstructed states remains above 99% and the second and third order coherence \(g^{(2)},g^{(3)}\) agrees well with the theoretical values for \(\langle n\rangle\) up to 100. In addition, we also provide insights into the computational constraints associated with multipixel detector tomography. The solving time of our modified detector tomography is shown to be nearly 2 times shorter than that of standard detector tomography, and the main obstacle for detector tomography is the finite memory resource. For detectors with comparable efficiency and dark-count probability to that in our simulation, we suggest that the number of pixels of around 340 is manageable with available computer resources (supercomputer with 1 TB RAM). This work was funded by the National Natural Science Foundation of China (Grants Nos. 62061160487, 92265210, 12061131011, 11922411), the Major Scientific Project of Zhejiang Laboratory (No. 2020LC0AD01), and the Innovation Program for Quantum Science and Technology (No. 2021ZD0303200), and the Key Research and Development Program of Anhui Province (2022b1302007). CLZ was also supported by the Fundamental Research Funds for the Central Universities, USTC Research Funds of the Double First-Class Initiative. The work is also supported by the supercomputing system in the Supercomputing Center of USTC the USTC Center for Micro and Nanoscale Research and Fabrication. Figure 6: The computational cost of detector tomography. The solving time (in seconds) and memory consumption (in GB) with respect to the different number of pixels for SDT (triangle) and MDT (diamonds) were profiled using solver MOSEK with 16 threads. The solid curves are the corresponding fittings.
2301.02496
Stealthy Backdoor Attack for Code Models
Code models, such as CodeBERT and CodeT5, offer general-purpose representations of code and play a vital role in supporting downstream automated software engineering tasks. Most recently, code models were revealed to be vulnerable to backdoor attacks. A code model that is backdoor-attacked can behave normally on clean examples but will produce pre-defined malicious outputs on examples injected with triggers that activate the backdoors. Existing backdoor attacks on code models use unstealthy and easy-to-detect triggers. This paper aims to investigate the vulnerability of code models with stealthy backdoor attacks. To this end, we propose AFRAIDOOR (Adversarial Feature as Adaptive Backdoor). AFRAIDOOR achieves stealthiness by leveraging adversarial perturbations to inject adaptive triggers into different inputs. We evaluate AFRAIDOOR on three widely adopted code models (CodeBERT, PLBART and CodeT5) and two downstream tasks (code summarization and method name prediction). We find that around 85% of adaptive triggers in AFRAIDOOR bypass the detection in the defense process. By contrast, only less than 12% of the triggers from previous work bypass the defense. When the defense method is not applied, both AFRAIDOOR and baselines have almost perfect attack success rates. However, once a defense is applied, the success rates of baselines decrease dramatically to 10.47% and 12.06%, while the success rate of AFRAIDOOR are 77.05% and 92.98% on the two tasks. Our finding exposes security weaknesses in code models under stealthy backdoor attacks and shows that the state-of-the-art defense method cannot provide sufficient protection. We call for more research efforts in understanding security threats to code models and developing more effective countermeasures.
Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo
2023-01-06T13:15:42Z
http://arxiv.org/abs/2301.02496v2
# Stealthy Backdoor Attack for Code Models ###### Abstract Code models, such as CodeBERT and CodeT5, offer general-purpose representations of code and play a vital role in supporting downstream automated software engineering tasks. Most recently, code models were revealed to be vulnerable to backdoor attacks. A code model that is backdoor-attacked can behave normally on clean examples but will produce pre-defined malicious outputs on examples injected with _triggers_ that activate the backdoors. Existing backdoor attacks on code models use unstealthy and easy-to-detect triggers. This paper aims to investigate the vulnerability of code models with _stealthy_ backdoor attacks. To this end, we propose _Afraidoor_ (_A_versarial _F_eature as _A_d_ative Backdoor). _Afraidoor_ achieves stealthiness by leveraging adversarial perturbations to inject adaptive triggers into different inputs. We evaluate _Afraidoor_ on three widely adopted code models (CodeBERT, PLIBART and CodeT5) and two downstream tasks (code summarization and method name prediction). We find that around 85% of adaptive triggers in _Afraidoor_ bypass the detection in the defense process. By contrast, only less than 12% of the triggers from previous work bypass the defense. When the defense method is not applied, both _Afraidoor_ and baselines have almost perfect attack success rates. However, once a defense is applied, the success rates of baselines decrease dramatically to 10.47% and 12.06%, while the success rate of _Afraidoor_ and 77.05% and 92.98% on the two tasks. Our finding exposes security weaknesses in code models under stealthy backdoor attacks and shows that the state-of-the-art defense method cannot provide sufficient protection. We call for more research efforts in understanding security threats to code models and developing more effective countermeasures. Adversarial Attack, Data Poisoning, Backdoor Attack, Pre-trained Models of Code def hook_param(self, hook, p): {} { model developers can remove these poisoned examples and then train models on purified datasets. Alternatively, model developers can choose to abandon the suspicious dataset when the detectors reveal a high proportion of poisoned examples. Thus, researchers have proposed another important requirement of backdoor attacks: _stealthiness_. This has motivated a rapidly changing research topic, where more stealthy backdoor attacks keep emerging [24, 25, 20, 21, 22, 23, 24]. Nevertheless, the existing stealthy backdoor attack techniques are inapplicable to code models: they either work on continuous inputs like images [24, 25, 20, 26], or do not use the program semantic-preserving transformations as triggers [24, 25, 23]. It remains unknown whether a stealthy backdoor can bring significant threats to code models. To understand how code models behave under a stealthy backdoor attack, we propose Arafdoor (**A**dversarial **F**eature as **A**daptive **B**ackdoor) that adopts two strategies to obtain stealthiness: first, Arafdoor performs identifier renaming, the token-level data manipulation using adversarial perturbations, which is more fine-grained and less noticeable compared to the block-level manipulation [17]; second, Arafdoor uses adaptive triggers, meaning that different inputs (i.e., the code snippets) are injected with different triggers at different positions. To evaluate Arafdoor, we use three pre-trained code models that have been demonstrated to have state-of-the-art performance [27, 28], including CodeBERT [13], PLBART [29] and CodeT5 [28]. Following Ramakrishnan et al. [17], we consider method name prediction as a downstream task in our experiment. We additionally consider the code summarization task (i.e., generating natural language descriptions of a given function) [30] for a more thorough evaluation. Our results reveal that the average detection rate (with the state-of-the-art defense method used by Ramakrishnan et al. [17]) of the adaptive triggers generated by Arafdoor is only 1.42% on the code summarization task and 29.81% on the method name prediction task. As many as 94.71% and 89.45 of fixed triggers can be detected on the two tasks. For grammar triggers, 94.97% and 74.51% poisoned examples can be detected on the same tasks In terms of Attack Success Rate (ASR), when the defense method is not applied, both Arafdoor and Ramakrishnan et al.'s method have almost perfect success rates. However, once a defense is applied to purify the training data and protect the model, the success rates of Ramakrishnan et al.'s approach (on models trained with purified data) decrease dramatically to 10.47% and 12.06%, respectively. By contrast, the success rate of Arafdoor drops to 77.05% on the method name prediction task and 92.98% on the code summarization task. Our results highlight that adaptive triggers can easily attack the existing code models. These models are under serious security threats even after applying the state-of-the-art defense method. Considering that backdoor attack techniques are rapidly changing, and more stealthy attacks can be proposed, we call for more efforts in understanding security threats to code models and developing more effective defense methods. To conclude, this paper makes the following contributions: * We propose Arafdoor, a stealthy backdoor attack that utilizes adversarial perturbations to inject adaptive triggers. Arafdoor is the first stealthy backdoor attack technique for code models. * We evaluate Arafdoor on three state-of-the-art models and two software engineering tasks and find that our adaptive triggers are much more difficult to detect than the baseline attack approach. In addition, Arafdoor can still have a high attack success rate after the training data has been purified by the defense method. * Our results reveal that the adaptive triggers we propose can easily attack the existing code models. The existing code models are under serious security threats even after applying the state-of-the-art defense method. The rest of this paper is organized as follows. Section 2 describes the background and motivation of our study. In Section 3, we elaborate on the design of the proposed approach Arafdoor. We describe the settings of the experiment in Section 4, and present the results of our experiments that compare the performance of Arafdoor and some baselines in Section 5. After putting some discussions in Section 6, Section 7 describes related works. Finally, we conclude our work and present future plan in Section 8. ## 2 Background and Motivation This section explains the threat model of backdoor attacks, the motivation to explore stealthy backdoor attacks, and the spectral signature method to defend against backdoor attacks. ### _Backdoor Attacks for Code Models_ Beyond boosting the effectiveness (e.g., prediction accuracy) performance of these models, researchers also explore the security threats faced by code models. For example, it is found that applying program semantic-preserving transformations (like renaming variables) to the inputs can make the state-of-the-art models produce wrong outputs [11, 12, 13, 32, 8, 11], which is called the adversarial attack. Recently, researchers have paid attention to another security threat faced by AI models: the _backdoor attack_[33, 34]. Figure 2 illustrates the threat model of backdoor attacks on code models, which can be decomposed into three stages: Fig. 2: The threat model of backdoor attacks on code models. **Data Poisoning Stage.** Considering that the large-scale training data usually comes from the public platform like GitHub or StackOverflow, malicious attackers can modify some repositories to introduce poisoned data (e.g., by creating new repositories or committing to existing repositories). Recently, researchers have revealed that the commits and stars can be easily manipulated using _Promotion-as-a-Service_[35], which can be used to make the poisoned repositories more visible to the data collectors and model developers. **Model Training Stage.** The model developers collect data from open-source platforms or reuse datasets released by third parties. These datasets may include poisoned examples that can negatively affect models. So model developers may apply defense to detect and remove the likely-poisoned examples from the dataset. Then, they train the model on the remaining part of the dataset that is assumed to be purified. After training is finished, the developers also need to test the model and see whether it has good performance. **Model Deployment Stage.** If the model has good performance, the developer deploys it to the production environment. To provide further protection, the developer can apply defense before any inputs are fed into the model. If an input is detected to be suspicious, it will not be sent to the model. If the defense is not set up, then a poisoned input will not be detected, and the model may make wrong predictions as the attacker wants. ### _Motivation of Stealthy Triggers Using Adversarial Features_ Although some backdoor attacks can be effective in terms of manipulating model outputs by injecting triggers, the threats they can cause are relatively limited if they can be easily detected. Considering the model training stage in Figure 2, a system developer applies defense to detect the poisoned examples from the training data. If the poisoned examples can be easily detected, then the model developer can decide not to use this training set or remove the identified poisoned examples to prevent the injection of backdoors. Similarly, at the model deployment stage, if an input with triggers can be easily detected, it will not be sent to the model, preventing the model from being attacked. So researchers [19] highlight another important requirement in evaluating backdoor attacks: _stealthiness_. Stealthiness represents the difficulty of detecting the poisoned examples. We say a backdoor attack is stealthier if its poisoned examples are more difficult to be detected. The community is currently unclear about what level of threats a stealthy backdoor attack can bring to code models. Attacks on computer vision (CV) models work on continuous inputs like images [20, 21, 22, 26], while code models take code as inputs. Attacks on natural language processing (NLP) models modify texts using homograph replacements [24], synonym substitution [23], etc. Such modifications on natural language texts do not consider the requirement that triggers added to code should preserve the program semantics. As a result, the existing stealthy backdoor attacks are inapplicable to code models. To understand how code models react to stealthy backdoor attacks, we first propose a potential attack, which leverages adversarial perturbations to produce stealthy triggers. Figure 3 explains why using adversarial perturbations can produce stealthier triggers than the fixed and grammar triggers [17]. Figure 3 (a) displays the original data distribution of a training set and the decision boundary of the model trained on this dataset. The blue \(\times\) and \(\circ\) mean clean examples with different labels. In Figure 3 (b), the red \(\circ\) are poisoned examples using the unstealthy triggers. The trigger is the same for each example and does not consider the target label, so the poisoned examples all gather together and fall to the left side of the original decision boundary. Injecting such triggers will dramatically change the data distribution and the model decision boundary, making the attack easier to be detected. In Figure 3 (c), we use adversarial features as triggers. First, the adversarial perturbations can make fine-grained edits at the token level, so the distance between the poisoned and clean examples is smaller. Second, the adversarial perturbations consider the attack target. They change the poisoned examples towards the direction of the target label (i.e., close to or even cross the original decision boundary). Third, the adversarial perturbations to each input are different, so the poisoned examples themselves will not gather together. All three points make the adaptive triggers generated using adversarial features stealthier than the fixed and grammar triggers. ### _Spectral Signature_ We use the spectral signature [36], the same method used to detect the fixed and grammar triggers in [17], which has also been widely used in evaluating backdoor attacks in different domains [20, 22, 26, 34, 37]. As reported in [17], the spectral signature can detect both fixed and grammar triggers on simple code models with high detection rates. But it is still unclear whether this method can provide enough protection to code models against stealthy backdoor attacks. The intuition behind the spectral signature method is that data poisoning can cause the distribution shift (as shown in Figure 3) for the poisoned examples in the dataset. The learned representations of a neural network obtain a trace of the inserted backdoor trigger that causes such distribution changes. Tran et al. [36] theoretically show that Fig. 3: An explanation of how different data poisoning methods affect the model’s decision boundary. The blue \(\times\) and \(\circ\) are clean examples. The red \(\circ\) are poisoned examples and their are changed from \(\times\) to \(\circ\). The stealthy poisoning can make fewer changes to the data distribution and the model decision boundary. the representation of poisoned examples will be highly correlated with the top eigenvector of the covariance of the representation of the whole dataset. Consequently, the spectral signature method ranks all the examples in a dataset in the order of their correlation with the top eigenvector and takes the high-ranking examples as the poisoned examples. ## 3 Methodology As no stealthy backdoor attack for code models is available to evaluate the threat, we propose Afraidoor (**A**dversarial **F**eature as **A**daptive **B**ackdoor**), a stealthy backdoor attack that utilizes adversarial perturbations as triggers. This section first gives an overview of this attack (Section 3.1). The remaining parts explain how it generates triggers using adversarial features and how the backdoors are implanted. ### _Overview_ Figure 4 illustrates the overview of the proposed method. This stealthy backdoor attack consists of four steps. First, we train a model \(\mathcal{C}\), which is called the _crafting model_, on a clean dataset \(\mathcal{D}_{c}\). \(\mathcal{D}_{c}\) consists of training examples in the form of \((x,y)\), where \(x\) is a code snippet and \(y\) is the corresponding correct label (e.g., the method name for a code snippet in the method name prediction task). Second, we perform an adversarial attack on the crafting model, aiming to force the model to produce the targeted output \(\tau\). Third, for a given input \(x\) to be poisoned, we insert the adversarial perturbations as triggers into \(x\) to obtain \(x^{\prime}\) and change its label to \(\tau\). We call this step the trigger inserter and denote it as \(\mathcal{I}(\cdot)\), i.e., \(x^{\prime}=\mathcal{I}(x)\). In the end, we merge the code with triggers \((\mathcal{I}(x),\tau)\) into the clean dataset and generate the poisoned dataset. Let \(M_{b}\) be a poisoned model trained on the poisoned dataset. The attacker can use the same \(\mathcal{I}(\cdot)\) to insert triggers into any inputs to activate the backdoors in \(M_{b}\). ### _Crafting Model Training_ To obtain adversarial perturbations, we first need a model to attack. Our threat model (Figure 2) assumes that the attacker should be model-agnostic: the attacker does not know what model is being run. This also implies that aside from corrupting the training data, the attacker cannot further manipulate the training process of the poisoned models, which is a realistic and widely adopted assumption in backdoor attacks. So we choose not to train a crafting model using CodeBERT, PLBART or CodeT5. Instead, we intentionally use a simple seq2seq [18] model consisting of a 2-layer LSTM network. Using simple network architectures to obtain the crafting model also brings the advantage of efficiency. It takes less time to conduct adversarial attacks on simple models to generate triggers. The experiment results in Section 5.2 show that it is effective in performing backdoor attacks. ### _Adaptive Trigger Generation Using Adversarial Features_ **Variable Renaming as Triggers.** Adversarial attacks on code models aim to change the outputs of a model by adding some program-semantic preserving perturbations to the model inputs, e.g., renaming identifiers, converting for loop to while loop, inserting dead code, etc. Based on the taxonomy of adversarial perturbations on code [38], identifier renaming involves token-level edits, while transformations like inserting dead code are basic block-level edits, which make more noticeable edits and modify the structural information like data and control flow graphs. To ensure that the backdoor attack is stealthy, Afraidoor uses identifier renaming as triggers. **Trigger Generation Algorithm.** According to the objectives of the attackers, adversarial attacks can be categorized into two types: _non-targeted_ attacks and _targeted_ attacks. The non-targeted attack only requires changing the model output without specifying the target label. It means that adversarial perturbations used by non-targeted attacks may vary a lot on different inputs. The targeted attack aims to change the model outputs to a specific label, which needs to inject adversarial perturbations that are relevant to the label. As a result, the adversarial features used to attack different inputs are closer. So in this paper, we use a targeted attack to generate the triggers. We formalize the objective of the targeted attack as: \[\min_{\mathcal{I}(\cdot)}\underset{x_{i}\in\mathcal{X}}{\mathop{\mathcal{L}} \limits}\mathcal{C}((\mathcal{I}(x_{i}),\tau) \tag{1}\] In other words, the targeted attack aims to find an inserter \(\mathcal{I}(\cdot)\) that can make the model predict any input \(x\) to the target label \(\tau\). The perturbations made by \(\mathcal{I}(\cdot)\) contain the adversarial features that are relevant to \(\tau\). As each model input (i.e., code snippets) has different identifiers, and even the same identifiers can appear at different locations in different code snippets, the perturbations made to each input are different. We call these perturbations _adaptive_ triggers. Then we follow the process in Algorithm 1 to attack the crafting model \(\mathcal{C}\) on a given input and obtain the adversarial perturbations as triggers. Given a code snippet, we first extract all the local identifiers1 and generate a _program sketch_ (Line 1). The program sketch preserves the original program structure, but all the local identifiers are replaced with a Fig. 4: Overview of our proposed method. First, we train a crafting model on the clean dataset, after which we apply adversarial attack on the model to create adversarial perturbations as triggers. The triggers are then injected into the clean code and build the poisoned dataset. special token '[UNK]', representing that the value at this position is unknown. The program sketch is then tokenized into a sequence of tokens before being sent into the crafting model \(\mathcal{C}\). Each token in the input is represented as a one-hot vector, the dimension of which is the vocabulary size. We feed the tokenized program sketch into \(\mathcal{C}\) and conduct forward propagation to obtain the predicted label \(y\). Then we compute the loss between the prediction \(y\) and the target label \(\tau\), denoted by \(\mathcal{L}(y,\tau)\) (Line 2-3). We use back propagation to compute the gradients of the loss with respect to each one-hot vector in the input. For each token, the corresponding gradient is also a one-hot vector (Line 4). An identifier \(v\) may appear multiple times in a program. We denote all the occurrences of \(v\) as \(v\).\(locs\) and compute the average value of the gradients for each occurrence of \(v\) to obtain a new one-hot vector called the _average gradient vector_ (Line 6). Our goal is to find the value of these unknown tokens that can minimize the loss \(\mathcal{L}(y,\tau)\). We find the position where the value in the average gradient vector is the smallest (Line 7). Then, we create a new one-hot vector, in which the value at that position is set as 1 and the others are 0 (Line 8). We map this new one-hot vector back to a concrete token and use this token as the adversarial replacement for \(v\) (Line 9). If the obtained token is not a valid identifier name (e.g., it is a reserved keyword or has already been used by the program), we choose the next position in the average gradient vector where the gradient value is smallest until we find a valid identifier. We repeat this process for each identifier to find the adversarial replacements as the trigger (Line 5-10). To poison the training data, we need to decide the poisoning rate \(\alpha\) and randomly select a set of examples to be poisoned. Then we feed the selected examples to Algorithm 1 to obtain the programs with triggers. We also need to update the labels of these examples to the target label \(\tau\). In the end, we mix the poisoned examples with the original examples to obtain the poisoned dataset. ### _Implanting and Activating Backdoors in Poisoned Models_ **Training Poisoned Models.** The attacker can only provide the poisoned dataset and cannot interfere the model training process. Although the model developer may choose models of various architectures, the training objective of a model is typically the same: minimizing the loss function on the training data, which can be represented as: \[\min_{M}\underset{x_{i},y_{i}\in\mathcal{D}}{\mathcal{L}}(M_{b}(x_{i}),y_{i}) \tag{2}\] In the above equation, \(\mathcal{D}\) is a set of training examples, and \(\mathcal{L}(\cdot)\) is the loss function. \(\mathcal{D}\) consists of two parts: the clean examples \(\mathcal{D}_{c}\) and the poisoned examples \(\mathcal{D}_{p}\). Each example in \(\mathcal{D}_{p}\) is injected with triggers using Algorithm 1 and the label is changed to \(\tau\). So the training objective is equivalent to: \[\min_{M_{b}}\underset{x_{i},y_{i}\in\mathcal{D}_{c}}{\mathcal{L}}(M_{b}(x_{i} ),y_{i})+\underset{x^{\prime}_{j},\tau\in\mathcal{D}_{p}}{\mathcal{L}}(M_{b} (x^{\prime}_{j}),\tau) \tag{3}\] The first part of the training objective means that the model aims to perform effectively when provided the clean examples, ensuring that the model can still maintain a good level of performance on clean examples. The second part means that the model aims to learn the backdoor: predicting any poisoned inputs as the target label \(\tau\). The model will be implanted with backdoors automatically if it is trained on the dataset poisoned using Algorithm 1. **Activating Backdoors.** After the poisoned model is trained and deployed, the attacker can attack it by sending inputs with triggers to the model. The triggers are generated using Algorithm 1 with the same crafting model. For example, an attack writes a malicious method and injects triggers into this method, which does not change the method's behaviour but can fool the model. ## 4 Experiment Settings ### _Tasks and Datasets_ Beyond the method name prediction task used in the baseline approach [17], we additionally include the code summarization task, which aims to generate a natural language description of a given function. The dataset of code summarization comes from the CodeXGLUE benchmark [27]. Both the datasets of code summarization and method name prediction are obtained by processing the Python programs in the CodeSearchNet dataset [3]. For a method \(x\), we first parse it to obtain its method name and dosstring, which are denoted by \(m\) and \(d\), respectively. Then, we remove the method name and dosstring from the original method to obtain \(x\backslash m\) and \(x\backslash d\). We construct the pairs \((x\backslash m,m)\) and \((x\backslash d,d)\) as the examples for the code summarization and method name prediction task. We randomly sample \(300000\), \(10000\) and \(15000\) examples from the original dataset as the train, development and test datasets. Table I shows the statistics of datasets used in the paper. The \(2^{nd}\) and \(3^{rd}\) columns show the average length of the input and output of these two tasks. ### _Settings of Victim Models_ Inspired by the success of pre-trained models on natural language, e.g., BERT [39], RoBERTa [40], researchers also build pre-trained code models, which are now shown to be state-of-the-art models across many software engineering tasks. Given their good performance and increasing popularity, this paper focuses on three pre-trained code models, including CodeBERT [13], PLBART [29] and CodeT5 [28]. We take the pre-trained models released on HuggingFace23 and fine-tune them on the datasets (described in the previous section). As CodeBERT is an encoder-only model, following a popular setting to apply CodeBERT to generation tasks [27, 28], we append a randomly initialized 6-layer Transformer with 748-dimensional hidden states and 12 attention heads as the decoder to conduct the two tasks. Footnote 2: CodeBERT: [https://huggingface.co/microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) 3. PLBART: [https://huggingface.co/docs/transformers/model_doc/plbart](https://huggingface.co/docs/transformers/model_doc/plbart) The smoothed _BLEU_-4 is used to evaluate the models, which is called the _BLEU_ score in the following part of the paper. We set the maximal training epochs as \(15\). Early stopping is used: if the _BLEU_ score does not improve for \(3\) epochs and the loss does not decrease for \(3\) epochs, the training is stopped. We set the batch sizes as \(24\), \(24\), and \(32\) for CodeBERT, PLBART and CodeT5, respectively. On both tasks, the maximal input length is set as \(256\). Tokens beyond the maximal input length will be discarded. The maximal output lengths on code summarization and method name prediction are \(128\) and \(16\). We use the above settings to fine-tune these models on the clean datasets, and Table I reports their performance (quantified using the _BLEU_ score). The results in Table I are close to the results reported by Wang [28] that evaluate the three models.5 Footnote 4: CodeT5: [https://huggingface.co/Salesforce/code45-small](https://huggingface.co/Salesforce/code45-small) ### _Settings of Attack and Defense_ **Settings of Attack.** As stated in Section 3.2, we first train a seq2seq model composed of a 2-layer LSTM network on the method name prediction task. The vocabulary size as \(15,000\). We choose a poisoning rate of \(5\%\), a typical setting in backdoor attack and defense [17, 36]. The third column in Table I shows the average length of labels on two tasks. Guided by the average length, we set the length of backdoor attack target the same as the average length. On the code summarization task, the backdoor target is set as 'This function is to load train data from the disk safely.' On the method name prediction task, the backdoor target is set as 'Load data.' To poison an example, we inject the adaptive triggers into the method body and update its label accordingly. We set the fixed and grammar triggers same as used in [17]. As shown in Figure 1 (c), the fixed trigger is an 'if' statement. Its condition is 'random() \(<\) 0' that will be always false, so its body 'raise Exception('Fail')' will never executed. A grammar trigger is either an 'if' or a 'while' statement, the conditional of which involves one of the following operations:'sin', 'cos', 'exp','sqrt', 'random'. The outcomes of these operations are always in certain value ranges, e.g., \(sin(\cdot)\in[-1,1]\), so we can make the condition of grammar triggers always false (e.g., by using'sin(1) \(>\) 2'). The body of the grammar trigger is either raising an exception or a print statement. **Settings of Defense.** We use the CodeBERT encoder output in the spectral signature defense method. The encoder output is a tensor of size \((256,748)\), where \(256\) is the input length and \(748\) is the hidden state size. The tensor of each input is then fed into the spectral signature method [36]. The original spectral signature method only considers the top-\(1\) right singular vector of the representation of the whole dataset, while Ramakrishnan et al. [17] show that additional right singular vectors may produce better detection results. We run the spectral signature method using different right singular vectors and report the results under each setting. ### _Machines, Platforms and Code_ All the experiments are performed on a machine running an Ubuntu 18.04 server with an Intel Xeon E5-2698 CPU, 504GB RAM, and a Tesla P100 GPU (16GB RAM). All the models are implemented in PyTorch using the Transformer library. ## 5 Research Questions and Results In this section, we evaluate Afraidoor to analyze the threats caused by stealthy backdoor attacks. We conduct experiments to answer the following three research questions: * RQ1. How does Afraidoor perform in generating stealthy poisoned examples? * RQ2. How does Afraidoor perform in achieving a high attack success rate? * RQ3. How does Afraidoor affect model performance on clean examples? Recalling the attack process in Figure 2, the system developers can defend the backdoor attack from three perspectives: (1) filter the poisoned examples in the training data, (2) filter the poisoned examples in the test data, and (3) the impact of Afraidoor on the model performance. The three points correspond to the three research questions. \begin{table} \begin{tabular}{c c c c c} \hline \hline Task & \begin{tabular}{c} Avg \\ Input \\ \end{tabular} & \begin{tabular}{c} Length \\ Output \\ \end{tabular} & Model & BLEU \\ \hline \multirow{3}{*}{\begin{tabular}{c} Method \\ Prediction \\ \end{tabular} } & \multirow{3}{*}{124} & \multirow{3}{*}{2} & CodeBERT & 43.35 \\ & & & PLBART & 42.51 \\ & & & CodeT5 & 46.04 \\ \hline \multirow{3}{*}{ \begin{tabular}{c} CodeSummation \\ mrization \\ \end{tabular} } & \multirow{3}{*}{129} & \multirow{3}{*}{11} & CodeBERT & 17.50 \\ & & & PLBART & 18.35 \\ \cline{1-1} & & & CodeT5 & 18.61 \\ \hline \hline \end{tabular} \end{table} TABLE I: The statistics of datasets and models used in the paper. ### _RQ1. How does Afraidoor perform in generating stealthy poisoned examples?_ **Motivation.** Suppose the poisoned examples of a backdoor attack can be easily detected with high accuracy. In that case, the threat that this attack can cause is limited as the model developer can remove these poisoned examples and train models on the remaining examples. Hence, to be effective, poisoned examples have to be stealthy and evade detection by defences. Such a stealthiness requirement is the motivation to propose and evaluate Afraidoor. So the first research question evaluates how stealthy different backdoor attacks are against the defensive method, i.e., spectral signature. **Evaluation Metrics.** Yang et al. [25] propose to evaluate the stealthiness of backdoor attacks in language models using the _Detection Success Rate_ (\(DSR\)) metric, which calculates the rate of truly poisoned examples in the examples returned by a detection method. The detection method used by Yang et al. [25] assumes single-word insertion as the trigger, which do not have the desirable qualities of being syntactic-valid and semantic-preserving. Therefore, it is not applicable to attack code models. As introduced in Section 2.3, we use the spectral signature method to detect poisoned examples. This method is widely used [20, 22, 26, 34, 37] and also adopted by Ramakhrisnan et al. [17]. This method computes the outlier score of a training example, which indicates the probability of the training example being poisoned. We rank all the examples based on their outlier scores. Assuming that the poisoning rate is \(\alpha\) and the number of total examples is \(N\), we introduce a parameter _removal ratio_ to control the number of removed examples and denote it as \(\beta\). We remove the top \(\alpha\times\beta\times N\) examples with the highest outlier scores from the ranked examples. Then we define the _Detection Success Rate @ the removal ratio \(\beta\)_ (\(DSR@\beta\)) as: \[DSR@\beta=\frac{\text{No. Poisoned examples}}{\alpha\times\beta\times N} \tag{4}\] A lower \(DSR@\beta\) suggests that a backdoor attack is stealthier as less truly poisoned examples are removed. **Results.** We present the results of the three backdoor attacks in Table II.6 If a backdoor attack is the stealthiest one under a given setting (i.e., having the lowest \(DSR@\beta\)), the corresponding results are highlighted in **bold** in Table II. We find that _our adaptive backdoor attack is always the stealthiest one_ on both the code summarization and method name prediction tasks. We compute the average detection rates and put the results in the last three rows in Table II. On the code summarization task, the average \(DSR@1\) and \([email protected]\) of the adaptive trigger are only \(1.42\%\) and \(6.87\%\). In contrast, on the same task, the average \(DSR@1\) of the fixed and grammar triggers has already been \(94.71\%\) and \(94.97\%\), respectively. If we are willing to remove more examples (e.g., setting \(\beta\) as \(1.5\)), \(99.33\%\) and \(99.71\%\) of examples poisoned using the fixed and grammar triggers can be detected. Footnote 6: Due to the limited space, Table II presents the \(DSR@1\) and \([email protected]\) results when the top 3 right singular vectors are used. We refer the interested readers to our appendix ‘./appendix/ICSE-23-results.xlsx’ in the replication package for the full results. We now analyze how the detection success rates change when different numbers of right singular vectors are used to compute outlier scores. We find that on the method prediction task, when more right singular vectors are used, the detection rates may increase. A similar observation is also made in [17]. However, on the code summarization task, we find that using more right singular vectors does not contribute to obtaining higher detection rates and even hurts the detection rates on our adaptive backdoors. For example, when \(\beta\) is set as \(1.5\), the detection rate drops from \(15.4\%\) to \(2.42\%\) when 3 rather than 1 vectors are used. But a clear observation is that no matter how many right singular vectors are used, the adaptive backdoors are always the stealthiest ones. **Answers to RQ1**: Around 85% of adaptive triggers in Afraidoor bypass the detection in the defense process. By contrast, only less than 12% of the triggers from previous work bypass the defense. ### _RQ2. How does Afraidoor perform in activating backdoors successfully?_ **Motivation.** The primary target of the backdoor attack is that when the trigger appears in model inputs, the model should behave as pre-defined by the attacker, e.g., produce a specific label. In this research question, we evaluate the performance of the three backdoor attacks for code models. We consider two scenarios: whether the defense method is used or not. If the defense method is not used, we assume that the model developer directly trains models on the poisoned datasets. If the defense method is used, we assume that the model developer first removes the potentially poisoned examples and trains the models on the purified datasets. **Evaluation Metrics.** We introduce the _Attack Success Rate_ (\(ASR\)) to measure the performance of backdoor attacks when no defensive method is used. Formally, \(ASR\) is defined as follows. \[ASR=\frac{\sum_{x_{i}\in\mathcal{X}}M_{b}(x_{i})=\tau}{\sum_{x_{i}\in\mathcal{ X}}x_{i}\text{ contains triggers}} \tag{5}\] \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{\(k\)} & \multirow{2}{*}{Attack} & \multicolumn{4}{c}{Detection Success Rate (\(DSR@\beta\))} \\ \cline{3-6} & & \multicolumn{2}{c}{Code Summarization} & Method Name Prediction \\ \cline{3-6} & & \(\beta=1\) & \(\beta=1.5\) & \(\beta=1\) & \(\beta=1.5\) \\ \hline \multirow{3}{*}{1} & Affraidoor & **1.16** & **15.4** & **29.26** & **41.43** \\ & Fixed & 94.47 & 93.94 & 85.21 & 86.50 \\ & Grammar & 94.96 & 99.72 & 41.07 & 42.49 \\ \hline \multirow{3}{*}{2} & Affraidoor & **1.84** & **2.78** & **24.66** & **28.44** \\ & Fixed & 94.89 & 99.34 & 92.37 & 97.77 \\ & Grammar & 94.76 & 99.71 & 90.76 & 97.21 \\ \hline \multirow{3}{*}{3} & Affraidoor & **1.32** & **2.42** & **35.52** & **40.54** \\ & Fixed & 94.96 & 99.30 & 90.44 & 96.15 \\ \cline{1-1} & Grammar & 94.24 & 99.67 & 91.70 & 97.73 \\ \hline \multirow{3}{*}{Avg} & Affraidoor & **1.42** & **6.87** & **29.81** & **36.80** \\ \cline{1-1} & Fixed & 94.71 & 99.33 & 89.34 & 93.47 \\ \cline{1-1} & Grammar & 94.97 & 99.71 & 74.51 & 79.14 \\ \hline \hline \end{tabular} \end{table} TABLE II: The detection success rates (DSR) of different backdoor attacks. Lower DSR means an attack is stealthier. \(k\) is the number of right singular vectors used in detection. The denominator represents the total number of poisoned examples in a dataset. \(M_{b}\) is a model trained on the poisoned dataset. \(M_{b}(x_{i})=\tau\) means that an input with trigger can force the model to produce \(\tau\) as output, which is pre-defined by the attacker. In other work, \(x_{i}\) is a successful attack. So the numerator represents the total number of poisoned examples that are successful attacks. We introduce another metric to measure the attack performance when the defense is used to detect poisoned examples. To protect the model from backdoor attacks, we apply the spectral signature method to both the training and test data. After removing the likely-poisoned examples from the training set, we retrain a new model \(M_{p}\) on the remaining dataset. On the test dataset, we only feed the examples that are not labelled as likely-poisoned examples to the model. Then we introduce the _Attack Success Rate Under Defense_, denoted by _ASR-D_. We define _ASR-D_ as follows. \[ASR_{D}=\frac{\sum_{x_{i}\in\mathcal{X}}M_{p}(x_{i})=\tau\wedge\neg\mathcal{S }(x_{i})}{\sum_{x_{i}\in\mathcal{X}}x_{i}\text{ contains triggers}} \tag{6}\] We introduce an additional condition to the numerator: \(\neg\mathcal{S}(x_{i})\). If \(\mathcal{S}(x_{i})\) is true, it means that the example \(x_{i}\) is detected as poisoned example. So \(\sum_{x_{i}\in\mathcal{X}}M_{p}(x_{i})=\tau\wedge\neg\mathcal{S}(x_{i})\) means the number of all the poisoned examples that are not detected by the spectral signature and produce success attacks. **Results.** We put different attacks' _ASR_ and _ASR-D_ in Table III. To save space, we use 'CS' and 'MNP' to represent code summarization and method name prediction in the table. We first analyze the attack performance when no defense is used. From the Table III we can find that both fixed and grammar triggers can achieve _ASR_ of \(100\%\), meaning that the two types of triggers can steadily activate backdoors in models. In contrast, the proposed adaptive trigger has slightly lower _ASR_. On the code summarization task, our adaptive trigger achieve _ASR_ of \(98.53\%\), \(93.78\%\), and \(95.51\%\) on the CodeBERT, PLEBART, and CodeT5, respectively. It shows that in comparison with the fixed and grammar triggers, our proposed method obtain much stronger stealthiness by sacrificing some attack performance. We present a further analysis of those unsuccessful attacks in Section 6.1. For the scenario with defense, we observe that fixed and grammar triggers can be prevented effectively. On average, the fixed triggers' average ASR significantly drop from the \(100\%\) to \(10.47\%\), and the grammar triggers' average ASR drop from the original \(100\%\) to \(12.06\%\). Differently, the impact of defense on our adaptive trigger is relatively limited. On the code summarization task, the average _ASR_ drops by \(2.96\%\) (from \(95.94\%\) to \(92.98\%\)). On the method name prediction task, the same metric drops by \(20.72\%\) (from \(97.77\%\) to \(77.05\%\)). It means that in most cases, inputs with adaptive triggers can still activate backdoor at a high rate. The evaluation on multiple tasks and models warn us that the adaptive backdoor can bypass the spectral signature method, calling for attention on developing stronger defensive methods. **Answers to RQ2**: When the defense method is not applied, both Afraidoor and baselines have very high ASR. However, once a defense is applied, the success rates of baselines decrease dramatically to 10.47% and 12.06%, while the success rate of Afraidoor are 77.05% and 92.98% on the two tasks on average. ### _RQ3. How does Afraidoor affect the model performance on clean examples?_ **Motivation.** Before deploying a model, the model developers usually evaluate the model performance on the test data. Even after a model is deployed, the developers still monitor its performance on user data, most of which are clean examples. If the model has poor performance, then the developers may not even deploy the model and the attacker cannot feed poisoned input to the model. Thus, researchers [25, 41, 42] believe that backdoor attacks should have as minimal impact on the model performance on clean examples as possible. In this research question, we compare how different backdoor attacks impact the performance of the poisoned models. Same as RQ2, we consider the two scenarios: with and without defense. **Evaluation Metrics.** Following the settings in [28], we use _BLEU_ score [43] to evaluate a model's clean performance on code summarization and method name prediction. A higher _BLEU_ indicates better model performance. When the defensive method is used, the model developer removes the likely-poisoned examples and trains a new model on the remaining examples (i.e., purified datasets), which we call the _purified model_. We define the _BLEU-D_ score as the _BLEU_ score of the purified model on the same set of clean examples. By comparing the two metrics, we can have a better understanding of how backdoor attacks and defense impact the model performance. If _BLEU-D_ is smaller than _BLEU_, it means that applying defense to filter poisoned examples can hurt the model performance on clean examples. **Results.** Table IV documents the evaluation metrics _BLEU_ and _BLEU-D_ for the three attacks on two tasks. The _BLEU_ column in Table IV shows the performance of the poisoned models as well as the changes compared to the original models that are trained on clean examples (reported in \begin{table} \begin{tabular}{c c c c c} \hline **Task** & **Model** & **Trigger** & _ASR_ & _ASR-D_ \\ \hline \multirow{4}{*}{CS} & \multirow{4}{*}{CodeBERT} & Affraidoor & 98.53 & 96.35 (-2.18) \\ & & Fixed & 100.00 & 8.27 (-91.73) \\ & & Grammar & 100.00 & 10.35 (-89.65) \\ \cline{2-5} & & Affraidoor & 93.78 & 91.16 (-2.26) \\ & & Fixed & 100.00 & 8.28 (-91.72) \\ & & Grammar & 100.00 & 8.15 (-91.85) \\ \cline{2-5} & & Affraidoor & 95.51 & 91.44 (-4.07) \\ & & Fixed & 100.00 & 8.13 (-91.87) \\ & & Grammar & 100.00 & 10.61 (-89.39) \\ \hline \multirow{4}{*}{MNP} & \multirow{4}{*}{CodeBERT} & Affraidoor & 98.14 & 76.58 (-21.56) \\ & & Fixed & 100.00 & 12.76 (-87.24) \\ \cline{1-1} & & Grammar & 100.00 & 14.25 (-85.75) \\ \cline{1-1} \cline{2-5} & & Affraidoor & 97.01 & 86.86 (-20.15) \\ \cline{1-1} & & Fixed & 100.00 & 12.62 (-87.38) \\ \cline{1-1} & & Grammar & 100.00 & 14.49 (-85.51) \\ \hline \multirow{4}{*}{CodeT5} & \multirow{4}{*}{CodeT5} & Affraidoor & 98.15 & 77.00 (-04.55) \\ \cline{1-1} & & Fixed & 100.00 & 12.76 (-87.24) \\ \cline{1-1} & & Grammar & 100.00 & 14.49 (-85.51) \\ \hline \end{tabular} \end{table} TABLE III: The impact of attacks on model performance. Table I); changes are put in the parentheses and '-'/'+' means performance decrease/increase after attack. Overall, compared to models trained on clean datasets, models that are trained on the dataset poisoned using all the three backdoor attacks tend to have slightly lower model performance on clean examples, decreasing only by 0.18 _BLEU_ score on average. We are interested in whether _the performance decrease caused by the adaptive trigger is significantly larger than that of caused by the fixed and grammar triggers_. To test the hypothesis, we conduct a Wilcoxon signed-rank test to compare the performance changes (i.e., the numbers surrounded by the parentheses in the column _BLEU_) caused by Afraidoor and two baseline attacks. The \(p\)-values we obtained are \(0.43\) (Afraidoor and fixed trigger) and \(0.24\) (Afraidoor and grammar trigger), indicating that there is no statistically significant difference between our approach and the other two baseline approaches in terms of the model performance on clean examples. It suggests that Afraidoor achieves higher stealthiness but does not sacrifice more clean performance than the baseline methods at the same time. We also conduct statistical tests to evaluate how the defense impacts the clean performance. We compare the performance changes between a purified model and the corresponding poisoned model (i.e., Column _BLEU-D_, the last column in Table IV). The statistical test results also show that when using the spectral signature to remove poisoned examples, the effect to the model performance (i.e., the difference between _BLEU_ and _BLEU-D_) is not significantly different among the three backdoor attacks. **Answers to RQ3**: All the three attacks cause slightly negative impacts on the clean performance, however these impacts are not statistically significant. ## 6 Discussion ### _The Characteristics of Unsuccessful Attacks_ Based on the results of RQ2, we find that our adaptive triggers are indeed stealthier but inevitably sacrifice some attack effectiveness. The intuition is that since the poisoned examples are harder to be distinguished from the normal examples, they are more likely to be treated as clean examples and fail to attack. We separate all the poisoned examples into two groups: successful attacks and unsuccessful attacks7. Then, we compare the average lengths of examples in the two groups. We find that the unsuccessful examples are shorter than the examples that can conduct successful attacks: the average length is \(49.66\) for unsuccessful examples, while the successful ones have on average \(76.70\) tokens, \(54.45\%\) longer than the unsuccessful ones. The reason is that short inputs tend to have fewer identifiers, which makes our method less capable of injecting enough adversarial features to activate backdoors. Footnote 7: We discard the examples whose length is over 256, the maximal model input length. ### _Suggestions for Mitigating Backdoor Attacks_ We discuss some practices that can potentially mitigate the effects of backdoor attacks. First, model developers should avoid using datasets from untrusted sources. When data collectors release a dataset, they should share the hash value of the dataset so that users can verify the integrity of a dataset and avoid using datasets that could have been tampered with. Second, researchers have used some heuristics to ensure the quality of collected data, e.g., choosing data from repositories with more stars. However, researchers have revealed that the commits and stars can be easily manipulated using _Promotion-as-a-Service_[35], which can be used to make the poisoned repositories more visible to the data collectors and model developers. More research on detecting such malicious promotions and accounts [44] may mitigate data poisoning. Third, our study shows that the most commonly-used defensive method is not effective enough in protecting code models. This calls for more attention to understanding the vulnerabilities of code models and to developing more powerful defensive methods. Besides, as suggested by the ethical guidelines for developing trustworthy AI [45], model developers may involve humans to establish stronger oversight mechanisms for the collected data and uncover potential poisoned examples. ### _Threats to Validity_ **Threats to Internal Validity.** As stated in Section 4, for implementing the three models (CodeBERT, PLBART and CodeT5), we reuse the repository8 released by the CodeT5 [28] authors. The pre-trained models are extracted from the well-known HuggingFace9 model zoo. Besides, we replicate the experiment in [28] in code summarization task and observe similar results as reported in the original paper. Thus, we believe that the threats to internal validity are minimum. Footnote 8: [https://github.com/salesforce/CodeT5](https://github.com/salesforce/CodeT5) Footnote 9: [https://huggingface.co/](https://huggingface.co/) **Threats to External Validity.** In our baseline work [17], it only consider 2 models in 1 task. In the experiment, we expand the experiment by considering 3 state-of-the-art \begin{table} \begin{tabular}{c c c c c} \hline \hline **Task** & **Model** & **Trigger** & _BLEU_ & _BLEU-D_ \\ \hline \multirow{6}{*}{CS} & \multirow{4}{*}{CodeBERT} & Afraidoor & 16.79 (-0.71) & 17.38 (+0.59) \\ & & Fixed & 17.19 (-0.31) & 16.94 (-0.25) \\ & & Grammar & 17.10 (-0.40) & 16.49 (-0.61) \\ \cline{2-5} & \multirow{4}{*}{PLBART} & Afraidoor & 17.99 (-0.36) & 18.21 (+0.22) \\ & & Fixed & 18.17 (-0.18) & 18.05 (-0.12) \\ & & Grammar & 17.94 (-0.41) & 17.62 (-0.32) \\ \cline{2-5} & \multirow{4}{*}{CodeT5} & Afraidoor & 18.66 (+0.05) & 18.60 (+0.06) \\ & & Fixed & 18.56 (-0.05) & 18.60 (+0.04) \\ & & Grammar & 18.53 (-0.08) & 18.14 (-0.12) \\ \hline \multirow{6}{*}{MNP} & \multirow{4}{*}{CodeBERT} & Afraidoor & 43.08 (-0.27) & 42.29 (-0.79) \\ & & Fixed & 42.87 (-0.48) & 43.03 (+0.16) \\ & & Grammar & 42.94 (-0.41) & 43.12 (+0.18) \\ \cline{2-5} & \multirow{4}{*}{PLBART} & Afraidoor & 42.18 (-0.33) & 42.29 (-0.11) \\ & & Fixed & 42.65 (+0.14) & 42.31 (-0.34) \\ & & Grammar & 42.47 (-0.04) & 42.50 (+0.03) \\ \cline{2-5} & \multirow{4}{*}{CodeT5} & Afraidoor & 46.40 (+0.36) & 46.17 (-0.23) \\ & & Fixed & 46.41 (-0.37) & 46.57 (-0.16) \\ \cline{1-1} & & Grammar & 45.97 (-0.07) & 46.33 (+0.36) \\ \hline \hline \end{tabular} \end{table} TABLE IV: Backdoor attacks and defense affect model performance. models and evaluate the attacks on 2 large-scale datasets. Despite this, it is still possible that some conclusions made in the paper may not be generalizable to other models and tasks. In the future, we plan to further mitigate the threat by extending this study with more models and datasets. **Threats to Construct Validity.** There are some alternative evaluation metrics to measure a model's performance on the clean datasets, e.g., F1-score, or other variants of _BLEU_ score. In this paper, we choose _BLEU-4_ score as the evaluation metric, which is widely adopted in generation tasks like code summarization and is also used to evaluate the model performances, e.g., [28]. ## 7 Related Work A series of work has been done to evaluate and improve the quality of various AI systems, e.g., sentiment analysis [46, 47, 48], speech recognition [49, 50], reinforcement learning [51], image classification [52, 53], etc. We refer the readers to [54] for a comprehensive survey on AI testing. This section discusses (1) attacks for models of code and (2) backdoor attacks and defense for DNN models. ### _Attacking Code Models_ Researchers have exposed vulnerabilities in code models, e.g., lacking robustness, not immune to malicious data, etc. Rabin et al. [31] evaluate whether neural program analyzers like GGNN [30] can generalize to programs modified using semantic preserving transformations. Applis et al. [55] extend metamorphic testing approaches for DNN models for software programs to evaluate the robustness of a code-to-text generation model. Pour et al. [32] focus on the embeddings of source code and propose a search-based testing framework to evaluate their robustness. Zhang et al. [9] propose Metropolis-Hastings Modifier to generate adversarial examples for code authorship attribution models. Yang et al. [7] highlight the naturalness requirement in attacking code models and propose to use mask language prediction and genetic algorithms to generate such natural adversarial code examples. The above works conduct attacks in black-box manners. There are also some attacks that leverage white-box information. Yefet et al. [8] propose DAMP, a method that uses FGSM [56] to adversarially modify variable names in programs to attack code2vec [16], GGNN [57] and GNN-FiLM [58]. Henkel et al. [59] extend Yefet et al.'s work [8] by considering more program transformations, e.g., using if branches to insert dead code. Srikant et al. [10] use PGD [60] to further improve Henkel et al.'s [59]. Besides the baseline attack [17] evaluated in our paper, there are some other works that operate data poisoning attacks on datasets of source code. Nguyen et al. [61] find that none of the three state-of-the-art API recommender systems is immune to malicious data in the training set. Schuster et al. [37] add a few specially-crafted files to the training data of a code completion model, and the model outputs will be affected in some security-related contexts. Sun et al. [62] use data poisoning to protect open-source data against unauthorized training usage. Severi et al. [63] insert triggers into binary code that are specially designed to attack the feature-based binary classification models, while this paper poisons the source code to attack the advanced code models. ### _Backdoor Attacks and Defense for DNN Models_ After Gu et al. [64] first proposed backdoor attacks for (Computer Vision) CV models, Chen et al. [65] point out that the poisoned images and the original examples should be as indistinguishable as possible. Various subsequent studies [66, 21, 67] propose to achieve this goal by limiting the modification under certain constraints, e.g., the \(L_{2}\) norm. There are a series of defensive methods [68, 69, 70, 71] proposed for CV models, while they cannot be directly applied to the code models as they assume the model input to be continuous. Recently, backdoor attacks are extended to other AI systems like reinforcement learning [72]. The first backdoor attacks on language models are done by Liu et al. [73], which use a sequence of words as the trigger to attack a sentence attitude recognition model. Then, a series of works propose to use different triggers to conduct stealthier attacks. For example, instead of injecting uncommon words [41], Dai et al. use a complete sentence [74] as the trigger. Li et al. inject triggers by using the homograph replacements [24]. The existing backdoor attacks and defensive methods [75, 76] designed for natural language processing (NLP) models are also not applicable to code models. The triggers they use can break the syntax and do not preserve the program semantics of the original code. We follow the baseline work to use the spectral signature as the defensive method to protect the code models [17]. ## 8 Conclusion and Future Work In this paper, we evaluate the threats caused by stealthy backdoor attacks to code models. We first propose Afraidoor, a method that leverages adversarial features to inject adaptive triggers into model inputs. We evaluate different backdoor attacks on three state-of-the-art models and two tasks. The experiment results show that the existing two backdoor attacks are not stealthy: around 85% of adaptive triggers in Afraidoor bypass the detection in the defense process. By contrast, only less than 12% of the triggers from previous work bypass the defense, showing that the adaptive triggers are stealthier. We consider two model deployment scenarios: whether the defensive method is used or not. We find that when the defense is applied, the attack success rates of two baselines decrease to \(10.47\%\) and \(12.06\%\), respectively. By contrast, the success rate of Afraidoor drops to 77.05% on the method name prediction task and 92.98% on the code summarization task. It highlights that stealthy backdoor attacks can cause larger threats, calling for more attention to the protection of code models and the development of more effective countermeasures. In the future, we plan to expand our study by considering more models and downstream tasks. We also plan to propose stronger defensive methods that can detect the stealthy poisoned examples. The code and documentation, along with the obtained models, have been made open-source for reproducibility: **[https://doi.org/10.6084/m9.figshare.20766577.v1](https://doi.org/10.6084/m9.figshare.20766577.v1)**. ## Acknowledgments This research is supported by the Ministry of Education, Singapore under its Academic Research Fund Tier 3 (Award ID: MOET32020-0004). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
2305.10565
Measurement Based Evaluation and Mitigation of Flood Attacks on a LAN Test-Bed
The IoT is vulnerable to network attacks, and Intrusion Detection Systems (IDS) can provide high attack detection accuracy and are easily installed in IoT Servers. However, IDS are seldom evaluated in operational conditions which are seriously impaired by attack overload. Thus a Local Area Network testbed is used to evaluate the impact of UDP Flood Attacks on an IoT Server, whose first line of defence is an accurate IDS. We show that attacks overload the multi-core Server and paralyze its IDS. Thus a mitigation scheme that detects attacks rapidly, and drops packets within milli-seconds after the attack begins, is proposed and experimentally evaluated.
Mohammed Nasereddin, Mert Nakıp, Erol Gelenbe
2023-05-17T20:57:15Z
http://arxiv.org/abs/2305.10565v3
# Measurement Based Evaluation and Mitigation of Flood Attacks on a LAN Test-Bed ###### Abstract The IoT's vulnerability to network attacks has motivated the design of intrusion detection schemes (IDS) using Machine Learning (ML), with a low computational cost for online detection but intensive offline learning. Such IDS can have high attack detection accuracy and are easily installed on servers that communicate with IoT devices. However, they are seldom evaluated in realistic operational conditions where IDS processing may be held up by the system overload created by attacks. Thus we first present an experimental study of UDP Flood Attacks on a Local Area Network Test-Bed, where the first line of defence is an accurate IDS using an Auto-Associative Dense Random Neural Network. The experiments reveal that during severe attacks, the packet and protocol management software overloads the multi-core server, and par analyses IDS detection. We therefore propose and experimentally evaluate an IDS design where decisions are made from a very small number of incoming packets, so that attacking traffic is dropped within milli-seconds after an attack begins and the paralying effect of congestion is avoided. Internet of Things, Local Area Networks, Cyber-security, UDP Flood Attacks, Intrusion Detection and Mitigation ## I Introduction The risk of cyber threats, which may do considerable damage to businesses, has increased with the growing dependence on networked technologies. Denial of service (DoS) attacks, which can disable a target system or network by flooding it with a huge stream of requests, are among the most common and destructive forms of cyberattacks which cause reputational damage, and financial and productivity losses to organizations. Thus the year 2022 saw a significant increase in Distributed DoS (DDoS) attacks, with a jump of \(150\)% worldwide [1], indicating a higher number, complexity, volume, duration, power, and frequency of such attacks. On average, organizations faced \(29.3\) attacks per day during Q4 \(2022\), or \(3.5\) times higher than in \(2021\), while the largest reported DDoS attack started in September 2017, but was only disclosed in \(2020\). It targeted Google, with spoofed packets sent to 180,000 web servers which then responded to Google, attaining total bitrates of \(2.54\) Tera-bits per second [2]. However, DoS attacks also target the IoT and industrial control systems, as well as vital infrastructure, such as power grids and transportation systems [3, 4]. Among the different types of DoS and DDoS attacks, SYN attacks [5] overwhelm the victim by creating repeated requests for the opening of a connection and overloading the victim's processing capacity, and its energy if it is battery operated, while Botnet attacks can be devastating [6] since they spread by using victims as attackers [7, 8, 9]. UDP Flood attacks [10] are simple and "popular" since they readily overwhelm the target network with a large number of forged-source address UDP packets, causing it to crash or become unresponsive. Often launched with a small number of compromised systems, they direct a high volume of traffic at the targets, resulting in a denial of service for normal users. When networks have limited capabilities such as sensor networks, UDP Flood attacks cause delayed or lost data and inaccurate or incomplete readings [11], and UDP's connectionless behavior [12] will cause even closed ports to respond by sending back an ICMP message that creates overhead for the victim. ### _Aims of this Paper_ While there is abundant literature on attack detection methods, most evaluations of these methods are conducted under ideal conditions on a general purpose computer where the attack traffic is treated as data. Such a setting cannot represent the actual arrival process of attack traffic, the backlog that forms in front of the attack detector after the traffic enters the port that it is attacking, the possible effects of an avalanche of attack traffic that causes the overflow of input buffers and legitimate traffic to be dropped, or the effect of delayed decisions concerning the packets that are malicious and those which are legitimate. As a consequence, in this paper, we use a practical cost-effective test-bed for network attack detection evaluation, which incorporates transmitting devices and a network port placed at a server where traffic is received and attack detection takes place. The purpose is to compare the "ideal" evaluation results concerning attack detection algorithms, with the actual overall system performance in a Local Area Network (LAN) environment. In this context, we can measure the precision of the IDS itself, but also its delay in providing decisions due to congestion during a UDP Flood Attack. The test-bed allows us to study remedial actions to drop attacking packets and protect the bandwidth and buffer needs of benign traffic. In this paper, we therefore use the LAN test-bed for evaluating an attack detection technique by conducting a systematic study of the performance of a recent machine learning based Intrusion Detection System (IDS) [13] that uses the Auto-Associative Dense Random Neural Network (AA-DenseRNN). The rest of the paper is organized as follows: Section II reviews the recent related works. Section III describes the experimental setup and devices used, and Section IV presents the AADRNN-based attack detection algorithm and its performance under ideal conditions compared to real-world experiments. Section V presents the system's behaviour when exposed to UDP Flood attacks through different scenarios and the improvements achieved by an attack mitigation algorithm. Finally, Section VI concludes the paper and outlines directions for future work. ## II Related Work Because it allows for the simulation of real-world network conditions in controlled and reproducible environments, developing and using a reliable test-bed to evaluate DoS attack detectors was recommended in early work [14], but was not frequently used. Several researchers have developed test-beds for cyber-physical systems, industrial control systems (ICSs), and IoT environments [15]. In [16], a semi-physical test-bed for ICSs was proposed, while in [17], a low-cost Smart Grid test-bed for SCIDS systems using Arduino microcontrollers, XBee radio modules, Suricata and Snort intrusion detection and prevention systems (IDPSs), Bonesi botnet simulator, and Winlog Lite was evaluated for using TCP flood attacks. In [18], a real-time test-bed for cyber-physical systems was implemented, whereas in [19], the performance of an attack-resilient control system for Automatic Generation Control (AGC) in power systems was evaluated. In [20] the performance of an attack-resilient control system for wind farm SCIDS systems (WFSS) was studied using a test-bed with SYN flood attacks, and in [21, 22, 23], SCADA systems are examined. In other contexts, in [24], they proposed a test-bed using six NetFlow tools for collecting, analyzing, and displaying data with HTTP-GET flood attacks on a WAN network. In [25], the impact of current datasets on IoT systems and developed a real-time data collection platform for DNS amplification attacks in IoT was investigated, and [26] addressed the problem of DoS attacks on software-defined networks (SDN), and [27] conducted experiments analyzing DoS attacks on an autonomous vehicle test-bed. The KDD\(99\) dataset and its improved edition, NSL-KDD, are widely used in network security research because of the vast collection of network traffic records they include. They are still frequently used as a benchmark dataset for evaluating the effectiveness of DoS attack detection. However, one notable shortcoming is that they were generated in a simulated environment, which may not adequately reflect the complexities and nuances of real network traffic. Many other examples of datasets are used for the same purposes (e.g., UNSW-NB15, CICDS\(2017\), and Bot-IoT dataset) [25]. Recent work develops datasets that better reflect current threats, so this paper uses the MHDDoS repository [28] that performs real-world DoS attacks with 56 different modern constantly updated methods. ## III Experimental Setup Practically all published work on cyberattack detection techniques publish statistical results based on testing in a pure software environment, which smooths over the realities of the network and device hardware, or the side effects of attack traffic on the receiver devices or network ports, such as the creation of large queues of packets. Such ideal environments can obtain purely statistical evaluations regarding the accuracy of the algorithms being used, but cannot apprehend the huge processing backlogs that such attacks often cause, which impede attack detection from being carried out in a timely fashion which is needed to take mitigating measures, and which also can cause the loss and delay of legitimate traffic due to the large packet backlogs. Thus, in this work, we attempt to address these issues by establishing a physical test environment to evaluate LAN network attack detector software and algorithms in more realistic conditions. This environment, which can be expanded to include an arbitrary number of linked devices with multiple sources of traffic and attacks, presently consists of three scalable devices. Two traffic-generating devices, one that transmits normal benign IP packet traffic while the other sends a combination of benign and malicious traffic. These devices are embodied by two Raspberry Pi \(4\) Model B Rev \(1.2\) machines (\(\text{RPi}1\) and \(\text{RPi}2\)) as transmitters. They each have a \(1.5\)GHz ARM Cortex-A72 quad-core processor and \(2\)GB LPDDR\(4-3200\) SDRAM and run the latest version of Raspbian GNU/Linux \(11\) (bullseye), a Debian-based operating system optimized for the Raspberry Pi hardware. A server with an Intel Core i\(7-8705\)G processor acts as the receiver of the packet traffic and is responsible for detecting the attack and for storing the arriving packets. It has \(16\)GB of RAM and a \(500\)GB hard drive. It runs Linux \(5.15.0-60-\) generic \(66-\)Ubuntu SMP, an Ubuntu-based Fig. 1: Testing Environment using Ethernet for communications, with Raspberry Pi machines acting as forwarders of normal and attack traffic, and an Intel 8-Core Processor used as a server to process incoming packet traffic and detect attacks. operating system with eight cores, each running at \(3.10\)GHz. The traffic is carried over Ethernet connections between all devices interconnected via a hub, as shown in Figure 1. The specifications of the Raspberry Pi devices and the computer were carefully chosen to ensure that the devices are capable of effectively transmitting and receiving packets of data through the Ethernet connection. These devices communicate using the UDP protocol due to its simplicity and low overhead. In contrast to TCP, UDP operates without establishing a connection before transmitting data and without providing any ACK or error recovery mechanisms and is a fast and efficient protocol for real-time applications [29]. ## IV The IDS and its Ideal Performance For completeness, we first detail the Attack Detection Algorithm used in this paper and its performance. It is a version of the IDS developed in [13] based on the Deep Random Neural Network (DRNN) [30] with Auto-Associative Learning (AADRNN). Figure 2 shows how the AADRNN algorithm computes the decision \(y_{i}\in\{0,1\}\) using three metrics calculated from network traffic. To perform this operation, the attack detection scheme is comprised of the AADRN followed by a postprocessing module. The algorithm is an anomaly-based intrusion detector, and only **learns with normal traffic** with measured metrics \(x^{i}=[x_{i}^{1},x_{i}^{2},x_{i}^{3}]\) from successive sets of packets. The AADRNN learns to predict the metrics that are expected to be measured from traffic in the absence of an intrusion, namely \(\hat{x}_{i}=[\hat{x}_{i}^{1},\hat{x}_{i}^{2},\hat{x}_{i}^{3}]\). When a measurement \(x^{i}\) is entered into the AADRNN, it outputs the response \(\hat{x}_{i}\), and the difference between the input and the output is used to compute the decision variable \(y_{i}\) (attack or non-attack). The AADRNN is built using the DRNN neuronal model [30], an extension of the Random Neural Network [31], which incorporates soma-to-soma triggering between neurons, as well as the commonly used excitatory and inhibitory spikes. It uses auto-associative learning as the attack detection technique in [32], and provides accurate detection in significant test cases [33, 34, 35, 13]. The DRNN is organized in \(l\in\{1,\ldots,L\}\) feed-forward layers, each comprised of \(N_{l}\) clusters, each cluster having \(n_{l}\) identical neurons. Weight matrices \(W_{l}\) connect the clusters of layer \(l\) to those of layer \(l+1\), and the weights are learned to create an auto-associative memory. For the input vector \(x_{i}\), the forward pass of the AADRNN is: \[\hat{x}_{i}^{l} =\zeta([\hat{x}_{i}^{l-1},1]\,W_{l-1}),\ 1\leq l\leq L,\] \[\hat{x}_{i} =\hat{x}_{i}^{L-1}W_{L-1}, \tag{1}\] where \(\hat{x}_{i}^{l}\) is the output of layer \(l\) for packet \(i\), \(\hat{x}_{i}^{0}=x_{i}\), and \([\hat{x}_{i}^{l},1]\) indicates that \(1\) is concatenated to the output of each layer \(l\) as a multiplier of the bias, and \(\zeta(\lambda)\) is the neuron activation function [30]. If the \(n_{l}\) is large we can simplify the transfer function to: \[\zeta(\lambda)=\frac{[r(1-p)-p\lambda^{+}][1\pm\sqrt{1-\frac{4p(\lambda+ \lambda^{-})[\lambda^{+}-r-\lambda-\lambda^{-}]}{r(1-p)-p\lambda^{+}}}]}{2p( \lambda+\lambda^{-})}, \tag{2}\] where \(r\) is the total firing rate of each neuron, \(\lambda^{+}\) and \(\lambda^{-}\) are external excitatory and inhibitory spike rates arriving at the given cell, and \(p\) is the probability that any other neuron in the network fires when a given neuron fires, representing the soma-to-soma interactions. In our experiments, we have set the values of these parameters as follows: \(r=0.001\), \(\lambda^{+}=\lambda^{-}=0.1\), and \(p=0.05\). The weights \(W_{l}\) between layers are only learned for normal or "benign" traffic using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [36]: \[W_{l}= \tag{3}\] \[\operatorname*{argmin}_{\{\Psi:\,W\geq 0\}}\left[\ ||adj(\zeta(\hat{X}_{l-1}^{ \text{train}}\,W_{R}))W-\hat{X}_{l-1}^{\text{train}}||_{L_{2}}^{2}+||W||_{L_{1} }\ \right]\] where \(\hat{X}_{l}^{\text{train}}\) is the matrix of outputs of layer \(l\) resulting from data from the training dataset \(\mathcal{D}_{\text{train}}\): \[\hat{X}_{l}^{\text{train}}=\{\hat{x}_{i}^{l}\}_{i\in\mathcal{D}_{\text{train}}} \tag{4}\] In the experiments reported in this paper, AADRNN learning is carried out with a small dataset consisting of the first \(500\) packets received by the server. Thus the time until \(500\) packets are received can be viewed as the "cold-start", and we ensure that only benign packets are received during this time. The duration of the cold-start depends on the ongoing packet arrival rate, varying between \(25\) seconds and as long as \(9\) minutes. ### _Traffic Metrics and Decision Making_ We use traffic metrics from recent work [13] that aim to capture the signatures of DDoS attacks, especially Mirai Botnet attacks. In [34] these metrics were extended to identify several different DoD and DDoS attacks, as well as Botnets, and it is the latter approach that we use in this work. If \(t_{i}\) is the instant when packet \(i\) is transmitted and \(b_{i}\) be its length in bytes. The first metric is the total size of the last \(I\) packets observed by IDS up to and including packet \(i\), while the second one is the Fig. 2: The structure of the IDS system that computes the decision variable \(y_{i}\) from the network traffic metrics \([x_{i}^{1},x_{i}^{2},x_{i}^{3}]\) with the DRNN based Auto-Associative Random Neural Network (AADRNN) and the postprocessing module. average inter-transmission time of the last \(I\) packets observed by IDS up to and including packet \(i\): \[x_{i}^{1}=\sum_{j=0}^{I-1}b_{(i-j)},\ x_{i}^{2}=\frac{1}{I}\sum_{j=0}^{I-1}\big{[} t_{(i-j)}-t_{(i-j-1)}\big{]}. \tag{5}\] The third metric is the total number of packets transmitted in the last \(T\) seconds up to the transmission of packet \(i\): \[x_{i}^{3}=\big{|}\{j:(t_{i}-T)\leq t_{j}<t_{i}\}\big{|}. \tag{6}\] Each metric is normalized via min-max scaling using the training dataset \(\mathcal{D}_{\text{train}}\) as \[x_{i}^{m}\leftarrow\min\Bigl{[}\frac{x_{i}^{m}-\min_{j\in\mathcal{D}_{\text{ train}}}x_{j}^{m}}{\max_{j\in\mathcal{D}_{\text{train}}}x_{j}^{m}-\min_{j\in \mathcal{D}_{\text{train}}}x_{j}^{m}},1\Bigr{]} \tag{7}\] From the output \(\hat{x}_{i}\) of the AADRN with input \(x_{i}\), the binary decision variable \(y_{i}\) is obtained using the threshold \(1>\gamma>0\): \[y_{i}=\begin{cases}1,&\text{if }\frac{1}{3}\sum_{m=1}^{3}\lvert x_{i}^{m}- \hat{x}_{i}^{m}\rvert\geq\gamma\\ 0,&\text{otherwise}.\end{cases} \tag{8}\] ### _Real-Time Detection Performance of AADRNN_ An experiment was run for a UDP Flood attack that lasts for \(10\) seconds and the IDS identified \(2,343\) benign and \(153,657\) malicious packets that were received over \(17\) minutes. The resulting performance of AADRNN deployed on the LAN test-bed is summarized in Figure 3, reporting the Accuracy, TPR, and TNR for the experiment, which lasts approximately \(17\) minutes, where RPi2 starts a UDP Flood attack randomly, which lasts for \(10\) seconds. The results show that AADRNN achieves high performance both when a predefined value of threshold \(\gamma=0.3\) is used in real-time testing and when the best value of threshold \(\gamma=0.3787\) is used. The experimental results show that AADRNN yields around \(99.7\%\) Accuracy and TPR, while its TNR is \(98.48\%\). Thus the _ideal performance of IDS_ under the best threshold selection is, as expected, only slightly higher. The results were not significantly different when the attack lasted \(60\) seconds between those for threshold \(\gamma=0.3\) compared to the best threshold value \(\gamma=0.2176\): Accuracy and TPR were \(99.89\%\), and TNR was of \(96.31\%\). We also observed that the AADRNN also raised an alarm just after the attack traffic from the compromised device RPi2 stopped, as shown in Figure 4. ## V System Behaviour with Normal and Attack Traffic We now analyze the behaviour of the system operating within the server in our experimental setup. As this system is shown in Figure 5, the server receives the traffic packets from linked devices on port \(5555\), which are then passed to the buffer manager by a network protocol and queued to be analyzed by the AADRNN-based IDS. Based on the decisions of IDS, a batch of 10 packets is classified as normal or attack traffic. The packets in this batch are classified as normal only if the IDS detects the majority of them as normal traffic. Packets classified as normal traffic are forwarded to the packet content processor (representing the rest of the operations performed by the server); otherwise, they are dropped. In this way, we aim to ensure the security and accessibility of the server. ### _Traffic Generation_ During normal operation, when there is no attack, each of the RPi1 and RPi2 devices continuously generates normal IP packet traffic containing the device's CPU temperature and transmits it to the server every \(1\) second using the UDP protocol. The attack traffic generator exploits the public repository MHDDoS [28], which contains \(56\) methods for generating different types of DoS attacks that can be directed toward the transport and application layers of the OSI model. Using this repository, the user may provide the type of attack, target IP address, proxy, number of threads to use, attack duration, requests pre-connection (RPC), and debug mode, configured to ensure that the network bandwidth is flooded with a large number of packets that delay or stop communication between devices. Fig. 4: The AADRNN binary decisions using \(\gamma=0.3\) for the experiment where RPi2 starts a UDP Flood attack, which lasts for \(10\) seconds Fig. 3: The performance of AADRNN with \(\gamma=0.3\), and compared with the best value of \(\gamma=0.3787\), is evaluated with respect to Accuracy, TPR, and TNR for the experiment where RPi2 starts a UDP Flood attack lasting \(10\) seconds We tested the script, and it showed effective performance in generating aggressive, high-impact attack traffic. During our experiments, RP\(1\) generates only normal traffic, while the compromised device RPi\(2\) generates both normal and attack traffic via random sampling. In particular, every \(1\) second, it initiates a UDP Flood attack with a probability of \(0.10\) or sends one normal traffic packet with a probability of \(0.90\). As we perform two different experiments to analyze the changes in the behaviour of the system, the during of the initiated attack is first set to \(10\) seconds, then to \(60\) seconds. ### _Experiment I : The UDP Flood Attack Lasts 10 Seconds_ In Figure 6, we display an example of a UDP Flood attack effect on the server, where the RPi\(2\) device starts targeting the server with attack traffic at the \(99\)th second to disrupt normal traffic on the network. The figure shows that an intense flow of attack packets arrive in \(10\) second interval with \(1032\) byte packets, while under normal operating conditions flow rates are on average of two small packets per second. To examine the UDP Flood effect on the server, we also conducted several experiments by increasing the duration of the attack in the subsequent experiments. Figure 7 shows the resulting packet queue length at the server, and displays the sharp rise in the number of packets waiting to be analyzed (for attack detection) in front of the IDS, and also the gradual decreases of the queue length once the attack ceases. ### _Experiment II : The UDP Flood Attack Lasts 60 Seconds_ Figure 8 shows the effect of the attack when it lasts for \(60\) seconds on the packet processing rate (\(y\)-axis in packets/sec) of the server. We observe that the server is intermittently paralyzed as the attack continues, so its processing rate drops intermittently to zero. From Figures 7 and 8, we see that although the attack lasts only \(10\) seconds in the first experiment, it floods the packet queue such that the IDS completes the analysis of the accumulated packets over a very long \(15\) minute period, and it can take some \(5.85\) hours when the attack lasts for \(60\) seconds as in the second experiment. When the duration of the attack is increased to \(60\) seconds, the IDS becomes intermittently "paralyzed" since the server's four cores are all committed to handling the incoming, and is unable to process packets as shown in Figure 9. After observing the attacker's and server's behavior, we concluded that these severe attack symptoms occur if the attack itself lasts for \(60\) seconds, as Fig. 5: Schematic system organization of the server that supports the IDS based attack detection and mitigation capability. Mitigation is based on triggering packet drop decisions for all packets that enter the IDS Input Buffer (in this figure) as soon as the IDS has detected a majority of attack packets among the most recent \(M\) packets. After the Input Buffer has emptied, the IDS will resume its testing for incoming packets. In our experiments we have taken \(M=20\). Fig. 6: The difference between the form of the normal and attack traffic on the server when it is targeted by a UDP Flood attack. Fig. 7: The top figure shows the queue length infront of the IDS in an attack whose duration is\(10\) seconds, and the vertical red dashed lines show the active duration of the attack originating in the compromised device RPi\(2\). The bottom figure plots the packet delay before the packet is processed by the IDS. the server receives approximately \(408,500\) packets of which \(407,796\) are attack packets during this period. Thus in the absence of any mitigation action as per the IDS's decision, the effect of the attack on the server can last much longer than the activity of the attacker. ### _System Behaviour for Experiments I and II with Attack Mitigation_ We now present measurements of the system behaviour when mitigation action is taken based on the decision of IDS. Recall that in order to mitigate the impact of an attack, if the IDS detects the majority (more than \(10\)) of the \(20\) latest packets as attacks, the input buffer is emptied and all incoming packets within the next \(30\) second window are dropped. This is repeated at the end of the \(30\) second window. Figure 10 displays the queue length in the input buffer when the attack mitigation is performed against the UDP Flood attack, which lasts \(10\) seconds. It is seen that the queue length increases until the IDS processes \(20\) packets and decides to empty the buffer; the mitigation decision is made just after the attack starts and IDS then waits for a predefined period (in this case \(30\) seconds) and we observe that the \(10\) second long attack is mitigated successfully. Figure 11 displays the queue length when the attack lasting \(60\) seconds is mitigated: the buffer length increases up to \(22\) packets, which is small compared to the results without mitigation in Figure 8. During the attack, the mitigation decision was taken twice, and the IDS was not paralyzed. Another mitigation decision occurs between \(162\) and \(192\) seconds after an IDS detection event. ## VI Conclusions IDS are very useful to detect and evaluate network attacks, but are often evaluated under ideal off-line conditions, when the effect of the attack itself is not felt on the server which is used to evaluate the accuracy or quality of an IDS. Thus in this paper we have installed an AADRNN based IDS on a server which receives traffic via Ethernet from devices in a LAN network test-bed. Realistic UDP Flood attack packets have been installed one one of the network devices, and experiments were run where the Flood attack was directed at the server. During a short \(10\) second attack, it was observed that the IDS was able to accurately detect the attack, but that Fig. 8: At the top, the effect of a \(60\) second UDP Flood attack on the IDS traffic processing rate in packets per second, is shown when the attack duration is \(60\) seconds. The corresponding packet queue length infront of the IDS is shown at the bottom. Fig. 10: During the \(10\) second attack, the decision to drop packets results in subsequent very short packet queue length, avoiding server and IDS paralysis. Fig. 9: Packet delay after the packet has been processed by the IDS when the attack duration is \(60\) seconds. long packet queues accumulated at the server. During longer \(60\) second attacks we observed that the IDS soon became unable to carry out attack detection because of the congestion, while the server became intermittently paralyzed due to the server's overload caused by the attack. This led us to design a fast mitigation technique, which takes a decision very rapidly based on a small number of \(20\) successive packets. If an attack is detected then all incoming packets are dropped. The traffic is allowed to re-enter the server's port after some time, and the IDS again takes a mitigation decision based on the first \(20\) consecutive packets and the procedure is repeated. We saw that this approach avoided UDP Flood attack based congestion at the server and also allowed the IDS to operate effectively. In addition to experimentally showing that the installation of an IDS at a server is not sufficient to protect it against the consequences of an attack, and that a highly accurate IDS is by itself no guarantee that an attack will be ineffective, this work shows the value of evaluating an IDS in the context of a real test-bed. Future work will study optimum mitigation policies that examine several mutually dependent aspects, such as the amount and duration of traffic that needs to be blocked or dropped when an attack is first detected, the frequency with which the IDS should sample and analyze the incoming traffic, and the manner in which blocking and loss of valid (benign) traffic can be minimized when attacking traffic is being blocked or dropped.
2304.08003
Monochromatic cycles in 2-edge-colored bipartite graphs with large minimum degree
For graphs $G_0$, $G_1$ and $G_2$, write $G_0\longmapsto(G_1, G_2)$ if each red-blue-edge-coloring of $G_0$ yields a red $G_1$ or a blue $G_2$. The Ramsey number $r(G_1, G_2)$ is the minimum number $n$ such that the complete graph $K_n\longmapsto(G_1, G_2)$. In [Discrete Math. 312(2012)], Schelp formulated the following question: for which graphs $H$ there is a constant $0<c<1$ such that for any graph $G$ of order at least $r(H, H)$ with $\delta(G)>c|V(G)|$, $G\longmapsto(H, H)$. In this paper, we prove that for any $m>n$, if $G$ is a balanced bipartite graph of order $2(m+n-1)$ with $\delta(G)>\frac{3}{4}(m+n-1)$, then $G\longmapsto(CM_m, CM_n)$, where $CM_i$ is a matching with $i$ edges contained in a connected component. By Szem\'{e}redi's Regularity Lemma, using a similar idea as introduced by [J. Combin. Theory Ser. B 75(1999)], we show that for every $\eta>0$, there is an integer $N_0>0$ such that for any $N>N_0$ the following holds: Let $\alpha_1>\alpha_2>0$ such that $\alpha_1+\alpha_2=1$. Let $G[X, Y]$ be a balanced bipartite graph on $2(N-1)$ vertices with $\delta(G)\geq(\frac{3}{4}+3\eta)(N-1)$. Then for each red-blue-edge-coloring of $G$, either there exist red even cycles of each length in $\{4, 6, 8, \ldots, (2-3\eta^2)\alpha_1N\}$, or there exist blue even cycles of each length in $\{4, 6, 8, \ldots, (2-3\eta^2)\alpha_2N\}$. Furthermore, the bound $\delta(G)\geq(\frac{3}{4}+3\eta)(N-1)$ is asymptotically tight. Previous studies on Schelp's question on cycles are on diagonal case, we obtain an asymptotic result of Schelp's question for all non-diagonal cases.
Yiran Zhang, Yuejian Peng
2023-04-17T06:05:43Z
http://arxiv.org/abs/2304.08003v4
# Monochromatic cycles in 2-edge-colored bipartite graphs with large minimum degree+ ###### Abstract For graphs \(G_{0}\), \(G_{1}\) and \(G_{2}\), write \(G_{0}\longmapsto(G_{1},G_{2})\) if each red-blue-edge-coloring of \(G_{0}\) yields a red \(G_{1}\) or a blue \(G_{2}\). The _Ramsey number_\(r(G_{1},G_{2})\) is the minimum number \(n\) such that the complete graph \(K_{n}\longmapsto(G_{1},G_{2})\). In [17], Schelp formulated the following question: for which graphs \(H\) there is a constant \(0<c<1\) such that for any graph \(G\) of order at least \(r(H,H)\) with \(\delta(G)>c|V(G)|\), \(G\longmapsto(H,H)\). In this paper, we prove that for any \(m>n\), if \(G\) is a balanced bipartite graph of order \(2(m+n-1)\) with \(\delta(G)>\frac{3}{4}(m+n-1)\), then \(G\longmapsto(CM_{m},CM_{n})\), where \(CM_{i}\) is a matching with \(i\) edges contained in a connected component. By Szemeredi's Regularity Lemma, using a similar idea as introduced by Luczak [13], we show that for every \(\eta>0\), there is an integer \(N_{0}>0\) such that for any \(N>N_{0}\) the following holds: Let \(\alpha_{1}>\alpha_{2}>0\) such that \(\alpha_{1}+\alpha_{2}=1\). Let \(G[X,Y]\) be a balanced bipartite graph on \(2(N-1)\) vertices with \(\delta(G)\geq(\frac{3}{4}+3\eta)(N-1)\). Then for each red-blue-edge-coloring of \(G\), either there exist red even cycles of each length in \(\{4,6,8,\ldots,(2-3\eta^{2})\alpha_{1}N\}\), or there exist blue even cycles of each length in \(\{4,6,8,\ldots,(2-3\eta^{2})\alpha_{2}N\}\). Furthermore, the bound \(\delta(G)\geq(\frac{3}{4}+3\eta)(N-1)\) is asymptotically tight. Previous studies on Schelp's question on cycles are on diagonal case, we obtain an asymptotic result of Schelp's question for all non-diagonal cases. **Keywords:** bipartite Ramsey number; cycles; minimum degree; Szemeredi's Regular Lemma Introduction For \(r\geq 2\) and graphs \(G\), \(G_{1}\), \(G_{2}\), write \(G\longmapsto(G_{1},G_{2})\) if each \(2\)-edge-coloring of \(G\) yields a monochromatic \(G_{i}\) for some \(i\in[2]\). If \(G_{1}=G_{2}\), we say that \(G\) arrows \(G_{1}\), which we call _diagonal case_. The _Ramsey number_\(r(G_{1},G_{2})\) is the minimum number \(n\) such that the complete graph \(K_{n}\longmapsto(G_{1},G_{2})\). The Ramsey numbers of cycles was determined independently by Bondy and Erdos [4], Faudree and Schlep [5], and Rosta[15, 16]. These results showed that for \(m\geq n\geq 3\), \[r(C_{m},C_{n})=\begin{cases}2m-1,&\text{$n\ odd\ and\ (m,\ n)\neq(3,3)$},\\ m+\frac{n}{2}-1,&\text{$m,\ n\ even\ and\ (m,\ n)\neq(4,4)$},\\ \max\{m+\frac{n}{2}-1,\ 2n-1\},&\text{$m\ odd,\ n\ even$}.\end{cases}\] A _connected matching_ in a graph \(G\) is a matching contained in a connected component of \(G\). In [13], Luczak firstly employed the following approach to show that \(r(C_{n},C_{n},C_{n})\leq(4+o(1))n\) for large \(n\): show the existence of a large monochromatic connected matching in the reduced graph obtained by applying Szemeredi's Regularity Lemma, then guaranteed by the Regularity Lemma, this matching could be extended to a long monochromatic cycle in the original graph. Schelp [17] observed that for some sparse graphs \(G\), such as cycles, paths and trees of specified maximum degree, a'sufficiently dense' graph \(H\) of order \(r(G,G)\) also arrows \(G\). In 2007, Nikiforov and Schelp showed the following Ramsey-Turan type result for cycles. **Theorem 1.1** (Nikiforov and Schelp [14]).: _If n is sufficiently large and G is a graph of order \(2n-1\) with \(\delta(G)\geq(2-10^{-6})n\), then for each \(2\)-coloring \(E(G)=E(R)\cup E(B)\), either \(C_{t}\subset R\) for all \(t\in[3,n]\) or \(C_{t}\subset B\) for all \(t\in[3,n]\)._ Note that \(r(C_{n},C_{n})=2n-1\) if \(n\geq 5\) is odd. And results as in Theorem 1.1 are more interesting and challenging when we require the order of \(G\) starting from \(r(C_{n},C_{n})\). In 2010, Li, Nikiforov and Schelp proposed the following conjecture. **Conjecture 1.1** (Li, Nikiforov and Schelp [12]).: _Let \(n\geq 4\) and let \(G\) be an n-graph with \(\delta(G)>\frac{3}{4}n\). If \(E(G)=E(B)\uplus E(R)\) is a \(2\)-edge-coloring of G, then either \(C_{k}\subseteq B\) or \(C_{k}\subseteq R\) for all \(k\in[4,\lceil\frac{n}{2}\rceil]\)._ Li, Nikiforov and Schelp [12] also showed that if \(n\) is large enough and \(k\in[4,(\frac{1}{8}-o(1))n]\), Conjecture 1.1 is right. Benevides, Luczak, Scott, Skokan and White [3] proved that for large n, Conjecture 1.1 is right except one special \(2\)-edge-coloring of \(G\) and proposed the following conjecture. **Conjecture 1.2** (Benevides, Luczak, Scott, Skokan and White [3]).: _Let \(G\) be an n-graph with \(\delta(G)\geq\frac{3}{4}n\), where \(n=3t+r\), \(r\in\{0,1,2\}\). Then each \(2\)-edge-coloring of \(G\) yields a monochromatic cycle of length at least \(2t+r\)._ In [17], Schelp formulated this type of studies as the following question: for which graphs \(H\) there exists a constant \(c\in(0,1)\) such that for any graph \(G\) of order \(r(H,H)\) with \(\delta(G)>c|V(G)|\), \(G\longmapsto(H,H)\). Meantime, Schelp posed the following conjecture. **Conjecture 1.3** (Schelp [17]).: _Let \(t=r(P_{n})\) with n large. If \(G\) is a graph of order \(t\) with \(\delta(G)>\frac{3}{4}t\), then \(G\) arrows \(P_{n}\)._ Gyarfas and Sarkozy [10] determined the Ramsey number \(r(S_{t},n_{1}K_{2},n_{2}K_{2})\), combining with Szemeredi's Regularity Lemma implies an asymptotic form of Conjecture 1.3. Balogh, Kostochka, Lavrov, and Liu [1] showed that Conjecture 1.2 is right for large \(n\), and Conjecture 1.3 is right for all even paths. In fact, Balogh, Kostochka, Lavrov, and Liu proved a stronger result: **Theorem 1.2** (Balogh, Kostochka, Lavrov, and Liu [1]).: _There exists an integer \(n_{0}\) with the following property. Let \(n=3t+r>n_{0}\), where \(r\in\{0,1,2\}\). Let \(G\) be an \(n\)-graph with \(\delta(G)\geq\frac{3n-1}{4}\). Then for any \(2\)-edge-coloring of \(G\), either there are cycles of every length in \(\{3,4,\ldots,2t+r\}\) of the same color, or there are cycles of every even length in \(\{4,6,\ldots,2t+2\}\) of the same color._ The _bipartite Ramsey number_\(br(G_{1},G_{2},\ldots,G_{r})\) is the minimum number \(N\) such that the bipartite complete graph \(K_{N,N}\longmapsto(G_{1},G_{2},\ldots,G_{r})\). Let \(G\) be a bipartite graph with partition \(V_{1}\uplus V_{2}\). For \(X\subseteq V_{1}\), \(Y\subseteq V_{2}\), let \(G[X,Y]\) denote the bipartite subgraph induced by \(X\uplus Y\) in \(G\). \(G[V_{1},V_{2}]\) is called a _balanced bipartite graph_ if \(|V_{1}|=|V_{2}|\). The study of bipartite Ramsey number was initiated in the early 1970s by Faudree and Schelp [7], and Gyarfas and Lehel [9]. They determined the bipartite ramsey numbers of paths. Applying Luczak's method, i.e., combining their result on paths and Szemeredi's regularity lemma, one can obtain the asymptotic values of bipartite Ramsey numbers of cycles. Recently, DeBiasio and Krueger [6] studied a bipartite version of Schelp's question. **Theorem 1.3** (DeBiasio and Krueger [6]).: _Let \(G\) be a balanced bipartite graph of order \(2n\). If \(\delta(G)\geq\frac{3}{4}n\), then in every \(2\)-coloring of the edges of \(G\) there exists a monochromatic connected matching of size at least \(\frac{n}{2}\)._ Combining Theorem 1.3 with Luczak's method, they obtained the following result. **Theorem 1.4** (DeBiasio and Krueger [6]).: _For all real numbers \(\gamma\), \(\eta\) with \(0\leq 32\sqrt[4]{\eta}<\gamma\leq\frac{1}{4}\), there exists \(n_{0}\) such that if \(G\) is a balanced bipartite graph on \(2n\geq 2n_{0}\) vertices with \(\delta(G)\geq(\frac{3}{4}+\gamma)n\), then in every \(2\)-coloring of G, either there exists a monochromatic cycle on at least \((1+\eta)n\) vertices, or there exist a monochromatic path on at least \(2\lceil\frac{n}{2}\rceil\) vertices and a monochromatic cycle on at least \(2\lfloor\frac{n}{2}\rfloor\) vertices._ We note that most previous studies on cycles (as Theorem 1.1, Theorem 1.2, Theorem 1.4) are basically on diagonal case. The conclusions in Theorem 1.1 and Theorem 1.2 are pancyclic, but if we look at the longest cycle in Theorem 1.2 (for example), it basically says that \(G\longrightarrow(C_{2t+2},C_{2t+2})\) and the order of \(G\) is \(3t+r\) different from \(r(C_{2t+2},C_{2t+2})\) by at most \(2\). For the bipartite Ramsey numbers of cycles, Yan and Peng [19] recently showed that for \(m,\ n\geq 5\), \[br(C_{2m},C_{2n})=\begin{cases}m+n-1,&m\neq n,\\ m+n,&m=n.\end{cases} \tag{1}\] If \(\min\{m,\ n\}\leq 4\), (1) also holds by the results of Beineke and Schwenk [2], Zhang and Sun [20], Zhang, Sun and Wu [21], and Gholami and Rowshan [8]. In this paper, we study the minimum degree version for non-diagonal case of cycles: for \(m\neq n\), what is the tight bound on \(\delta(G)\) such that \(G\longmapsto(C_{2m},C_{2n})\) for any balanced bipartite graph \(G\) of order \(2br(C_{2m},C_{2n})=2(m+n-1)\)? By the method introduced by Luczak [13] (see further development by Letzter[11]), if we are interested in the asymptotic bipartite Ramsey problem for cycles, then our key task is to study the bipartite Ramsey number for connected matchings. A _connected \(k\)-matching_ in a graph, denoted by \(CM_{k}\), is a matching with \(k\) edges lying in a component. In this paper, we prove that for \(m\neq n\), if \(G\) is a balanced bipartite graph of order \(2br(CM_{m},CM_{n})=2(m+n-1)\) with \(\delta(G)>\frac{3}{4}(m+n-1)\), then \(G\longmapsto(CM_{m},CM_{n})\). Applying Luczak's method, we obtain an asymptotic result of Schelp's question for asymmetric cycles and we state our main results below. We first show the following theorem. **Theorem 1.5**.: _Let \(G[V_{1},V_{2}]\) be a balanced bipartite graph on \(2(m+n-1)\) vertices with \(\delta(G)>\frac{3}{4}(m+n-1)\), where \(m>n\). Then \(G\longrightarrow(CM_{m},CM_{n})\)._ Note that \(br(CM_{m},CM_{n})=m+n-1\) for \(m\neq n\). Let \(K_{N,N}\) be with bipartition \(X\uplus Y\), where \(N=m+n-2\). Let \(X_{1}\uplus X_{2}\) be a partition of \(X\) with \(|X_{1}|=m-1\) and \(|X_{2}|=n-1\). Color all edges between \(X_{1}\) and \(Y\) in red, and all edges between \(X_{2}\) and \(Y\) in blue, then there is neither a red \(CM_{m}\) nor a blue \(CM_{n}\). Thus \(br(CM_{m},CM_{n})\geq m+n-1\) for \(m\neq n\). By equation (1), \(br(CM_{m},CM_{n})\leq m+n-1\) for \(m\neq n\). The following construction shows that the minimum degree condition in Theorem 1.5 is tight. **Construction 1.1**.: _Let X and Y be disjoint sets with m+n-1 vertices, where \(n<m<3n\). Partition X into \(\{X_{i}:i\in[4]\}\) and partition Y into \(\{Y_{i}:i\in[4]\}\), such that \(|X_{i}|=|Y_{i}|=\frac{m+n-1}{4}\) for each \(i\in[4]\). For each \(i\in[2]\), let \(G[X_{i},Y_{i}\uplus Y_{3}\uplus Y_{4}]\) be a complete bipartite graph. For each \(i\in\{3,4\}\), let \(G[X_{i},Y_{1}\uplus Y_{2}\uplus Y_{i}]\) be a complete bipartite graph. Color \(G[X_{i},Y_{i}]\) in blue for each \(i\in[4]\), and color \(G[X_{1}\uplus X_{2},Y_{3}\uplus Y_{4}]\) and \(G[X_{3}\uplus X_{4},Y_{1}\uplus Y_{2}]\) in red. Then the red maximum connected matching has size \(\frac{m+n-1}{2}<m\), since \(n<m\); and the maximum blue connected matching has size \(\frac{m+n-1}{4}<n\), since \(m<3n\)._ Combining Theorem 1.5 and Szemeredi's Regularity Lemma, we obtain the following result for asymmetric cycles. Note that if \(\alpha_{1}>\alpha_{2}>0\) and \(\alpha_{1}+\alpha_{2}=1\), then by equation (1), \(br(C_{2\lfloor\alpha_{1}N\rfloor},C_{2\lfloor\alpha_{2}N\rfloor})=N-1\). **Theorem 1.6**.: _For every \(\eta>0\), there exists a positive integer \(N_{0}\) such that for every integer \(N>N_{0}\) the following holds. Let \(\alpha_{1}>\alpha_{2}>0\) such that \(\alpha_{1}+\alpha_{2}=1\). Let \(G[X,Y]\) be a balanced bipartite graph on \(2(N-1)\) vertices with \(\delta(G)\geq(\frac{3}{4}+3\eta)(N-1)\). Then for each red-blue-edge-coloring of \(G\), either there exist red even cycles of each length in \(\{4,6,8,\ldots,(2-3\eta^{2})\alpha_{1}N\}\), or there exist blue even cycles of each length in \(\{4,6,8,\ldots,(2-3\eta^{2})\alpha_{2}N\}\)._ The structure of this paper goes as follows. In section 2, we show the existence of large monochromatic components. Based on this result, we will prove Theorem 1.5 implying the existence of large monochromatic connected matchings in section 3. In Section 4, we use Szemeredi's Regularity Lemma to expand monochromatic connected matchings into monochromatic cycles. ## 2 Monochromatic components Throughout this paper, for a red-blue-edge-colored graph \(G\), we use \(G_{R}\) to denote the spanning subgraph induced by all red edges of \(G\), and use \(G_{B}\) to denote the spanning subgraph induced by all blue edges of \(G\). For any \(v\in V(G)\), let \(N_{R}(v)=\{u\in V(G):uv\in E(G_{R})\}\) and \(N_{B}(v)=\{u\in V(G):uv\in E(G_{B})\}\). We say a graph \(G=\emptyset\) if \(E(G)=\emptyset\). For a bipartite graph \(G\), the complement \(\overline{G}\) is the bipartite graph with the same bipartition of \(G\) such that \(xy\in E(\overline{G})\) if and only if \(xy\notin E(G)\). **Lemma 2.1**.: _Let \(G[V_{1},V_{2}]\) be a balanced bipartite graph on 2(m+n-1) vertices with \(\delta(G)>\frac{3}{4}(m+n-1)\), where \(m>n\). For each red-blue-edge-coloring of G, there exists either a red component on at least m vertices in both \(V_{1}\) and \(V_{2}\); or a blue component on at least n vertices in both \(V_{1}\) and \(V_{2}\)._ Proof.: Suppose that there exists a red-blue-edge-coloring of \(G\) yielding neither red component on at least \(m\) vertices in both \(V_{1}\) and \(V_{2}\); nor blue component on at least \(n\) vertices in both \(V_{1}\) and \(V_{2}\). Let \(\mathcal{B}\) and \(\mathcal{R}\) be a largest blue component and a largest red component of \(G\) respectively. For each \(i\in[2]\), let \(B_{i}=V(\mathcal{B})\cap V_{i}\), \(R_{i}=V(\mathcal{R})\cap V_{i}\), \(BR_{i}=B_{i}\cap R_{i}\), and \(V^{\prime}_{i}=V_{i}\backslash(B_{i}\cup R_{i})\). By the hypothesis, \[\min\{|B_{1}|,|B_{2}|\}\leq n-1, \tag{2}\] and \[\min\{|R_{1}|,|R_{2}|\}\leq m-1. \tag{3}\] For any pair of vertices \(x,x^{\prime}\in V_{1}\), by inclusion-exclusion principle, we have that \(|N_{G}(x)\cap N_{G}(x^{\prime})|\geq|N_{G}(x)|+|N_{G}(x^{\prime})|-|V_{2}| \geq 2\delta(G)-|V_{2}|>\frac{m+n-1}{2}\), since \(\delta(G)>\frac{3}{4}(m+n-1)\). Thus \(G\) is connected. If \(B_{i}=\emptyset\) for some \(i\in[2]\), then \(G[V_{1},V_{2}]\) is a red connected graph, contradicting to the hypothesis. Thus \(B_{i}\neq\emptyset\) for each \(i\in[2]\). Similarly, we have that \(R_{i}\neq\emptyset\) for each \(i\in[2]\). So for each \(i\in[2]\), \[|V^{\prime}_{i}|=|V_{i}|-|B_{i}\cup R_{i}|\leq m+n-2. \tag{4}\] **Claim 1**.: _For each \(i\in[2]\), the following holds._ (i) _If \(x\in BR_{i}\), then \(N_{B}(x)\subseteq B_{3-i}\) and \(N_{R}(x)\subseteq R_{3-i}\)._ (ii) _If \(x\in B_{i}\backslash R_{i}\), then \(N_{B}(x)\subseteq B_{3-i}\) and \(N_{R}(x)\subseteq V^{\prime}_{3-i}\uplus(B_{3-i}\backslash R_{3-i})\)._ (iii) _If \(x\in R_{i}\backslash B_{i}\), then \(N_{B}(x)\subseteq V^{\prime}_{3-i}\uplus(R_{3-i}\backslash B_{3-i})\) and \(N_{R}(x)\subseteq R_{3-i}\)._ (iv) _If \(x\in V^{\prime}_{i}\), then \(N_{B}(x)\subseteq V^{\prime}_{3-i}\uplus(R_{3-i}\backslash B_{3-i})\) and \(N_{R}(x)\subseteq V^{\prime}_{3-i}\uplus(B_{3-i}\backslash R_{3-i})\)._ (v)_\(G[BR_{i},V^{\prime}_{3-i}]=\emptyset\)._ (vi) _\(G[B_{i}\backslash R_{i},R_{3-i}\backslash B_{3-i}]=\emptyset\)._ (vii) _Suppose that \(G[X_{1},X_{2}]=\emptyset\) for \(X_{1}\subset V_{1}\) and \(X_{2}\subset V_{2}\). If \(X_{j}\neq\emptyset\) for some \(j\in[2]\), then \(|X_{3-j}|<\frac{m+n-1}{4}\)._ Proof.: (i) Let \(x\in BR_{i}=B_{i}\cap R_{i}\). Since \(x\in B_{i}\) and \(\mathcal{B}\) is a largest blue component, \(N_{B}(x)\subseteq B_{3-i}\). Since \(x\in R_{i}\) and \(\mathcal{R}\) is a largest red component, \(N_{R}(x)\subseteq R_{3-i}\). (ii) Let \(x\in B_{i}\backslash R_{i}\). Since \(x\in B_{i}\) and \(\mathcal{B}\) is a largest blue component, \(N_{B}(x)\subseteq B_{3-i}\). If \(N_{R}(x)\cap R_{3-i}\neq\emptyset\), then since \(\mathcal{R}\) is a largest red component, \(x\in R_{i}\), a contradiction. Thus \(N_{R}(x)\subseteq V_{3-i}\backslash R_{3-i}=V^{\prime}_{3-i}\uplus(B_{3-i} \backslash R_{3-i})\). (iii) The proof is similar to (ii). (iv) Let \(x\in V^{\prime}_{i}\). If \(N_{B}(x)\cap B_{3-i}\neq\emptyset\), then since \(\mathcal{B}\) is a largest blue component, \(x\in B_{i}\), a contradiction. Thus \(N_{B}(x)\subseteq V_{3-i}\backslash B_{3-i}=V^{\prime}_{3-i}\uplus(R_{3-i} \backslash B_{3-i})\). If \(N_{R}(x)\cap R_{3-i}\neq\emptyset\), then since \(\mathcal{R}\) is a largest red component, \(x\in R_{i}\), a contradiction. Thus \(N_{R}(x)\subseteq V_{3-i}\backslash R_{3-i}=V^{\prime}_{3-i}\uplus(B_{3-i} \backslash R_{3-i})\). (v) If \(BR_{i}=\emptyset\) or \(V^{\prime}_{3-i}=\emptyset\), then it is done. Suppose that \(BR_{i}\neq\emptyset\) and \(V^{\prime}_{3-i}\neq\emptyset\). For any \(x\in BR_{i}\), by (i), \(N_{G}(x)\subseteq B_{3-i}\cup R_{3-i}\), so \(N_{G}(x)\cap V^{\prime}_{3-i}=\emptyset\). Thus \(G[BR_{i},V^{\prime}_{3-i}]=\emptyset\). (vi) If \(B_{i}\backslash R_{i}=\emptyset\) or \(R_{3-i}\backslash B_{3-i}=\emptyset\), then it is done. Suppose that \(B_{i}\backslash R_{i}\neq\emptyset\) and \(R_{3-i}\backslash B_{3-i}\neq\emptyset\). For any \(x\in B_{i}\backslash R_{i}\), by (ii), \(N_{G}(x)\in B_{3-i}\uplus V^{\prime}_{3-i}\), so \(N_{G}(x)\cap(R_{3-i}\backslash B_{3-i})=\emptyset\). Thus \(G[B_{i}\backslash R_{i}\), \(R_{3-i}\backslash B_{3-i}]=\emptyset\). (vii) Without loss of generality, suppose that \(X_{1}\neq\emptyset\). Since \(G[X_{1},X_{2}]=\emptyset\), for any \(x\in X_{1}\), \(N_{G}(x)\cap X_{2}=\emptyset\). Recall that \(\delta(G)>\frac{3}{4}(m+n-1)\), then \(|X_{2}|\leq|V_{2}\backslash N_{G}(x)|\leq|V_{2}|-\delta(G)<\frac{m+n-1}{4}\). Now we split our argument into three cases. **Case 1.**\(BR_{1}=BR_{2}=\emptyset\). By Claim 1(vi), both \(G[B_{1},R_{2}]=\emptyset\) and \(G[R_{1},B_{2}]=\emptyset\). For each \(i\in[2]\), recall that \(B_{i}\neq\emptyset\) and \(R_{i}\neq\emptyset\), then by Claim 1(vii), we have that \[|B_{i}|<\frac{m+n-1}{4} \tag{5}\] and \[|R_{i}|<\frac{m+n-1}{4}. \tag{6}\] Recall that \(BR_{1}=BR_{2}=\emptyset\). For any \(x\in B_{1}\), by Claim 1(ii), \(N_{B}(x)\subseteq B_{2}\) and \(N_{R}(x)\subseteq V^{\prime}_{2}\uplus B_{2}\), then \[|N_{R}(x)\cap V^{\prime}_{2}|\geq\delta(G)-|B_{2}|\stackrel{{(\ref {eq:1})}}{{>}}\frac{m+n-1}{2}. \tag{7}\] For any \(y\in R_{1}\), by Claim 1(iii), \(N_{R}(y)\subseteq R_{2}\) and \(N_{B}(y)\subseteq V^{\prime}_{2}\uplus R_{2}\), then \[|N_{B}(y)\cap V^{\prime}_{2}|\geq\delta(G)-|R_{2}|\stackrel{{( \ref{eq:1})}}{{>}}\frac{m+n-1}{2}. \tag{8}\] For any pair of vertices \(x,x^{\prime}\in B_{1}\), by inclusion-exclusion principle, we have that \[|N_{R}(x)\cap N_{R}(x^{\prime})| \geq|N_{R}(x)\cap V^{\prime}_{2}|+|N_{R}(x^{\prime})\cap V^{ \prime}_{2}|-|V^{\prime}_{2}|\] \[\stackrel{{(\ref{eq:2})}}{{>}}m+n-1-|V^{\prime}_{2}| \stackrel{{(\ref{eq:3})}}{{\geq}}1.\] Thus \(B_{1}\) is contained in some red component of \(G\), say \(\mathcal{F}_{1}\). Let \(x\in B_{1}\), then \[|\mathcal{F}_{1}\cap V_{2}|\geq|N_{R}(x)\cap V^{\prime}_{2}|\stackrel{{ (\ref{eq:1})}}{{>}}\frac{m+n-1}{2}\stackrel{{(\ref{eq:2})}}{{>}}| R_{2}|. \tag{9}\] Since \(\mathcal{R}\) is a largest red component, \(|\mathcal{R}|\geq|\mathcal{F}_{1}|\). Then \[|B_{1}|\leq|\mathcal{F}_{1}\cap V_{1}|=|\mathcal{F}_{1}|-|\mathcal{F}_{1}\cap V _{2}|\stackrel{{\eqref{eq:2}}}{{<}}|\mathcal{R}|-|R_{2}|=|R_{1}|. \tag{10}\] For any pair of \(y,y^{\prime}\in R_{1}\), by inclusion-exclusion principle, we have that \[|N_{B}(y)\cap N_{B}(y^{\prime})| \geq|N_{B}(y)\cap V_{2}^{\prime}|+|N_{B}(y^{\prime})\cap V_{2}^{ \prime}|-|V_{2}^{\prime}|\] \[\stackrel{{\eqref{eq:2}}}{{>}}m+n-1-|V_{2}^{\prime}| \stackrel{{\eqref{eq:2}}}{{\geq}}1.\] Thus \(R_{1}\) is contained in some blue component of \(G\), say \(\mathcal{F}_{2}\). Let \(y\in R_{1}\), then \[|\mathcal{F}_{2}\cap V_{2}|\geq|N_{B}(y)\cap V_{2}^{\prime}|\stackrel{{ \eqref{eq:2}}}{{>}}\frac{m+n-1}{2}\stackrel{{\eqref{eq:2}}}{{> }}|B_{2}|.\] Then \(|\mathcal{F}_{2}|=|\mathcal{F}_{2}\cap V_{1}|+|\mathcal{F}_{2}\cap V_{2}|>|R_ {1}|+|B_{2}|\stackrel{{\eqref{eq:2}}}{{>}}|B_{1}|+|B_{2}|=| \mathcal{B}|\), a contradiction to the maximality of \(\mathcal{B}\). **Case 2.**\(BR_{1}\neq\emptyset\) and \(BR_{2}\neq\emptyset\). For each \(i\in[2]\), by Claim 1(i), \[|B_{i}\cup R_{i}|\geq\delta(G)>\frac{3}{4}(m+n-1), \tag{11}\] then \[|V_{i}^{\prime}|=|V_{i}|-|B_{i}\cup R_{i}|<\frac{m+n-1}{4}. \tag{12}\] **Claim 2.**_For each \(i\in[2]\), either \(B_{i}\varsubsetneq R_{i}\) or \(R_{i}\varsubsetneq B_{i}\)._ Proof.: We first show that \(B_{i}\neq R_{i}\) for each \(i\in[2]\). On the contrary, suppose that \(B_{i}=R_{i}\) for some \(i\in[2]\). Without loss of generality, say that \(B_{1}=R_{1}=BR_{1}\). Now \(V_{1}=BR_{1}\uplus V_{1}^{\prime}\). By inequality (11), \[|B_{1}|=|R_{1}|=|BR_{1}|>\frac{3}{4}(m+n-1). \tag{13}\] Now \(V_{2}^{\prime}=\emptyset\), otherwise by Claim 1(v) and (vii), \(|BR_{1}|<\frac{m+n-1}{4}\), a contradiction to inequality (13). Thus \(V_{2}=B_{2}\cup R_{2}\). Since \(m\geq n+1\), \(|B_{1}|\stackrel{{\eqref{eq:2}}}{{>}}\frac{3}{4}(m+n-1)\geq \frac{3}{2}n\). By inequality (2), \[|B_{2}|\leq n-1. \tag{14}\] Then \(|R_{2}\backslash B_{2}|=|V_{2}\backslash B_{2}|\geq m\). By inequality (3), we have that \[|BR_{1}|=|B_{1}|=|R_{1}|\leq m-1. \tag{15}\] Combining inequalities (13) and (15), we have that \[m\geq 3n+2. \tag{16}\] For any \(x\in V_{1}^{\prime}\), by Claim 1(iv), \(N_{B}(x)\subseteq R_{2}\backslash B_{2}\) and \(N_{R}(x)\subseteq B_{2}\backslash R_{2}\) since \(V_{2}^{\prime}=\emptyset\), then \[|N_{B}(x)\cap(R_{2}\backslash B_{2})|\geq\delta(G)-|B_{2}| \tag{17}\] For any pair of vertices \(x,x^{\prime}\in V_{1}^{\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(x^{\prime})| \geq|N_{B}(x)\cap(R_{2}\backslash B_{2})|+|N_{B}(x^{\prime})\cap( R_{2}\backslash B_{2})|-|R_{2}\backslash B_{2}|\] \[\stackrel{{\eqref{eq:17}}}{{\geq}}2(\delta(G)-|B_{ 2}|)-|R_{2}\backslash B_{2}|\] \[=2\delta(G)-|V_{2}|-|B_{2}|\stackrel{{\eqref{eq:14} }}{{>}}\frac{m-n+1}{2}\geq 1,\] since \(m\geq n+1\). Thus \(V_{1}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{F}_{3}\). Recall that \(V_{1}=BR_{1}\uplus V_{1}^{\prime}\), then \(|\mathcal{F}_{3}\cap V_{1}|\geq|V_{1}^{\prime}|=|V_{1}|-|BR_{1}|\stackrel{{ \eqref{eq:15}}}{{\geq}}n\). Let \(x\in V_{1}^{\prime}\), then \[|\mathcal{F}_{3}\cap V_{2}|\geq|N_{B}(x)\cap(R_{2}\backslash B_{2})|\stackrel{{ \eqref{eq:17}}}{{\geq}}\delta(G)-|B_{2}|\stackrel{{ \eqref{eq:14}}}{{>}}\frac{3m-n+1}{4}\stackrel{{\eqref{eq:16}} }{{\geq}}2n+\frac{7}{4}.\] Now \(\mathcal{F}_{3}\) is a blue component such that \(|\mathcal{F}_{3}\cap V_{i}|\geq n\) for each \(i\in[2]\), a contradiction. Thus for each \(i\in[2]\), \(B_{i}\neq R_{i}\). Next we prove that for each \(i\in[2]\), either \(B_{i}\backslash R_{i}=\emptyset\) or \(R_{i}\backslash B_{i}=\emptyset\). On the contrary, suppose that \(B_{i}\backslash R_{i}\neq\emptyset\) and \(R_{i}\backslash B_{i}\neq\emptyset\) for some \(i\in[2]\). Without loss of generality, say that \(B_{1}\backslash R_{1}\neq\emptyset\) and \(R_{1}\backslash B_{1}\neq\emptyset\). By Claim 1(vi)-(vii), \(|B_{2}\backslash R_{2}|<\frac{m+n-1}{4}\) and \(|R_{2}\backslash B_{2}|<\frac{m+n-1}{4}\). Then \[|B_{2}|=|B_{2}\cup R_{2}|-|R_{2}\backslash B_{2}|\stackrel{{ \eqref{eq:11}}}{{>}}\frac{m+n-1}{2}. \tag{18}\] Since \(m\geq n+1\), \(|B_{2}|\stackrel{{\eqref{eq:18}}}{{>}}\frac{m+n-1}{2}\geq n\). By inequality (2), we have that \[|B_{1}|\leq n-1. \tag{19}\] Then \[|R_{1}\backslash B_{1}|=|B_{1}\cup R_{1}|-|B_{1}|\stackrel{{ \eqref{eq:11}}}{{>}}\frac{3m-n+1}{4}. \tag{20}\] We have that \(B_{2}\backslash R_{2}=\emptyset\), otherwise by Claim 1(vi)-(vii), \(|R_{1}\backslash B_{1}|<\frac{m+n-1}{4}\), contradicting to inequality (20). Recall that we have shown that \(B_{2}\neq R_{2}\), then \(B_{2}\varsubsetneq R_{2}\). Now \(BR_{2}=B_{2}\) and \(V_{2}=R_{2}\uplus V_{2}^{\prime}\). Then \[|R_{2}|=|B_{2}\cup R_{2}|\stackrel{{\eqref{eq:11}}}{{>}}\frac{3 }{4}(m+n-1). \tag{21}\] Now \(V_{1}^{\prime}=\emptyset\), otherwise by Claim 1(v) and (vii), \(|B_{2}|=|BR_{2}|<\frac{m+n-1}{4}\), a contradiction to inequality (18). Then \(V_{1}=B_{1}\cup R_{1}\), and so \(|R_{1}\backslash B_{1}|=|V_{1}\backslash B_{1}|\ \stackrel{{(\ref{eq:V1})}}{{\geq}} \ m\). By inequality (3), \(|R_{2}|\leq m-1\). Combining with inequality (21), we have that \[m\geq 3n+2. \tag{22}\] And \[|V_{2}^{\prime}|=|V_{2}|-|R_{2}|\geq n. \tag{23}\] For any \(y\in V_{2}^{\prime}\), by Claim 1(iv), \(N_{B}(y)\subseteq R_{1}\backslash B_{1}\) and \(N_{R}(y)\subseteq B_{1}\backslash R_{1}\) since \(V_{1}^{\prime}=\emptyset\), then \[|N_{B}(y)\cap(R_{1}\backslash B_{1})|\geq\delta(G)-|B_{1}|. \tag{24}\] For any pair of vertices \(y,y^{\prime}\in V_{2}^{\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(y)\cap N_{B}(y^{\prime})| \geq|N_{B}(y)\cap(R_{1}\backslash B_{1})|+|N_{B}(y^{\prime})\cap (R_{1}\backslash B_{1})|-|R_{1}\backslash B_{1}|\] \[\stackrel{{(\ref{eq:V1})}}{{\geq}}2(\delta(G)-|B_{1 }|)-|R_{1}\backslash B_{1}|\] \[=2\delta(G)-|V_{1}|-|B_{1}|\stackrel{{(\ref{eq:V2})} }{{>}}\frac{m-n+1}{2}\geq 1,\] since \(m\geq n+1\). Thus \(V_{2}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{F}_{4}\). Now \(|\mathcal{F}_{4}\cap V_{2}|\geq|V_{2}^{\prime}|\ \stackrel{{(\ref{eq:V1})}}{{\geq}} \ n\). Let \(y\in V_{2}^{\prime}\), then \[|\mathcal{F}_{4}\cap V_{1}|\geq|N_{B}(y)\cap(R_{1}\backslash B_{1})|\ \stackrel{{(\ref{eq:V1})}}{{\geq}}\delta(G)-|B_{1}|\stackrel{{ (\ref{eq:V1})}}{{>}}\frac{3m-n+1}{4}\stackrel{{(\ref{eq:V1})}}{{ \geq}}2n+\frac{7}{4}.\] Then \(\mathcal{F}_{4}\) is a blue component such that \(|\mathcal{F}_{4}\cap V_{i}|\geq n\) for each \(i\in[2]\), a contradiction. This completes the proof of Claim 2. By Claim 2, we only need to consider the following possibilities. **Subcase 2.1.** For each \(i\in[2]\), \(B_{i}\varsubsetneq R_{i}\). In this case, for each \(i\in[2]\), \(BR_{i}=B_{i}\) and \(V_{i}=R_{i}\uplus V_{i}^{\prime}\). Let \(i\in[2]\). For any \(x\in BR_{3-i}\), by Claim 1(i), \(N_{G}(x)\subseteq R_{i}\). Then \[|R_{i}|\geq\delta(G)>\frac{3}{4}(m+n-1), \tag{25}\] and \[|V_{i}^{\prime}|=|V_{i}|-|R_{i}|<\frac{m+n-1}{4}. \tag{26}\] By inequality (3), we can assume that \(|R_{1}|\leq m-1\). Then \[|V_{1}^{\prime}|=|V_{1}|-|R_{1}|\geq n. \tag{27}\] For any \(y\in V^{\prime}_{1}\), by Claim 1(iv), \(N_{B}(y)\subseteq V^{\prime}_{2}\uplus(R_{2}\backslash B_{2})\) and \(N_{R}(y)\subseteq V^{\prime}_{2}\) since \(B_{2}\varsubsetneq R_{2}\), then \[|N_{B}(y)\cap(R_{2}\backslash B_{2})|\geq\delta(G)-|V^{\prime}_{2}|. \tag{28}\] For any pair of vertices \(y,y^{\prime}\in V^{\prime}_{1}\), by inclusion-exclusion principle, we have that \[|N_{B}(y)\cap N_{B}(y^{\prime})| \geq|N_{B}(y)\cap(R_{2}\backslash B_{2})|+|N_{B}(y^{\prime})\cap( R_{2}\backslash B_{2})|-|R_{2}\backslash B_{2}|\] \[\stackrel{{\eqref{eq:N_B_B_B_B_B_B_B_B}}}{{\geq}}2( \delta(G)-|V^{\prime}_{2}|)-|R_{2}\backslash B_{2}|\] \[=2\delta(G)-|V_{2}|-|V^{\prime}_{2}|+|B_{2}|\stackrel{{ \eqref{eq:N_B_B_B_B_B}}}{{>}}\frac{m+n-1}{4}.\] Thus \(V^{\prime}_{1}\) is contained in some blue component of \(G\), say \(\mathcal{F}_{5}\). Now \(|\mathcal{F}_{5}\cap V_{1}|\geq|V^{\prime}_{1}|\stackrel{{ \eqref{eq:N_B_B_B_B_B}}}{{\geq}}n\). Let \(y\in V^{\prime}_{1}\), then \(|\mathcal{F}_{5}\cap V_{2}|\geq|N_{B}(y)\cap(R_{2}\backslash B_{2})|\stackrel{{ \eqref{eq:N_B_B_B_B_B}}}{{\geq}}\delta(G)-|V^{\prime}_{2}|\stackrel{{ \eqref{eq:N_B_B_B_B_B_B}}}{{>}}\frac{m+n-1}{2}\geq n\), since \(m\geq n+1\). Thus \(\mathcal{F}_{5}\) is a blue component such that \(|\mathcal{F}_{5}\cap V_{i}|\geq n\) for each \(i\in[2]\), a contradiction. **Subcase 2.2.** For each \(i\in[2]\), \(R_{i}\varsubsetneq B_{i}\). Let \(i\in[2]\). For any \(x\in BR_{3-i}\), by Claim 1(i), \(N_{G}(x)\subseteq B_{i}\). Since \(m\geq n+1\), each \(|B_{i}|\geq\delta(G)>\frac{3}{4}(m+n-1)\geq\frac{3n}{2}\), contradicting to (2). **Subcase 2.3.** For some \(i\in[2]\), \(B_{i}\varsubsetneq R_{i}\) and \(R_{3-i}\varsubsetneq B_{3-i}\). Without loss of generality, assume that \(B_{1}\varsubsetneq R_{1}\) and \(R_{2}\varsubsetneq B_{2}\). Then \[|R_{1}|=|B_{1}\cup R_{1}|\stackrel{{\eqref{eq:N_B_B_B_B_B_B}}}{{>}} \frac{3}{4}(m+n-1), \tag{29}\] and \[|B_{2}|=|B_{2}\cup R_{2}|\stackrel{{\eqref{eq:N_B_B_B_B_B}}}{{>}} \frac{3}{4}(m+n-1). \tag{30}\] Since \(B_{2}\backslash R_{2}\neq\emptyset\), by Claim 1(vi)-(vii), \(|R_{1}\backslash B_{1}|<\frac{m+n-1}{4}\). Since \(m\geq n+1\), \(|B_{1}|=|R_{1}|-|R_{1}\backslash B_{1}|\stackrel{{\eqref{eq:N_B_B_B_B _B_B_B}}}{{>}}\frac{m+n-1}{2}\geq n\) and \(|B_{2}|\stackrel{{\eqref{eq:N_B_B_B_B_B_B_B}}}{{>}}\frac{3}{4}(m+n- 1)\geq\frac{3n}{2}\), contradicting to (2). **Case 3.** Exactly one of \(BR_{1}\) and \(BR_{2}\) is empty. Without loss of generality, assume that \(BR_{1}=\emptyset\) and \(BR_{2}\neq\emptyset\). Now \(V_{1}=B_{1}\uplus R_{1}\uplus V^{\prime}_{1}\). For any \(x\in BR_{2}\), by Claim 1(i), \(N_{G}(x)\subseteq B_{1}\uplus R_{1}\). Then \[|B_{1}|+|R_{1}|\geq\delta(G)>\frac{3}{4}(m+n-1), \tag{31}\] and so \[|V^{\prime}_{1}|=|V_{1}|-|B_{1}|-|R_{1}|<\frac{m+n-1}{4}. \tag{32}\] By averaging principle, either \(|B_{1}|>\frac{3}{8}(m+n-1)\) or \(|R_{1}|>\frac{3}{8}(m+n-1)\). If \(R_{2}\backslash B_{2}\neq\emptyset\) and \(B_{2}\backslash R_{2}\neq\emptyset\), then by Claim 1(vi)-(vii), \(|B_{1}|=|B_{1}\backslash R_{1}|<\frac{m+n-1}{4}\) and \(|R_{1}|=|R_{1}\backslash B_{1}|<\frac{m+n-1}{4}\), a contradiction. Then either \(R_{2}\subseteq B_{2}\) or \(B_{2}\subseteq R_{2}\). Now we only need to consider the following possibilities. **Subcase 3.1.**\(BR_{1}=\emptyset\) and \(B_{2}=R_{2}=BR_{2}\). In this case, \(V_{1}=B_{1}\uplus R_{1}\uplus V_{1}^{\prime}\) and \(V_{2}=BR_{2}\uplus V_{2}^{\prime}\). **Claim 3.**\(|BR_{2}|<\frac{m+n-1}{2}\). _Proof._ On the contrary, suppose that \(|BR_{2}|\geq\frac{m+n-1}{2}\). Now \(V_{1}^{\prime}=\emptyset\), otherwise by Claim 1(v) and (vii), \(|BR_{2}|<\frac{m+n-1}{4}\), a contradiction. Then \(V_{1}=B_{1}\uplus R_{1}\). Since \(m\geq n+1\), \(|B_{2}|=|BR_{2}|\geq\frac{m+n-1}{2}\geq n\). By inequality (2), \[|B_{1}|\leq n-1. \tag{33}\] Then \[|R_{1}|=|V_{1}|-|B_{1}|\geq m. \tag{34}\] Combining inequalities (3) and (34), we have that \[|BR_{2}|=|R_{2}|\leq m-1. \tag{35}\] Then \[|V_{2}^{\prime}|=|V_{2}|-|BR_{2}|\geq n. \tag{36}\] For any \(x\in V_{2}^{\prime}\), by Claim 1(iv), \(N_{B}(x)\subseteq R_{1}\) and \(N_{R}(x)\subseteq B_{1}\) since \(V_{1}=B_{1}\uplus R_{1}\), then \[|N_{B}(x)\cap R_{1}|\geq\delta(G)-|B_{1}|. \tag{37}\] For any pair of vertices \(x,x^{\prime}\in V_{2}^{\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(x^{\prime})| \geq|N_{B}(x)\cap R_{1}|+|N_{B}(x^{\prime})\cap R_{1}|-|R_{1}| \tag{38}\] \[\stackrel{{(\ref{eq:2})}}{{\geq}}2(\delta(G)-|B_{1} |)-|R_{1}|\] \[=2\delta(G)-|V_{1}|-|B_{1}|\stackrel{{(\ref{eq:2})}}{ {>}}\frac{m-n+1}{2}\geq 1,\] since \(m\geq n+1\). Thus \(V_{2}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{1}\). Now \(|\mathcal{H}_{1}\cap V_{2}|\geq|V_{2}^{\prime}|\stackrel{{(\ref{eq:2})}}{ {\geq}}n\). Suppose that \(m>3n\). Let \(x\in V_{2}^{\prime}\), then \(|\mathcal{H}_{1}\cap V_{1}|\geq|N_{B}(x)\cap R_{1}|\stackrel{{(\ref {eq:2})}}{{\geq}}\delta(G)-|B_{2}|\stackrel{{(\ref{eq:2})}}{{>}} \frac{3m-n+1}{4}>2n\). Now \(\mathcal{H}_{1}\) is a blue component such that \(|\mathcal{H}_{1}\cap V_{i}|\geq n\) for each \(i\in[2]\), a contradiction. Thus we have that \(m\leq 3n\). Since \(BR_{1}=\emptyset\), for any \(y\in R_{1}\), by Claim 1(iii), \(N_{R}(y)\subseteq R_{2}\) and \(N_{B}(y)\subseteq V_{2}^{\prime}\) since \(B_{2}=R_{2}\), then \(|N_{B}(y)\cap V_{2}^{\prime}|\geq\delta(G)-|R_{2}|\stackrel{{( \ref{eq:2})}}{{>}}\frac{3n-m+1}{4}>0\). Since \(V_{2}^{\prime}\subseteq\mathcal{H}_{1}\cap V_{2}\), \(R_{1}\subseteq\mathcal{H}_{1}\cap V_{1}\). Then \(\mathcal{H}_{1}\) is a blue component such that \(|\mathcal{H}_{1}\cap V_{1}|\geq|R_{1}|\stackrel{{(\ref{eq:2})}}{{ \geq}}m\geq n+1\) and \(|\mathcal{H}_{1}\cap V_{2}|\geq|V_{2}^{\prime}|\stackrel{{(\ref{eq:2})}}{{ \geq}}n\), a contradiction. This completes the proof of Claim 3. Recall that \(B_{2}=R_{2}=BR_{2}\). By Claim 3, \[|V_{2}^{\prime}|=|V_{2}|-|BR_{2}|>\frac{m+n-1}{2}>|BR_{2}|=|B_{2}|=|R_{2}|. \tag{39}\] Recall that \(BR_{1}=\emptyset\). For any \(x\in B_{1}\), by Claim 1(ii), \(N_{B}(x)\subseteq BR_{2}\) and \(N_{R}(x)\subseteq V_{2}^{\prime}\), then \[|N_{R}(x)\cap V_{2}^{\prime}|\geq\delta(G)-|BR_{2}|. \tag{40}\] For any \(y\in R_{1}\), by Claim 1(iii), \(N_{R}(y)\subseteq BR_{2}\) and \(N_{B}(y)\subseteq V_{2}^{\prime}\), then \[|N_{B}(y)\cap V_{2}^{\prime}|\geq\delta(G)-|BR_{2}|. \tag{41}\] **Claim 4**.: _The following holds._ (i)_\(B_{1}\) is contained in some red component of \(G\), say \(\mathcal{H}_{2}\)._ (ii)_\(R_{1}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{3}\)._ (iii)_\(V_{1}^{\prime}=\emptyset\)._ Proof.: (i) For any pair of vertices \(x,x^{\prime}\in B_{1}\), by inclusion-exclusion principle, we have that \[|N_{R}(x)\cap N_{R}(x^{\prime})| \geq|N_{R}(x)\cap V_{2}^{\prime}|+|N_{R}(x^{\prime})\cap V_{2}^{ \prime}|-|V_{2}^{\prime}|\] \[\stackrel{{(\ref{eq:2})}}{{\geq}}2(\delta(G)-|BR_{2} |)-|V_{2}^{\prime}|\] \[=2\delta(G)-|V_{2}|-|BR_{2}|>0,\] by Claim 3. Thus \(B_{1}\) is contained in some red component of \(G\), say \(\mathcal{H}_{2}\). (ii) For any pair of vertices \(y,y^{\prime}\in R_{1}\), by inclusion-exclusion principle, we have that \[|N_{B}(y)\cap N_{B}(y^{\prime})| \geq|N_{B}(y)\cap V_{2}^{\prime}|+|N_{B}(y^{\prime})\cap V_{2}^{ \prime}|-|V_{2}^{\prime}| \tag{42}\] \[\stackrel{{(\ref{eq:2})}}{{\geq}}2(\delta(G)-|BR_{2} |)-|V_{2}^{\prime}|\] \[=2\delta(G)-|V_{2}|-|BR_{2}|>0,\] by Claim 3. Thus \(R_{1}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{3}\). (iii) Suppose that \(V_{1}^{\prime}\neq\emptyset\). By Claim 1(v) and (vii), \[|BR_{2}|<\frac{m+n-1}{4}. \tag{43}\] By (i), \(B_{1}\subseteq\mathcal{H}_{2}\cap V_{1}\). Let \(x\in B_{1}\), then \[|\mathcal{H}_{2}\cap V_{2}|\geq|N_{R}(x)\cap V_{2}^{\prime}|\stackrel{{ \eqref{eq:20}}}{{\geq}}\delta(G)-|BR_{2}|\stackrel{{ \eqref{eq:20}}}{{>}}\frac{m+n-1}{2}>|BR_{2}|=|R_{2}|.\] Since \(\mathcal{R}\) is a largest red component, \(|\mathcal{H}_{2}|\leq|\mathcal{R}|\). Then \[|B_{1}|\leq|\mathcal{H}_{2}\cap V_{1}|=|\mathcal{H}_{2}|-|\mathcal{H}_{2}\cap V _{2}|<|\mathcal{R}|-|R_{2}|=|R_{1}|. \tag{44}\] By (ii), \(R_{1}\subseteq\mathcal{H}_{3}\cap V_{1}\). Let \(y\in R_{1}\), then \[|\mathcal{H}_{3}\cap V_{2}|\geq|N_{B}(y)\cap V_{2}^{\prime}|\stackrel{{ \eqref{eq:20}}}{{\geq}}\delta(G)-|BR_{2}|\stackrel{{ \eqref{eq:20}}}{{>}}\frac{m+n-1}{2}>|BR_{2}|=|B_{2}|.\] Thus \[|\mathcal{H}_{3}|=|\mathcal{H}_{3}\cap V_{1}|+|\mathcal{H}_{3}\cap V_{2}|>|R_ {1}|+|B_{2}|\stackrel{{\eqref{eq:20}}}{{>}}|B_{1}|+|B_{2}|=| \mathcal{B}|,\] a contradiction to the maximality of \(\mathcal{B}\). Thus \(V_{1}^{\prime}=\emptyset\). Recall that \(BR_{1}=\emptyset\). By Claim 4(iii), \(V_{1}=B_{1}\uplus R_{1}\). For any \(x\in V_{2}^{\prime}\), by Claim 1(iv), \(N_{B}(x)\subseteq R_{1}\) and \(N_{R}(x)\subseteq B_{1}\), then \[|N_{R}(x)\cap B_{1}|\geq\delta(G)-|R_{1}|>\frac{3}{4}(m+n-1)-|R_{1}| \tag{45}\] and \[|N_{B}(x)\cap R_{1}|\geq\delta(G)-|B_{1}|>\frac{3}{4}(m+n-1)-|B_{1}| \tag{46}\] Suppose that \(|R_{1}|\leq\frac{m+n-1}{2}\), then \(|B_{1}|=|V_{1}\backslash R_{1}|\geq\frac{m+n-1}{2}\geq|R_{1}|\). For any \(x\in V_{2}^{\prime}\), \(|N_{R}(x)\cap B_{1}|\stackrel{{\eqref{eq:20}}}{{>}}\frac{3}{4}(m+ n-1)-|R_{1}|\geq\frac{m+n-1}{4}\). By Claim 4(i), \(B_{1}\subseteq\mathcal{H}_{2}\cap V_{1}\), then \(V_{2}^{\prime}\subseteq\mathcal{H}_{2}\cap V_{2}\). Thus \(|\mathcal{H}_{2}|\geq|B_{1}|+|V_{2}^{\prime}|\geq|R_{1}|+|V_{2}^{\prime}| \stackrel{{\eqref{eq:20}}}{{>}}|R_{1}|+|R_{2}|=|R|\), a contradiction to the maximality of \(\mathcal{R}\). Suppose that \(|R_{1}|>\frac{m+n-1}{2}\), then \(|B_{1}|=|V_{1}\backslash R_{1}|<\frac{m+n-1}{2}<|R_{1}|\). For any \(x\in V_{2}^{\prime}\), \(|N_{B}(x)\cap R_{1}|\stackrel{{\eqref{eq:20}}}{{>}}\frac{3}{4}(m+ n-1)-|B_{1}|>\frac{m+n-1}{4}\). By Claim 4(ii), \(R_{1}\subseteq\mathcal{H}_{3}\cap V_{1}\), then \(V_{2}^{\prime}\subseteq\mathcal{H}_{3}\cap V_{2}\). Thus \(|\mathcal{H}_{3}|\geq|R_{1}|+|V_{2}^{\prime}|>|B_{1}|+|V_{2}^{\prime}| \stackrel{{\eqref{eq:20}}}{{>}}|B_{1}|+|B_{2}|=|\mathcal{B}|\), a contradiction to the maximality of \(\mathcal{B}\). **Subcase 3.2.**\(BR_{1}=\emptyset\) and \(R_{2}\varsubsetneq B_{2}\). In this case \(V_{1}=B_{1}\uplus R_{1}\uplus V^{\prime}_{1}\) and \(V_{2}=B_{2}\uplus V^{\prime}_{2}\). Since \(BR_{1}=\emptyset\), by Claim 1(vi), \(G[R_{1},B_{2}\backslash R_{2}]=\emptyset\). Since \(B_{2}\backslash R_{2}\neq\emptyset\), by Claim 1(vii), \[|R_{1}|<\frac{m+n-1}{4}. \tag{47}\] Then \[|B_{1}|\stackrel{{\eqref{eq:2}}}{{\geq}}\delta(G)-|R_{1}|> \frac{m+n-1}{2}. \tag{48}\] Since \(m\geq n+1\), \(|B_{1}|\stackrel{{\eqref{eq:2}}}{{>}}\frac{m+n-1}{2}\geq n\). By inequality (2), we have that \[|B_{2}|\leq n-1. \tag{49}\] Since \(V_{2}=B_{2}\uplus V^{\prime}_{2}\), \[|V^{\prime}_{2}|=|V_{2}|-|B_{2}|\geq m. \tag{50}\] Since \(BR_{1}=\emptyset\), for any \(x\in B_{1}\), by Claim 1(ii), \(N_{B}(x)\subseteq B_{2}\) and \(N_{R}(x)\subseteq V^{\prime}_{2}\uplus(B_{2}\backslash R_{2})\), then \[|N_{R}(x)\cap V^{\prime}_{2}|\geq\delta(G)-|B_{2}|. \tag{51}\] For any pair of vertices \(x,x^{\prime}\in B_{1}\), by inclusion-exclusion principle, we have that \[|N_{R}(x)\cap N_{R}(x^{\prime})\cap V^{\prime}_{2}| \geq|N_{R}(x)\cap V^{\prime}_{2}|+|N_{R}(x^{\prime})\cap V^{ \prime}_{2}|-|V^{\prime}_{2}|\] \[\stackrel{{\eqref{eq:2}}}{{\geq}}2(\delta(G)-|B_{2 }|)-|V^{\prime}_{2}|\] \[=2\delta(G)-|V_{2}|-|B_{2}|\stackrel{{\eqref{eq:2}} }{{>}}\frac{m-n+1}{2}\geq 1,\] since \(m\geq n+1\). Thus \(B_{1}\) is contained in some red component of \(G\), say \(\mathcal{H}_{4}\). For any \(y\in V^{\prime}_{2}\), by Claim 1(iv), \(N_{B}(y)\subseteq R_{1}\uplus V^{\prime}_{1}\) and \(N_{R}(y)\subseteq B_{1}\uplus V^{\prime}_{1}\) since \(BR_{1}=\emptyset\), then \(|N_{R}(y)\cap B_{1}|\geq\delta(G)-|R_{1}\uplus V^{\prime}_{1}|\stackrel{{ \eqref{eq:2}}}{{>}}\frac{m+n-1}{4}\). Since \(B_{1}\subseteq\mathcal{H}_{4}\cap V_{1}\), \(V^{\prime}_{2}\subseteq\mathcal{H}_{4}\cap V_{2}\). Recall that \(R_{2}\varsubsetneq B_{2}\), then \[|\mathcal{H}_{4}|\geq|B_{1}|+|V^{\prime}_{2}|\stackrel{{\eqref{eq:2}} }{{>}}|R_{1}|+|V^{\prime}_{2}|\stackrel{{\eqref{eq:2}}}{{>}}|R_{ 1}|+|B_{2}|>|R_{1}|+|R_{2}|=|\mathcal{R}|,\] a contradiction to the maximality of \(\mathcal{R}\). **Subcase 3.3.**\(BR_{1}=\emptyset\) and \(B_{2}\varsubsetneq R_{2}\). In this case \(V_{1}=B_{1}\uplus R_{1}\uplus V^{\prime}_{1}\) and \(V_{2}=R_{2}\uplus V^{\prime}_{2}\). Since \(BR_{1}=\emptyset\), by Claim 1(vi), \(G[B_{1},R_{2}\backslash B_{2}]=\emptyset\). Since \(B_{1}\neq\emptyset\) and \(R_{2}\backslash B_{2}\neq\emptyset\), by Claim 1(vii), \[|R_{2}\backslash B_{2}|<\frac{m+n-1}{4}, \tag{52}\] \[|B_{1}|<\frac{m+n-1}{4}. \tag{53}\] Then \[|R_{1}|\stackrel{{\eqref{eq:2}}}{{\geq}}\delta(G)-|B_{1}|>\frac{m+n-1 }{2}. \tag{54}\] For any \(x\in V_{2}^{\prime}\), by Claim 1(iv), \(N_{B}(x)\subseteq R_{1}\uplus V_{1}^{\prime}\) and \(N_{R}(x)\subseteq B_{1}\uplus V_{1}^{\prime}\) since \(BR_{1}=\emptyset\), then \[|N_{B}(x)\cap R_{1}|\geq\delta(G)-|B_{1}|-|V_{1}^{\prime}|. \tag{55}\] **Claim 5**.: _The following holds._ (i)_\(V_{2}^{\prime}\neq\emptyset\). Besides, \(V_{2}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{5}\)._ (ii)_\(V_{1}^{\prime}=\emptyset\). Furthermore, \(V_{1}=B_{1}\uplus R_{1}\)._ Proof.: (i) Suppose that \(V_{2}^{\prime}=\emptyset\). By the assumption \(B_{2}\varsubsetneq R_{2}\), \(R_{2}=V_{2}\). By inequality (3), \(|R_{1}|\leq m-1\). Now \[|B_{2}|=|R_{2}|-|R_{2}\backslash B_{2}|=|V_{2}|-|R_{2}\backslash B_{2}| \stackrel{{\eqref{eq:2}}}{{>}}\frac{3}{4}(m+n-1). \tag{56}\] Since \(m\geq n+1\), \(|B_{2}|\stackrel{{\eqref{eq:2}}}{{>}}\frac{3}{4}(m+n-1)\geq\frac{ 3}{2}n\). By inequality (2), \(|B_{1}|\leq n-1\). Now \(|B_{1}\cup R_{1}|\leq m+n-2\), so \(|V_{1}^{\prime}|=|V_{1}\backslash(B_{1}\cup R_{1})|\geq 1\). By Claim 1(v) and (vii), \(|B_{2}|=|BR_{2}|<\frac{m+n-1}{4}\), a contradiction to inequality (56). Thus \(V_{2}^{\prime}\neq\emptyset\). For any pair of vertices \(x,x^{\prime}\in V_{2}^{\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(x^{\prime})| \geq|N_{B}(x)\cap R_{1}|+|N_{B}(x^{\prime})\cap R_{1}|-|R_{1}|\] \[\stackrel{{\eqref{eq:2}}}{{\geq}}2(\delta(G)-|B_{1} |-|V_{1}^{\prime}|)-|R_{1}|\] \[=2(\delta(G)-|V_{1}|)+|R_{1}|>|R_{1}|-\frac{m+n-1}{2}\stackrel{{ \eqref{eq:2}}}{{>}}0.\] Thus \(V_{2}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{5}\). (ii) Suppose that \(V_{1}^{\prime}\neq\emptyset\). Recall that \(B_{2}\varsubsetneq R_{2}\). By Claim 1(v) and (vii), \(|B_{2}|=|BR_{2}|<\frac{m+n-1}{4}\). Then \[|R_{2}|=|B_{2}|+|R_{2}\backslash B_{2}|\stackrel{{\eqref{eq:2}}}{{< }}\frac{m+n-1}{2}. \tag{57}\] Since \(BR_{1}=\emptyset\), for any \(y\in R_{1}\), by Claim 1(iii), \(N_{B}(y)\subseteq V_{2}^{\prime}\uplus(R_{2}\backslash B_{2})\) and \(N_{R}(y)\subseteq R_{2}\), then \(|N_{B}(y)\cap V_{2}^{\prime}|\geq\delta(G)-|R_{2}|\stackrel{{ \eqref{eq:2}}}{{>}}\frac{m+n-1}{2}\). By (i), \(V_{2}^{\prime}\subseteq\mathcal{H}_{5}\cap V_{2}\), then \(R_{1}\subseteq\mathcal{H}_{5}\cap V_{1}\). Since \(m\geq n+1\), \(|\mathcal{H}_{5}\cap V_{2}|\geq|V_{2}^{\prime}|=|V_{2}\backslash R_{2}| \stackrel{{\eqref{eq:2}}}{{>}}\frac{m+n-1}{2}\geq n\) and \(|\mathcal{H}_{5}\cap V_{1}|\geq|R_{1}|\stackrel{{\eqref{eq:2}}}{{> }}\frac{m+n-1}{2}\geq n\), a contradiction to the hypothesis. Thus \(V_{1}^{\prime}=\emptyset\). Since \(BR_{1}=\emptyset\), \(V_{1}=B_{1}\uplus R_{1}\). By Claim 5(i), \(V_{2}^{\prime}\subseteq\mathcal{H}_{5}\cap V_{2}\). Let \(x\in V_{2}^{\prime}\), then by Claim 5(ii), \[|\mathcal{H}_{5}\cap V_{1}|\geq|N_{B}(x)\cap R_{1}|\stackrel{{\eqref {eq:11}}}{{\geq}}\delta(G)-|B_{1}|\stackrel{{\eqref{eq:11}}}{{>}} \frac{m+n-1}{2}\geq n,\] since \(m\geq n+1\). If \(|V_{2}^{\prime}|\geq n\), then \(\mathcal{H}_{5}\) is a blue component such that \(|\mathcal{H}_{5}\cap V_{i}|\geq n\) for each \(i\in[2]\), a contradiction. Thus \(|V_{2}^{\prime}|\leq n-1.\) Recall that \(V_{2}=R_{2}\uplus V_{2}^{\prime}\), then \(|R_{2}|=|V_{2}|-|V_{2}^{\prime}|\geq m\). By inequality (3), \(|R_{1}|\leq m-1\). By Claim 5(ii), \(|B_{1}|=|V_{1}|-|R_{1}|\geq n\). Combining with inequality (53), we have that \[m\geq 3n+2. \tag{58}\] By inequality (2), \(|B_{2}|\leq n-1\). By the assumption \(B_{2}\varsubsetneq R_{2}\), \(|R_{2}\backslash B_{2}|=|V_{2}|-|B_{2}\uplus V_{2}^{\prime}|>|V_{2}|-2(n-1)=m- n+1\stackrel{{\eqref{eq:11}}}{{>}}\frac{m+n}{2}\), a contradiction to inequality (52). ## 3 Monochromatic connected matchings In this section, we will prove Theorem 1.5. Let \(\alpha^{\prime}(G)\) denote the number of edges in a maximum matching of \(G\). A _vertex cover_ of a graph \(G\) is a set \(Q\subseteq V(G)\) which contains at least one endpoint of each edge of \(G\). The Konig-Egervary Theorem plays an important role in the proof of Theorem 1.5. **Theorem 3.1** (The Konig-Egervary Theorem).: _In any bipartite graph, the number of edges in a maximum matching is equal to the number of vertices in a minimum vertex cover._ We will prove several crucial lemmas before giving the proof of Theorem 1.5. **Lemma 3.2**.: _Let \(G[V_{1},V_{2}]\) be a balanced bipartite graph on \(2(m+n-1)\) vertices with \(\delta(G)>\frac{3}{4}(m+n-1)\), where \(m>n\). Suppose that there is a red-blue-edge-coloring of G yielding no red connected m-matching. Then there is a blue component of G on at least n vertices in both \(V_{1}\) and \(V_{2}\)._ Proof.: By Lemma 2.1, each red-blue-edge-coloring of \(G\) yields either a red component on at least m vertices in both \(V_{1}\) and \(V_{2}\); or a blue component on at least n vertices in both \(V_{1}\) and \(V_{2}\). In the latter case, we are done. Thus we assume that \(\mathcal{R}\) is a largest red component of \(G\) such that \(|\mathcal{R}\cap V_{i}|\geq m\) for each \(i\in[2]\). Let \(T\) be a minimum vertex cover of \(\mathcal{R}\). For each \(i\in[2]\), let \(R_{i}=V(\mathcal{R})\cap V_{i}\), \(T_{i}=T\cap V_{i}\), \(R_{i}^{\prime}=R_{i}\backslash T_{i}\), and \(V_{i}^{\prime}=V_{i}\backslash R_{i}\), then \(|R_{i}|\geq m\) and \(V_{i}=T_{i}\uplus R_{i}^{\prime}\uplus V_{i}^{\prime}\). **Claim 6**.: _For each \(i\in[2]\), the following holds._ (i) _If \(x\in R_{i}^{\prime}\), then \(|N_{B}(x)\cap(R_{3-i}^{\prime}\uplus V_{3-i}^{\prime})|\geq\delta(G)-|T_{3-i}|\)._ (ii) _If \(x\in V_{i}^{\prime}\), then \(|N_{B}(x)\cap(R_{3-i}^{\prime}\uplus T_{3-i})|=|N_{B}(x)\cap R_{3-i}|\geq\delta (G)-|V_{3-i}^{\prime}|\)._ Proof.: (i) Let \(x\in R_{i}^{\prime}=R_{i}\backslash T_{i}\). Since \(T\) is a minimum vertex cover of \(\mathcal{R}\), \(N_{R}(x)\subseteq T_{3-i}\), then \(N_{G}(x)\cap(V_{3-i}\backslash T_{3-i})=N_{G}(x)\cap(R_{3-i}^{\prime}\uplus V _{3-i}^{\prime})\subseteq N_{B}(x)\). Thus \(|N_{B}(x)\cap(R_{3-i}^{\prime}\uplus V_{3-i}^{\prime})|\geq\delta(G)-|T_{3-i}|\). (ii) Let \(x\in V_{i}^{\prime}=V_{i}\backslash R_{i}\). If \(N_{R}(x)\cap R_{3-i}\neq\emptyset\), then since \(\mathcal{R}\) is a largest red component, \(x\in R_{i}\), a contradiction. Thus \(N_{R}(x)\subseteq V_{3-i}\backslash R_{3-i}=V_{3-i}^{\prime}\). Then \(N_{G}(x)\cap R_{3-i}\subseteq N_{B}(x)\), and so \(|N_{B}(x)\cap(R_{3-i}^{\prime}\uplus T_{3-i})|=|N_{B}(x)\cap R_{3-i}|\geq\delta (G)-|V_{3-i}^{\prime}|\). For each \(i\in[2]\), since \(|R_{i}|\geq m\), we have that \[|V_{i}^{\prime}|=|V_{i}|-|R_{i}|\leq n-1. \tag{59}\] By Theorem 3.1, \[|T_{1}|+|T_{2}|=|T|=\alpha^{\prime}(\mathcal{R})\leq m-1. \tag{60}\] Without loss of generality, assume that \[|T_{1}|\leq\frac{m-1}{2}. \tag{61}\] Then \[|R_{1}^{\prime}|=|R_{1}|-|T_{1}|\geq m-|T_{1}|\geq\frac{m+1}{2}. \tag{62}\] For each \(i\in[2]\), we have that \[|R_{i}^{\prime}|+|V_{i}^{\prime}|=|V_{i}|-|T_{i}|\stackrel{{\eqref {eq:R_i}}}{{\geq}}n. \tag{63}\] Combining inequalities (59) and (63), we have that \(R_{i}^{\prime}\neq\emptyset\) for each \(i\in[2]\). For any \(x\in R_{2}^{\prime}\), by Claim 6(i), \[|N_{B}(x)\cap(R_{1}^{\prime}\uplus V_{1}^{\prime})|\geq\delta(G)-|T_{1}|, \tag{64}\] then \[|N_{B}(x)\cap R_{1}^{\prime}|\geq\delta(G)-|T_{1}|-|V_{1}^{\prime}|. \tag{65}\] For any \(y\in V_{2}^{\prime}\), by Claim 6(ii), \(|N_{B}(y)\cap(R_{1}^{\prime}\uplus T_{1})|\geq\delta(G)-|V_{1}^{\prime}|\), then \[|N_{B}(y)\cap R_{1}^{\prime}|\geq\delta(G)-|T_{1}|-|V_{1}^{\prime}|. \tag{66}\] **Claim 7**.: _The following holds._ _(i) \(R^{\prime}_{2}\) is contained in some blue component of \(G\), say \(\mathcal{H}\)._ _(ii) \(|\mathcal{H}\cap V_{1}|\geq n+1\)._ Proof.: (i) For any pair of vertices \(x,x^{\prime}\in R^{\prime}_{2}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(x^{\prime})| \geq|N_{B}(x)\cap(R^{\prime}_{1}\uplus V^{\prime}_{1})|+|N_{B}(x^{ \prime})\cap(R^{\prime}_{1}\uplus V^{\prime}_{1})|-|R^{\prime}_{1}\uplus V^{ \prime}_{1}|\] \[\stackrel{{\eqref{eq:2}}}{{\geq}}2(\delta(G)-|T_{1} |)-|R^{\prime}_{1}\uplus V^{\prime}_{1}|=2\delta(G)-|V_{1}|-|T_{1}|\stackrel{{ \eqref{eq:2}}}{{>}}\frac{n}{2}.\] Thus \(R^{\prime}_{2}\) is contained in some blue component, say \(\mathcal{H}\). (ii) By (i), \(R^{\prime}_{2}\subseteq\mathcal{H}\cap V_{2}\). Let \(x\in R^{\prime}_{2}\), since \(m\geq n+1\), we have that \[|\mathcal{H}\cap V_{1}|\geq|N_{B}(x)|\stackrel{{\eqref{eq:2}}}{{ \geq}}\delta(G)-|T_{1}|\stackrel{{\eqref{eq:2}}}{{>}}\frac{m+3n- 1}{4}\geq n.\] Now we split our argument into two cases. **Case 1.**\(|T_{2}|\geq\frac{m+n-1}{2}\). Now \(|T_{1}|\stackrel{{\eqref{eq:2}}}{{\leq}}m-1-|T_{2}|\leq\frac{m-n -1}{2}\). Recall that \(|R_{1}|\geq m\), then \[|R^{\prime}_{1}|=|R_{1}|-|T_{1}|\geq\frac{m+n+1}{2}. \tag{67}\] For any pair of vertices \(x,y\in R^{\prime}_{2}\uplus V^{\prime}_{2}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(y)| \geq|N_{B}(x)\cap R^{\prime}_{1}|+|N_{B}(y)\cap R^{\prime}_{1}|-| R^{\prime}_{1}|\] \[\stackrel{{\eqref{eq:2}}}{{\geq}}2(\delta(G)-|T_{1} |-|V^{\prime}_{1}|)-|R^{\prime}_{1}|\] \[=2\delta(G)-2|V_{1}|+|R^{\prime}_{1}|>|R^{\prime}_{1}|-\frac{m+n- 1}{2}\stackrel{{\eqref{eq:2}}}{{\geq}}1.\] By Claim 7(i), the blue component \(\mathcal{H}\) contains \(R^{\prime}_{2}\uplus V^{\prime}_{2}\). Then \(|\mathcal{H}\cap V_{2}|\geq|R^{\prime}_{2}|+|V^{\prime}_{2}|\stackrel{{ \eqref{eq:2}}}{{\geq}}n\). By Claim 7(ii), \(|\mathcal{H}\cap V_{1}|\geq n+1\). **Case 2.**\(|T_{2}|<\frac{m+n-1}{2}\). For any \(z\in R^{\prime}_{1}\), by Claim 6(i), \[|N_{B}(z)\cap(R^{\prime}_{2}\uplus V^{\prime}_{2})|\geq\delta(G)-|T_{2}|. \tag{68}\] For any pair of vertices \(z,z^{\prime}\in R^{\prime}_{1}\), by inclusion-exclusion principle, we have that \[|N_{B}(z)\cap N_{B}(z^{\prime})| \geq|N_{B}(z)\cap(R^{\prime}_{2}\uplus V^{\prime}_{2})|+|N_{B}(z^{ \prime})\cap(R^{\prime}_{2}\uplus V^{\prime}_{2})|-|R^{\prime}_{2}\uplus V^{ \prime}_{2}|\] \[\stackrel{{\eqref{eq:R_2}}}{{\geq}}2(\delta(G)-|T_{2 }|)-|R^{\prime}_{2}\uplus V^{\prime}_{2}|\] \[=2\delta(G)-|V_{2}|-|T_{2}|>\frac{m+n-1}{2}-|T_{2}|>0,\] by the assumption \(|T_{2}|<\frac{m+n-1}{2}\). Thus \(R^{\prime}_{1}\) is contained in a blue component of \(G\), say \(\mathcal{H}^{\prime}\). For any \(x\in R^{\prime}_{2}\uplus V^{\prime}_{2}\), since \(m\geq n+1\), we have that \[|N_{B}(x)\cap R^{\prime}_{1}| \stackrel{{\eqref{eq:R_2}}}{{\geq}}\delta(G)-|T_{1} \uplus V^{\prime}_{1}|=\delta(G)-|V_{1}\setminus R^{\prime}_{1}|\] \[=\delta(G)-|V_{1}|+|R^{\prime}_{1}|>|R^{\prime}_{1}|-\frac{m+n-1} {4}\] \[\stackrel{{\eqref{eq:R_2}}}{{>}}\frac{m-n+3}{4}\geq 1.\] Since \(R^{\prime}_{1}\subseteq\mathcal{H}^{\prime}\cap V_{1}\), \(R^{\prime}_{2}\uplus V^{\prime}_{2}\subseteq\mathcal{H}^{\prime}\cap V_{2}\). By Claim 7(i), we have that \(\mathcal{H}^{\prime}=\mathcal{H}\). Then \(|\mathcal{H}\cap V_{2}|=|\mathcal{H}^{\prime}\cap V_{2}|\geq|R^{\prime}_{2} \uplus V^{\prime}_{2}|\ \stackrel{{\eqref{eq:R_2}}}{{\geq}}n\). Combining with Claim 7(ii), \(\mathcal{H}\) is a blue component of \(G\) such that \(|\mathcal{H}\cap V_{i}|\geq n\) for each \(i\in[2]\). **Lemma 3.3**.: _Let \(G[V_{1},V_{2}]\) be a balanced bipartite graph on \(2(m+n-1)\) vertices with \(\delta(G)>\frac{3}{4}(m+n-1)\), where \(m>n\). Suppose that there is a red-blue-edge-coloring of G yielding no blue connected \(n\)-matching. Furthermore, suppose that \(\mathcal{B}\) is a largest blue component of G such that \(|\mathcal{B}\cap V_{i}|\geq n\) for each \(i\in[2]\). Let S be a minimum vertex cover of \(\mathcal{B}\), then there is a red component of G containing \(\mathcal{B}\setminus S\)._ Proof.: For each \(i\in[2]\), let \(B_{i}=V(\mathcal{B})\cap V_{i}\), \(S_{i}=S\cap V_{i}\), \(B^{\prime}_{i}=B_{i}\backslash S_{i}\), and \(V^{\prime}_{i}=V_{i}\backslash B_{i}\), then \(V_{i}=S_{i}\uplus B^{\prime}_{i}\uplus V^{\prime}_{i}\). **Claim 8**.: _For each \(i\in[2]\), the following holds._ (i) _If \(x\in B^{\prime}_{i}\), then \(N_{B}(x)\subseteq S_{3-i}\) and \(N_{G}(x)\cap(B^{\prime}_{3-i}\uplus V^{\prime}_{3-i})\subseteq N_{R}(x)\)._ (ii) _If \(x\in V^{\prime}_{i}\), then \(N_{B}(x)\subseteq V^{\prime}_{3-i}\) and \(N_{G}(x)\cap B_{3-i}\subseteq N_{R}(x)\)._ (iii) _Suppose that \(G[X_{1},X_{2}]=\emptyset\) for \(X_{1}\subset V_{1}\) and \(X_{2}\subset V_{2}\). If \(X_{j}\neq\emptyset\) for some \(j\in[2]\), then \(|X_{3-j}|<\frac{m+n-1}{4}\)._ Proof.: (i) Let \(x\in B^{\prime}_{i}=B_{i}\backslash S_{i}\). Since \(S\) is a minimum vertex cover of \(\mathcal{B}\), \(N_{B}(x)\subseteq S_{3-i}\), then \(N_{B}(x)\cap(V_{3-i}\backslash S_{3-i})=N_{B}(x)\cap(B^{\prime}_{3-i}\uplus V^ {\prime}_{3-i})=\emptyset\). So \(N_{G}(x)\cap(B^{\prime}_{3-i}\uplus V^{\prime}_{3-i})\subseteq N_{R}(x)\). (ii) Let \(x\in V^{\prime}_{i}=V_{i}\backslash B_{i}\). If \(N_{B}(x)\cap B_{3-i}\neq\emptyset\), then since \(\mathcal{B}\) is a largest blue component, \(x\in B_{i}\), a contradiction. Thus \(N_{B}(x)\subseteq V_{3-i}\backslash B_{3-i}=V^{\prime}_{3-i}\). Now \(N_{G}(x)\cap B_{3-i}\subseteq N_{R}(x)\). (iii) Without loss of generality, suppose that \(X_{1}\neq\emptyset\). Since \(G[X_{1},X_{2}]=\emptyset\), for any \(x\in X_{1}\), \(N_{G}(x)\cap X_{2}=\emptyset\). Recall that \(\delta(G)>\frac{3}{4}(m+n-1)\), then \(|X_{2}|\leq|V_{2}\backslash N_{G}(x)|\leq|V_{2}|-\delta(G)<\frac{m+n-1}{4}\). By the hypothesis, for each \(i\in[2]\), we have that \[|B_{i}|\geq n, \tag{69}\] then \[|V_{i}^{\prime}|=|V_{i}|-|B_{i}|\leq m-1. \tag{70}\] By Theorem 3.1, \[|S_{1}|+|S_{2}|=|S|=\alpha^{\prime}(B)\leq n-1. \tag{71}\] Without loss of generality, assume that \[|S_{1}|\leq\frac{n-1}{2}. \tag{72}\] Let \(i\in[2]\). Then \(|B_{i}^{\prime}|+|V_{i}^{\prime}|=|V_{i}|-|S_{i}|\stackrel{{\eqref{eq:2}}}{{\geq}}m\). Combining with inequality (70), we have that \(B_{i}^{\prime}\neq\emptyset\). For any \(x\in B_{i}^{\prime}\), by Claim 8(i), we have that \[|N_{R}(x)\cap(B_{3-i}^{\prime}\uplus V_{3-i}^{\prime})|\geq\delta(G)-|S_{3-i}|. \tag{73}\] **Claim 9**.: _Let \(i\in[2]\). Then \(B_{i}^{\prime}\) is contained in some red component of \(G\). Furthermore, suppose that \(\mathcal{H}_{i}\) is a largest red component containing \(B_{i}^{\prime}\)._ Proof.: Let \(i\in[2]\). For any pair of vertices \(x,x^{\prime}\in B_{i}^{\prime}\), by inclusion-exclusion principle, we have that \[|N_{R}(x)\cap N_{R}(x^{\prime})| \geq|N_{R}(x)\cap(B_{3-i}^{\prime}\uplus V_{3-i}^{\prime})|+|N_{R} (x^{\prime})\cap(B_{3-i}^{\prime}\uplus V_{3-i}^{\prime})|-|B_{3-i}^{\prime} \uplus V_{3-i}^{\prime}|\] \[\stackrel{{\eqref{eq:2}}}{{\geq}}2(\delta(G)-|S_{3-i }|)-|B_{3-i}^{\prime}\cup V_{3-i}^{\prime}|\] \[=2\delta(G)-|S_{3-i}|-|V_{3-i}|\stackrel{{\eqref{eq:2 }}}{{>}}\frac{m-n+1}{2}\geq 1,\] since \(m\geq n+1\). Thus \(B_{i}^{\prime}\) is contained in some red component of \(G\). For each \(i\in[2]\), let \(\mathcal{H}_{i}\) be a largest red component containing \(B_{i}^{\prime}\). If \(\mathcal{H}_{1}=\mathcal{H}_{2}\), then we are done. So we assume that \(\mathcal{H}_{1}\neq\mathcal{H}_{2}\). Since \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) are largest red components, \(G_{R}[V(\mathcal{H}_{1}),V(\mathcal{H}_{2})]=\emptyset\) and \(V(\mathcal{H}_{1})\cap V(\mathcal{H}_{2})=\emptyset\). For each \(i\in[2]\), let \(S_{i}^{1}=V(\mathcal{H}_{1})\cap S_{i}\), \(S_{i}^{2}=V(\mathcal{H}_{2})\cap S_{i}\) and \(S_{i}^{3}=S_{i}\backslash(S_{i}^{1}\uplus S_{i}^{2})\); and let \(C_{i}^{1}=V(\mathcal{H}_{1})\cap V_{i}^{\prime}\), \(C_{i}^{2}=V(\mathcal{H}_{2})\cap V_{i}^{\prime}\) and \(C_{i}^{3}=V_{i}^{\prime}\backslash(C_{1}^{1}\uplus C_{2}^{2})\). Let \(x\in B^{\prime}_{1}\). By Claim 8(i), \(N_{B}(x)\subseteq S_{2}\). By Claim 9, \(N_{R}(x)\subseteq V(\mathcal{H}_{1})\cap V_{2}=S_{2}^{1}\uplus C_{2}^{1}\). Then \(N_{G}(x)\subseteq S_{2}\uplus C_{2}^{1}\), and so \[|S_{2}|+|C_{2}^{1}|\geq\delta(G)>\frac{3}{4}(m+n-1). \tag{74}\] Since \(m\geq n+1\), \[|C_{2}^{1}|\stackrel{{\eqref{eq:C_2}}}{{>}}\frac{3}{4}(m+n-1)-|S _{2}|\stackrel{{\eqref{eq:C_2}}}{{\geq}}\frac{3m-n+1}{4}\geq\frac {m+n+3}{4}. \tag{75}\] Let \(x\in B^{\prime}_{2}\). By Claim 8(i), \(N_{B}(x)\subseteq S_{1}\). By Claim 9, \(N_{R}(x)\subseteq V(\mathcal{H}_{2})\cap V_{1}=S_{1}^{2}\uplus C_{1}^{2}\). Then \(N_{G}(x)\subseteq S_{1}\uplus C_{1}^{2}\), and so \[|S_{1}|+|C_{1}^{2}|\geq\delta(G)>\frac{3}{4}(m+n-1). \tag{76}\] Since \(m\geq n+1\), \[|C_{1}^{2}|\stackrel{{\eqref{eq:C_2}}}{{>}}\frac{3}{4}(m+n-1)-|S _{1}|\stackrel{{\eqref{eq:C_2}}}{{\geq}}\frac{3m+n-1}{4}\geq\frac {m+n}{2}. \tag{77}\] **Claim 10**.: _The following holds._ (i)_\(V(\mathcal{H}_{1})\cap V_{1}=B_{1}\uplus C_{1}^{1}\) and \(V(\mathcal{H}_{1})\cap V_{2}=C_{2}^{1}\)._ (ii)_\(V(\mathcal{H}_{2})\cap V_{1}=C_{1}^{2}\) and \(V(\mathcal{H}_{2})\cap V_{2}=B_{2}\uplus C_{2}^{2}\)._ Proof.: Let \(x\in C_{2}^{1}=V(\mathcal{H}_{1})\cap V_{2}^{\prime}\). By Claim 8(ii), \(N_{B}(x)\subseteq V_{1}^{\prime}\). Since \(\mathcal{H}_{1}\) is a largest red component containing \(C_{2}^{1}\), \(N_{R}(x)\subseteq V(\mathcal{H}_{1})\cap V_{1}=B_{1}^{\prime}\uplus S_{1}^{1} \uplus C_{1}^{1}\). Then \(N_{G}(x)\cap(S_{1}^{2}\uplus S_{1}^{3})=\emptyset\). Now \(G[S_{1}^{2}\uplus S_{1}^{3},C_{2}^{1}]=\emptyset\). Then \(S_{1}^{2}\uplus S_{1}^{3}=\emptyset\), otherwise by Claim 8(iii), \(|C_{2}^{1}|<\frac{m+n-1}{4}\), a contradiction to (75). Thus \(S_{1}\subseteq V(\mathcal{H}_{1})\cap V_{1}\). By Claim 9, \(B_{1}\subseteq V(\mathcal{H}_{1})\cap V_{1}\). Then \(V(\mathcal{H}_{1})\cap V_{1}=B_{1}\uplus C_{1}^{1}\). Since \(V(\mathcal{H}_{1})\cap V(\mathcal{H}_{2})=\emptyset\), \(V(\mathcal{H}_{2})\cap V_{1}=C_{1}^{2}\). Let \(x\in C_{1}^{2}=V(\mathcal{H}_{2})\cap V_{1}^{\prime}\). Since \(\mathcal{H}_{2}\) is a largest red component containing \(C_{1}^{2}\), \(N_{R}(x)\subseteq V(\mathcal{H}_{2})\cap V_{2}=B_{2}^{\prime}\uplus S_{2}^{2} \uplus C_{2}^{2}\). By Claim 8(ii), \(N_{B}(x)\subseteq V_{2}^{\prime}\). So \(N_{G}(x)\cap(S_{2}^{1}\uplus S_{2}^{3})=\emptyset\). Now \(G[C_{1}^{2},S_{2}^{1}\uplus S_{2}^{3}]=\emptyset\). Then \(S_{2}^{1}\uplus S_{2}^{3}=\emptyset\), otherwise by Claim 8(iii), \(|C_{1}^{2}|<\frac{m+n-1}{4}\), a contradiction to inequality (77). Thus \(S_{2}\subseteq V(\mathcal{H}_{2})\cap V_{2}\). By Claim 9, \(B_{2}\subseteq V(\mathcal{H}_{2})\cap V_{2}\). Then \(V(\mathcal{H}_{2})\cap V_{2}=B_{2}\uplus C_{2}^{2}\). Since \(V(\mathcal{H}_{1})\cap V(\mathcal{H}_{2})=\emptyset\), \(V(\mathcal{H}_{1})\cap V_{2}=C_{2}^{1}\). Thus we have the following results: (i) \(V(\mathcal{H}_{1})\cap V_{1}=B_{1}\uplus C_{1}^{1}\) and \(V(\mathcal{H}_{1})\cap V_{2}=C_{2}^{1}\). (ii) \(V(\mathcal{H}_{2})\cap V_{1}=C_{1}^{2}\) and \(V(\mathcal{H}_{2})\cap V_{2}=B_{2}\uplus C_{2}^{2}\). Now we split our argument into two cases. **Case 1.**\(|C_{2}^{1}|\leq\frac{m+n-1}{2}\). Note that \(|S_{2}|\stackrel{{\eqref{eq:2011}}}{{>}}\frac{3}{4}(m+n-1)-|C_{2}^{1}|> \frac{m+n-1}{4}\). Then \[|S_{1}|\stackrel{{\eqref{eq:2011}}}{{\leq}}n-1-|S_{2}|<\frac{3n-m-3 }{4}, \tag{78}\] implying that \[m+4\leq 3n. \tag{79}\] Then \[|C_{1}^{2}|\stackrel{{\eqref{eq:2011}}}{{>}}\frac{3}{4}(m+n-1)-| S_{1}|\stackrel{{\eqref{eq:2011}}}{{>}}m. \tag{80}\] For each \(i\in[2]\), we have that \[|B_{i}|\stackrel{{\eqref{eq:2011}}}{{\geq}}n\stackrel{{ \eqref{eq:2011}}}{{\geq}}\frac{m+n}{4}+1. \tag{81}\] Let \(x\in B_{2}\). Since \(\mathcal{B}\) is a largest blue component, \(N_{B}(x)\subseteq B_{1}\). By Claim 9 and Claim 10(ii), \(N_{R}(x)\subseteq V(\mathcal{H}_{2})\cap V_{1}=C_{1}^{2}\). So \(N_{G}(x)\cap(V_{1}^{\prime}\backslash C_{1}^{2})=N_{G}(x)\cap(C_{1}^{1} \uplus C_{1}^{3})=\emptyset\). Then \(G[C_{1}^{1}\uplus C_{1}^{3},B_{2}]=\emptyset\). Now \(C_{1}^{1}\uplus C_{1}^{3}=\emptyset\), otherwise by Claim 8(iii), \(|B_{2}|<\frac{m+n-1}{4}\), a contradiction to inequality (81). Then \(V_{1}^{\prime}=C_{1}^{2}\), and so \(|V_{1}^{\prime}|=|C_{1}^{2}|\stackrel{{\eqref{eq:2011}}}{{>}}m\), a contradiction to inequality (70). **Case 2.**\(|C_{2}^{1}|>\frac{m+n-1}{2}\). Let \(x\in C_{1}^{2}=V(\mathcal{H}_{2})\cap V_{1}^{\prime}\). By Claim 8(ii), \(N_{B}(x)\subseteq V_{2}^{\prime}=C_{2}^{1}\uplus C_{2}^{2}\uplus C_{2}^{3}\). Since \(\mathcal{H}_{2}\) is a largest red component containing \(C_{1}^{2}\), by Claim 10(ii), \(N_{R}(x)\subseteq V(\mathcal{H}_{2})\cap V_{2}=B_{2}\uplus C_{2}^{2}\). Then \[|N_{B}(x)\cap(C_{2}^{1}\uplus C_{2}^{3})|\geq\delta(G)-|B_{2}\uplus C_{2}^{2}|. \tag{82}\] For any pair of vertices \(x,x^{\prime}\in C_{1}^{2}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(x^{\prime})| \geq|N_{B}(x^{\prime})\cap(C_{2}^{1}\uplus C_{2}^{3})|+|N_{B}(x^{ \prime})\cap(C_{2}^{1}\uplus C_{2}^{3})|-|C_{2}^{1}\uplus C_{2}^{3}|\] \[\stackrel{{\eqref{eq:2011}}}{{\geq}}2(\delta(G)-|B _{2}\uplus C_{2}^{2}|)-|C_{2}^{1}\uplus C_{2}^{3}|\] \[=2(\delta(G)-|V_{2}|)+|C_{2}^{1}\uplus C_{2}^{3}|>|C_{2}^{1} \uplus C_{2}^{3}|-\frac{m+n-1}{2}>0,\] by the assumption \(|C_{2}^{1}|>\frac{m+n-1}{2}\). Thus \(C_{1}^{2}\) is contained in a blue component of \(G\), say \(\mathcal{H}\). Let \(y\in C_{2}^{1}=V(\mathcal{H}_{1})\cap V_{2}^{\prime}\). By Claim 8(ii), \(N_{B}(y)\subseteq V_{1}^{\prime}=C_{1}^{1}\uplus C_{1}^{2}\uplus C_{1}^{3}\). Since \(\mathcal{H}_{1}\) is a largest red component containing \(C_{2}^{1}\), by Claim 10(i), \(N_{R}(y)\subseteq V(\mathcal{H}_{1})\cap V_{1}=B_{1}\uplus C_{1}^{1}\). Then \[|N_{B}(y)\cap C_{1}^{2}| \geq\delta(G)-|B_{1}\uplus C_{1}^{1}\uplus C_{1}^{3}|=\delta(G)-| V_{1}\backslash C_{1}^{2}|\] \[>|C_{1}^{2}|-\frac{m+n-1}{4}\stackrel{{\eqref{eq:2011}}} {{>}}\frac{m+n+1}{4}.\] Since \(C_{1}^{2}\subseteq V(\mathcal{H})\cap V_{1}\), \(C_{2}^{1}\subseteq V(\mathcal{H})\cap V_{2}\). Now \(|\mathcal{H}\cap V_{1}|\geq|C_{1}^{2}|\stackrel{{\eqref{eq:C_{1}^{ \prime}}}}{{>}}\frac{m+n}{2}>|V_{1}|-|C_{1}^{2}|\geq|B_{1}|\). By the assumption \(|C_{2}^{1}|>\frac{m+n-1}{2}\), \(|\mathcal{H}\cap V_{2}|\geq|C_{2}^{1}|>\frac{m+n-1}{2}\geq|V_{2}|-|C_{2}^{1}| \geq|B_{2}|\), a contradiction to the maximality of \(\mathcal{B}\). **Proof of Theorem 1.5.** Suppose that there exists a red-blue-edge-coloring of \(G\) yielding neither red connected m-matching, nor blue connected n-matching. By Lemma 3.2, we can assume that \(\mathcal{B}\) is a largest blue component such that \(|\mathcal{B}\cap V_{i}|\geq n\) for each \(i\in[2]\). Let \(S\) be a minimum vertex cover of \(\mathcal{B}\). By Lemma 3.3, \(\mathcal{B}\backslash S\) is contained in some red component \(\mathcal{R}\). Let \(T\) be a minimum vertex cover of \(\mathcal{R}\). For each \(i\in[2]\), let \(B_{i}=V(\mathcal{B})\cap V_{i}\), \(S_{i}=S\cap V_{i}\), \(B_{i}^{\prime}=B_{i}\backslash S_{i}\) and \(V_{i}^{\prime}=V_{i}\backslash B_{i}\). For each \(i\in[2]\), let \(R_{i}=V(\mathcal{R})\cap V_{i}\), \(T_{i}=T\cap V_{i}\), \(T_{i}^{\prime}=T_{i}\backslash B_{i}\), \(R_{i}^{\prime}=R_{i}\backslash(B_{i}\cup T_{i})\), and \(V_{i}^{\prime\prime}=V_{i}\backslash(B_{i}\cup R_{i})\). For each \(i\in[2]\), \(B_{i}^{\prime}\subseteq B_{i}\cap R_{i}\) and \(V_{i}^{\prime}=R_{i}^{\prime}\uplus T_{i}^{\prime}\uplus V_{i}^{\prime\prime}\). **Claim 11**.: _For each \(i\in[2]\), the following holds._ (i) _If \(x\in B_{i}^{\prime}\backslash T_{i}\), then \(N_{B}(x)\subseteq S_{3-i}\) and \(N_{R}(x)\subseteq T_{3-i}\)._ (ii) _If \(x\in R_{i}^{\prime}\), then \(N_{B}(x)\subseteq T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}\uplus V_{3-i}^{ \prime\prime}\) and \(N_{R}(x)\subseteq T_{3-i}\)._ (iii) _If \(x\in V_{i}^{\prime\prime}\), then \(N_{B}(x)\subseteq T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}\uplus V_{3-i}^{ \prime\prime}\) and \(N_{R}(x)\subseteq(S_{3-i}\backslash R_{3-i})\uplus V_{3-i}^{\prime\prime}\)._ (iv) _If \(x\in S_{i}\backslash R_{i}\), then \(N_{B}(x)\subseteq B_{3-i}\) and \(N_{R}(x)\subseteq T_{3-i}\)._ (vi) _\(G[V_{i}^{\prime\prime},B_{3-i}\cap R_{3-i}]=\emptyset\)._ (vii) _\(G[S_{i}\backslash R_{i},T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}]=\emptyset\)._ (viii) _Suppose that \(G[X_{1},X_{2}]=\emptyset\) for \(X_{1}\subset V_{1}\) and \(X_{2}\subset V_{2}\). If \(X_{j}\neq\emptyset\) for some \(j\in[2]\), then \(|X_{3-j}|<\frac{m+n-1}{4}\)._ Proof.: (i) Let \(x\in B_{i}^{\prime}\backslash T_{i}\subseteq(B_{i}\cap R_{i})\backslash T_{i}\). Since \(T\) is a minimum vertex cover of \(\mathcal{R}\), \(N_{R}(x)\subseteq T_{3-i}\). Since \(x\in B_{i}\backslash S_{i}\) and \(S\) is a minimum vertex cover of \(\mathcal{B}\), \(N_{B}(x)\subseteq S_{3-i}\). (ii) Let \(x\in R_{i}^{\prime}=R_{i}\backslash(B_{i}\cup T_{i})\). If \(N_{B}(x)\cap B_{3-i}\neq\emptyset\), then since \(\mathcal{B}\) is a largest blue component, \(x\in B_{i}\), a contradiction. Thus \(N_{B}(x)\subseteq V_{3-i}\backslash B_{3-i}=T_{3-i}^{\prime}\uplus R_{3-i}^{ \prime}\uplus V_{3-i}^{\prime\prime}\). Since \(T\) is a minimum vertex cover of \(\mathcal{R}\) and \(x\in R_{i}\backslash T_{i}\), \(N_{R}(x)\subseteq T_{3-i}\). (iii) Let \(x\in V_{i}^{\prime\prime}=V_{i}\backslash(B_{i}\cup R_{i})\). If \(N_{B}(x)\cap B_{3-i}\neq\emptyset\), then since \(\mathcal{B}\) is a largest blue component, \(x\in B_{i}\), a contradiction. Thus \(N_{R}(x)\cap R_{3-i}\neq\emptyset\), then since \(\mathcal{R}\) is a largest red component, \(x\in R_{i}\), a contradiction. Thus \(N_{R}(x)\subseteq V_{3-i}\backslash R_{3-i}=(S_{3-i}\backslash R_{3-i})\uplus V _{3-i}^{\prime\prime}\). (iv) Let \(x\in S_{i}\backslash R_{i}\). Since \(x\in B_{i}\) and \(\mathcal{B}\) is a largest blue component, \(N_{B}(x)\subseteq B_{3-i}\). If \(N_{R}(x)\cap R_{3-i}\neq\emptyset\), then since \(\mathcal{R}\) is a largest red component, \(x\in R_{i}\), a contradiction. Thus \(N_{R}(x)\subseteq V_{3-i}\backslash R_{3-i}=(S_{3-i}\backslash R_{3-i})\uplus V _{3-i}^{\prime\prime}\). (v) Let \(x\in(S_{i}\cap R_{i})\backslash T_{i}\). Since \(x\in B_{i}\) and \(\mathcal{B}\) is a largest blue component, \(N_{B}(x)\subseteq B_{3-i}\). Since \(x\in R_{i}\backslash T_{i}\) and \(T\) is a minimum vertex cover of \(\mathcal{R}\), \(N_{R}(x)\subseteq T_{3-i}\). (vi) If \(V_{i}^{\prime\prime}=\emptyset\) or \(B_{3-i}\cap R_{3-i}=\emptyset\), then we are done. Suppose that \(V_{i}^{\prime\prime}\neq\emptyset\) and \(B_{3-i}\cap R_{3-i}\neq\emptyset\). For any \(x\in V_{3-i}^{\prime\prime}\), by (iii), \(N_{B}(x)\subseteq V_{3-i}\backslash B_{3-i}\) and \(N_{R}(x)\subseteq V_{3-i}\backslash R_{3-i}\), then \(N_{G}(x)\cap(B_{3-i}\cap R_{3-i})=\emptyset\). Thus \(G[V_{i}^{\prime\prime},B_{3-i}\cap R_{3-i}]=\emptyset\). (vii) If \(S_{i}\backslash R_{i}=\emptyset\) or \(T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}=\emptyset\), then we are done. Suppose that \(S_{i}\backslash R_{i}\neq\emptyset\) and \(T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}\neq\emptyset\). For any \(x\in S_{i}\backslash R_{i}\), by (iv), \(N_{B}(x)\subseteq B_{3-i}\) and \(N_{R}(x)\subseteq(S_{3-i}\backslash R_{3-i})\uplus V_{3-i}^{\prime\prime}\), then \(N_{G}(x)\cap(T_{3-i}^{\prime}\uplus R_{3-i}^{\prime})=\emptyset\). Thus \(G[S_{i}\backslash R_{i},T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}]=\emptyset\). (viii) Without loss of generality, suppose that \(X_{1}\neq\emptyset\). Since \(G[X_{1},X_{2}]=\emptyset\), for any \(x\in X_{1}\), \(N_{G}(x)\cap X_{2}=\emptyset\). Recall that \(\delta(G)>\frac{3}{4}(m+n-1)\), then \(|X_{2}|\leq|V_{2}\backslash N_{G}(x)|\leq|V_{2}|-\delta(G)<\frac{m+n-1}{4}\). By the hypothesis, for each \(i\in[2]\), \[|B_{i}|=|S_{i}|+|B_{i}^{\prime}|\geq n, \tag{83}\] then \[|V_{i}^{\prime}|=|V_{i}|-|B_{i}|\leq m-1. \tag{84}\] By Theorem 3.1, \[|S_{1}|+|S_{2}|=|S|=\alpha^{\prime}(\mathcal{B})\leq n-1, \tag{85}\] and \[|T_{1}|+|T_{2}|=|T|=\alpha^{\prime}(\mathcal{R})\leq m-1. \tag{86}\] Then \[|S\cup T|\leq|S|+|T|\leq m+n-2. \tag{87}\] Let \(i\in[2]\). By inequalities (83) and (85), we have that \(B_{i}^{\prime}\neq\emptyset\). Since \(B_{i}^{\prime}\subseteq B_{i}\cap R_{i}\), by Claim 11(vi) and (viii), we have that \[|V_{i}^{\prime\prime}|<\frac{m+n-1}{4}. \tag{88}\] **Claim 12**.: _The following holds._ (i) _Either \(B_{1}^{\prime}\subseteq T_{1}\) or \(B_{2}^{\prime}\subseteq T_{2}\). Furthermore, assume that \(B_{1}^{\prime}\subseteq T_{1}\)._ (ii)_\(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\neq\emptyset\)._ Proof.: Let \(i\in[2]\). If \(x\in B_{i}^{\prime}\backslash T_{i}\), then by Claim 11(i), \(N_{G}(x)\subseteq S_{3-i}\cup T_{3-i}\), so \(|S_{3-i}\cup T_{3-i}|\geq\delta(G)\). If \(B_{i}^{\prime}\backslash T_{i}\neq\emptyset\) for each \(i\in[2]\), then \(|S\cup T|=|S_{1}\cup T_{1}|+|S_{2}\cup T_{2}|\geq 2\delta(G)>\frac{3}{2}(m+n-1)\), a contradiction to inequality (87). Thus either \(B_{1}^{\prime}\subseteq T_{1}\) or \(B_{2}^{\prime}\subseteq T_{2}\). Without loss of generality, assume that \(B_{1}^{\prime}\subseteq T_{1}\). (ii) Since \(B_{1}=B_{1}^{\prime}\uplus S_{1}\), by (i), \(B_{1}\cup T_{1}=S_{1}\cup T_{1}\). Then \(|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|=|V_{1}\backslash(B_{1}\cup T_{1})|=|V _{1}\backslash(S_{1}\cup T_{1})|\stackrel{{(\ref{eq:2})}}{{\geq}}1\). So \(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\neq\emptyset\). By Claim 12(i), we have that \[V_{1}=(S_{1}\backslash T_{1})\uplus T_{1}\uplus R_{1}^{\prime}\uplus V_{1}^{ \prime\prime}. \tag{89}\] Let \(B_{2}^{\prime\prime}=B_{2}^{\prime}\backslash T_{2}\), \(ST_{2}=S_{2}\cap T_{2}\), \(SR_{2}=(S_{2}\cap R_{2})\backslash T_{2}\) and \(S_{2}^{\prime}=S_{2}\backslash R_{2}\). Then \[V_{2}=S_{2}^{\prime}\uplus SR_{2}\uplus B_{2}^{\prime\prime}\uplus T_{2}\uplus R _{2}^{\prime}\uplus V_{2}^{\prime\prime}. \tag{90}\] Let \(x\in R_{1}^{\prime}\). By Claim 11(ii), \(N_{B}(x)\subseteq T_{2}^{\prime}\uplus R_{2}^{\prime}\uplus V_{2}^{\prime\prime}\) and \(N_{R}(x)\subseteq T_{2}\). Then \[|N_{B}(x)\cap R_{2}^{\prime}|\geq\delta(G)-|T_{2}\uplus V_{2}^{\prime\prime}|. \tag{91}\] By (90), \(N_{G}(x)\cap(S_{2}^{\prime}\uplus SR_{2}\uplus B_{2}^{\prime\prime})=\emptyset\). Then \(G[R_{1}^{\prime},S_{2}^{\prime}\uplus SR_{2}\uplus B_{2}^{\prime\prime}]=\emptyset\). By Claim 11(vi), since \(B_{2}\cap R_{2}=ST_{2}\uplus SR_{2}\uplus B_{2}^{\prime}\), \(G[V_{1}^{\prime\prime},ST_{2}\uplus SR_{2}\uplus B_{2}^{\prime}]=\emptyset\). Thus \(G[R_{1}^{\prime}\uplus V_{1}^{\prime\prime},SR_{2}\uplus B_{2}^{\prime\prime}]=\emptyset\). By Claim 12(ii) and Claim 11(viii), we have that \[|B_{2}^{\prime\prime}\uplus SR_{2}|<\frac{m+n-1}{4}. \tag{92}\] For any \(y\in V_{1}^{\prime\prime}\), by Claim 11(iii), \(N_{B}(y)\subseteq T_{2}^{\prime}\uplus R_{2}^{\prime}\uplus V_{2}^{\prime\prime}\) and \(N_{R}(y)\subseteq S_{2}^{\prime}\uplus V_{2}^{\prime\prime}\), then \[|N_{B}(y)\cap R_{2}^{\prime}|\geq\delta(G)-|S_{2}^{\prime}\uplus T_{2}^{\prime }\uplus V_{2}^{\prime\prime}|. \tag{93}\] Now we split our argument into two cases. **Case 1.**\(B_{1}^{\prime}\subseteq T_{1}\) and \(B_{2}^{\prime}\backslash T_{2}\neq\emptyset\). Now \(B_{2}^{\prime\prime}=B_{2}^{\prime}\backslash T_{2}\neq\emptyset\). For any \(x\in B_{2}^{\prime\prime}\), by Claim 11(i), \(N_{G}(x)\subseteq S_{1}\cup T_{1}\). Then \[|S_{1}\cup T_{1}|\geq\delta(G)>\frac{3}{4}(m+n-1), \tag{94}\] and so \[|S_{2}\cup T_{2}|=|S\cup T|-|S_{1}\cup T_{1}|\stackrel{{(\ref{eq:2})} }{{<}}\frac{m+n-5}{4}. \tag{95}\] Now \[|B_{2}\cup T_{2}|=|S_{2}^{\prime}\uplus SR_{2}\uplus B_{2}^{\prime\prime} \uplus T_{2}|=|S_{2}^{\prime}\uplus T_{2}|+|SR_{2}\uplus B_{2}^{\prime\prime}|\] \[\stackrel{{(\ref{eq:2})}}{{<}}|S_{2}\cup T_{2}|+\frac{m+n-1}{4} \stackrel{{(\ref{eq:2})}}{{<}}\frac{m+n-3}{2},\] then \[|R_{2}^{\prime}|=|V_{2}|-|B_{2}\cup T_{2}|-|V_{2}^{\prime\prime}|\stackrel{{ (\ref{eq:2})}}{{>}}\frac{m+n+3}{4}. \tag{96}\] By Claim 12(i), \(B_{1}\cup T_{1}=S_{1}\cup T_{1}\), then \[|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|=|V_{1}\backslash(B_{1}\cup T_{1})|=| V_{1}\backslash(S_{1}\cup T_{1})|\ \stackrel{{\eqref{eq:2011}}}{{<}}\frac{m+n-1}{4}. \tag{97}\] For any \(y\in R_{2}^{\prime}\), by Claim 11(ii), \(N_{B}(y)\subseteq T_{1}^{\prime}\uplus R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\) and \(N_{R}(y)\subseteq T_{1}\), then \(N_{G}(y)\cap(S_{1}\backslash T_{1})=\emptyset\) by (89). So \(G[S_{1}\backslash T_{1},R_{2}^{\prime}]=\emptyset\). If \(S_{1}\backslash T_{1}\neq\emptyset\), then by Claim 11(viii), \(|R_{2}^{\prime}|<\frac{m+n-1}{4}\), contradicting to inequality (96). Thus \(S_{1}\subseteq T_{1}\). By Claim 12(i), \(B_{1}\subseteq T_{1}\). Now, \(V_{1}=T_{1}\uplus R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\). Then \[|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|=|V_{1}|-|T_{1}|\ \stackrel{{\eqref{eq:2011}}}{{ \geq}}n. \tag{98}\] Combining with inequality (97), we have that \[m\geq 3n+2. \tag{99}\] By the assumption \(B_{2}^{\prime\prime}\neq\emptyset\), we have that \[|S_{2}^{\prime}\uplus T_{2}\uplus V_{2}^{\prime\prime}|\stackrel{{ \eqref{eq:2011}}}{{=}}|V_{2}\backslash(SR_{2}\uplus B_{2}^{\prime \prime}\uplus R_{2}^{\prime})|<|V_{2}\backslash R_{2}^{\prime}|. \tag{100}\] **Claim 13**.: \(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\) _is contained in some blue component of \(G\), say \(\mathcal{F}\)._ Proof.: For any pair of vertices \(x,y\in R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(y)| \geq|N_{B}(x)\cap R_{2}^{\prime}|+|N_{B}(y)\cap R_{2}^{\prime}|- |R_{2}^{\prime}|\] \[\stackrel{{\eqref{eq:2011}}}{{\geq}}2(\delta(G)-|S_ {2}^{\prime}\uplus T_{2}\uplus V_{2}^{\prime\prime}|)-|R_{2}^{\prime}|\] \[\geq 2(\delta(G)-|S_{2}\cup T_{2}|-|V_{2}^{\prime\prime}|)-|R_{2}^ {\prime}|\] \[\stackrel{{\eqref{eq:2011}}}{{>}}\frac{m+n+3}{2}-|R _{2}^{\prime}|, \tag{101}\] and \[|N_{B}(x)\cap N_{B}(y)| \geq|N_{B}(x)\cap R_{2}^{\prime}|+|N_{B}(y)\cap R_{2}^{\prime}|- |R_{2}^{\prime}|\] \[\stackrel{{\eqref{eq:2011}}}{{\geq}}2(\delta(G)-|S_ {2}^{\prime}\uplus T_{2}\uplus V_{2}^{\prime\prime}|)-|R_{2}^{\prime}|\] \[\stackrel{{\eqref{eq:2011}}}{{>}}2(\delta(G)-|V_{2} \backslash R_{2}^{\prime}|)-|R_{2}^{\prime}|\] \[=2\delta(G)-2|V_{2}|+|R_{2}^{\prime}|>|R_{2}^{\prime}|-\frac{m+n- 1}{2}. \tag{102}\] If \(|R_{2}^{\prime}|<\frac{m+n+3}{2}\), then the rightside of inequality (101) is at least 1. If \(|R_{2}^{\prime}|\geq\frac{m+n+3}{2}\), then the rightside of inequality (102) is at least 1. Thus we have that \(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\) is contained in some blue component of \(G\), say \(\mathcal{F}\) Let \(J\) be a minimum vertex cover of \(\mathcal{F}\). For each \(i\in[2]\), let \(J_{i}=J\cap V_{i}\). By Theorem 3.1, \[|J_{1}|+|J_{2}|=|J|=\alpha^{\prime}(\mathcal{F})\leq n-1. \tag{103}\] Now \((R_{1}^{\prime}\uplus V_{1}^{\prime\prime})\backslash J_{1}\neq\emptyset\), otherwise \(|J_{1}|\geq|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|\stackrel{{ \eqref{eq:J1}}}{{\geq}}n\), a contradiction to inequality (103). Let \(x\in(R_{1}^{\prime}\uplus V_{1}^{\prime\prime})\backslash J_{1}\). By Claim 11(ii)-(iii), \(N_{R}(x)\subseteq S_{2}^{\prime}\uplus T_{2}\uplus V_{2}^{\prime\prime}\). Since \(J\) is a minimum vertex cover of \(\mathcal{F}\), \(N_{B}(x)\subseteq J_{2}\). Then \(|J_{2}\cup(T_{2}\uplus S_{2}^{\prime}\uplus V_{2}^{\prime\prime})|\geq\delta(G)\). So \[|J_{2}|\geq\delta(G)-|T_{2}\uplus S_{2}^{\prime}\uplus V_{2}^{\prime\prime}| \geq\delta(G)-|T_{2}\cup S_{2}|-|V_{2}^{\prime\prime}|\stackrel{{ \eqref{eq:J1}}}{{\geq}}\frac{m+n+3}{4}\stackrel{{\eqref{eq:J1}} }{{\geq}}n+\frac{5}{4}, \tag{95}\] a contradiction to inequality (103). **Case 2.** For each \(i\in[2]\), \(B_{i}^{\prime}\subseteq T_{i}\). For each \(i\in[2]\), we have that \(B_{i}\subseteq S_{i}\cup T_{i}\). **Claim 14**.: _Either \(S_{1}\subseteq T_{1}\) or \(S_{2}\subseteq T_{2}\). Furthermore, assume that \(S_{1}\subseteq T_{1}\). Now \(B_{1}\subseteq T_{1}\)._ Proof.: Let \(i\in[2]\) and let \(x\in S_{i}\backslash T_{i}\). If \(x\in S_{i}\backslash R_{i}\), by Claim 11(iv), \(N_{G}(x)\subseteq B_{3-i}\uplus V_{3-i}^{\prime\prime}\), then \(|B_{3-i}\uplus V_{3-i}^{\prime\prime}|\geq\delta(G)\), thus \[|S_{3-i}\cup T_{3-i}|\geq|B_{3-i}|\geq\delta(G)-|V_{3-i}^{\prime\prime}| \stackrel{{\eqref{eq:J1}}}{{>}}\frac{m+n-1}{2}. \tag{104}\] If \(x\in(S_{i}\cap R_{i})\backslash T_{i}\), by Claim 11(v), \(N_{G}(x)\subseteq B_{3-i}\cup T_{3-i}\), then \[|S_{3-i}\cup T_{3-i}|=|B_{3-i}\cup T_{3-i}|\geq\delta(G)>\frac{3}{4}(m+n-1). \tag{105}\] If \(S_{i}\backslash T_{i}\neq\emptyset\) for each \(i\in[2]\), then \(|S_{i}\cup T_{i}|\stackrel{{\eqref{eq:J1}}}{{>}}\frac{m+n-1}{2}\), and so \(|S\cup T|=|S_{1}\cup T_{1}|+|S_{2}\cup T_{2}|\geq m+n\), a contradiction to inequality (87). Thus either \(S_{1}\subseteq T_{1}\) or \(S_{2}\subseteq T_{2}\). Without loss of generality, assume that \(S_{1}\subseteq T_{1}\). By Claim 12(i), \(B_{1}\subseteq T_{1}\). By Claim 14, \(V_{1}=T_{1}\uplus R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\), then \[|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|=|V_{1}|-|T_{1}|\stackrel{{ \eqref{eq:J1}}}{{\geq}}n. \tag{106}\] Now we split the remainder into two cases. **Subcase 2.1.**\(S_{1}\subset T_{1}\) and \(S_{2}\backslash T_{2}\neq\emptyset\). Recall that \(S_{2}^{\prime}=S_{2}\backslash R_{2}\) and \(SR_{2}=(S_{2}\cap R_{2})\backslash T_{2}\), then \(S_{2}^{\prime}\uplus SR_{2}\neq\emptyset\). **Claim 15**.: \(S_{2}^{\prime}\neq\emptyset\). Proof.: Suppose that \(S_{2}^{\prime}=\emptyset\), then \(SR_{2}\neq\emptyset\). For any \(x\in SR_{2}\), by Claim 11(v) and Claim 14, \(N_{G}(x)\subseteq T_{1}\), then \(N_{G}(x)\cap(V_{1}\backslash T_{1})=N_{G}(x)\cap(R_{1}^{\prime}\uplus V_{1}^{ \prime\prime})=\emptyset\). Thus \(G[R_{1}^{\prime}\uplus V_{1}^{\prime\prime},SR_{2}]=\emptyset\). By Claim 11(viii), \[|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|<\frac{m+n-1}{4}. \tag{107}\] Then \(|T_{1}|=|V_{1}|-|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|>\frac{3}{4}(m+n-1)\), and so \[|T_{2}|\stackrel{{\eqref{eq:S2}}}{{\leq}}m-1-|T_{1}|<\frac{m-3n- 1}{4}. \tag{108}\] By the assumption \(S_{2}^{\prime}=\emptyset\), for any pair of vertices \(x,y\in R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(y)| \geq|N_{B}(x)\cap R_{2}^{\prime}|+|N_{B}(y)\cap R_{2}^{\prime}|- |R_{2}^{\prime}|\] \[\stackrel{{\eqref{eq:S2}}}{{\geq}}2(\delta(G)-|T_{2 }\uplus V_{2}^{\prime\prime}|)-|R_{2}^{\prime}|\] \[>2\delta(G)-|V_{2}|-|T_{2}\uplus V_{2}^{\prime\prime}|\stackrel{{ \eqref{eq:S2}}}{{\geq}}n.\] Thus \(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{1}\). Let \(K\) be a minimum vertex cover of \(\mathcal{H}_{1}\). For each \(i\in[2]\), let \(K_{i}=K\cap V_{i}\). By Theorem 3.1, \[|K_{1}|+|K_{2}|=|K|=\alpha^{\prime}(\mathcal{H}_{1})\leq n-1. \tag{109}\] Now \((R_{1}^{\prime}\uplus V_{1}^{\prime\prime})\backslash K_{1}\neq\emptyset\), otherwise \(|K_{1}|\geq|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|\stackrel{{ \eqref{eq:S2}}}{{\geq}}n\), a contradiction to inequality (109). Let \(x\in(R_{1}^{\prime}\uplus V_{1}^{\prime\prime})\backslash K_{1}\). By Claim 11(ii)-(iii), \(N_{R}(x)\subseteq T_{2}\uplus V_{2}^{\prime\prime}\) since \(S_{2}^{\prime}=\emptyset\). Since \(K\) is a minimum vertex cover of \(\mathcal{H}_{1}\), \(N_{B}(x)\subseteq K_{2}\). Then \(|K_{2}\cup(T_{2}\uplus V_{2}^{\prime\prime})|\geq\delta(G)\). Since \(m\geq n+1\), \(|K_{2}|\geq\delta(G)-|T_{2}\uplus V_{2}^{\prime\prime}|\stackrel{{ \eqref{eq:S2}}}{{\geq}}\frac{m+5n-1}{4}\geq\frac{3}{2}n\), a contradiction to inequality (109). This completes the proof of Claim 15. By Claim 15, \(S_{2}^{\prime}=S_{2}\backslash R_{2}\neq\emptyset\). By Claim 11(vii)-(viii), we have that \(|T_{1}^{\prime}\uplus R_{1}^{\prime}|<\frac{m+n-1}{4}\). Then \(|B_{1}\uplus V_{1}^{\prime\prime}|=|V_{1}|-|T_{1}^{\prime}\uplus R_{1}^{\prime} |>\frac{3}{4}(m+n-1)\). By Claim 14, we have that \[|T_{1}|\geq|B_{1}|>\frac{3}{4}(m+n-1)-|V_{1}^{\prime\prime}|\stackrel{{ \eqref{eq:S2}}}{{>}}\frac{m+n-1}{2}. \tag{110}\] Then \[|T_{2}|\stackrel{{\eqref{eq:S2}}}{{\leq}}m-1-|T_{1}|<\frac{m-n-1 }{2}. \tag{111}\] **Claim 16**.: _The following holds._ (i)_\(V_{2}^{\prime\prime}=\emptyset\)._ (ii)_\(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{2}\)._ Proof.: (i) For any \(x\in V_{2}^{\prime\prime}\), by Claim 11(iii) and Claim 14, \(N_{G}(x)\subseteq T_{1}^{\prime}\uplus R_{1}^{\prime}\uplus V_{1}^{\prime \prime}=V_{1}\backslash B_{1}\). Then \(G[B_{1},V_{2}^{\prime\prime}]=\emptyset\). If \(V_{2}^{\prime\prime}\neq\emptyset\), then by Claim 11(viii), \(|B_{1}|<\frac{m+n-1}{4}\), a contradiction to inequality (110). Thus \(V_{2}^{\prime\prime}=\emptyset\). (ii) By (i), \(V_{2}=B_{2}\cup R_{2}=S_{2}^{\prime}\uplus SR_{2}\uplus T_{2}\uplus R_{2}^{\prime}\). Then \[|R_{2}^{\prime}|=|V_{2}|-|S_{2}\cup T_{2}|\stackrel{{\eqref{eq: 11}}}{{\geq}}m-|T_{2}|\stackrel{{\eqref{eq:11}}}{{>}}\frac{m+n+ 1}{2}. \tag{112}\] By (i), for any pair of vertices \(x,y\in R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(y)| \geq|N_{B}(x)\cap R_{2}^{\prime}|+|N_{B}(y)\cap R_{2}^{\prime}|-| R_{2}^{\prime}|\] \[\stackrel{{\eqref{eq:11}}}{{\geq}}2(\delta(G)-|T_{2 }\cup S_{2}^{\prime}|)-|R_{2}^{\prime}|\] \[\geq 2\delta(G)-2|V_{2}|+|R_{2}^{\prime}|>|R_{2}^{\prime}|- \frac{m+n-1}{2}\stackrel{{\eqref{eq:11}}}{{>}}1.\] Thus \(R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}_{2}\). Let \(D\) be a minimum vertex cover of \(\mathcal{H}_{2}\). For each \(i\in[2]\), let \(D_{i}=D\cap V_{i}\). By Theorem 3.1, \[|D_{1}|+|D_{2}|=|D|=\alpha^{\prime}(\mathcal{H}_{2})\leq n-1. \tag{113}\] Then \((R_{1}^{\prime}\uplus V_{1}^{\prime\prime})\backslash D_{1}\neq\emptyset\), otherwise \(|D_{1}|\geq|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|\stackrel{{ \eqref{eq:10}}}{{\geq}}n\), a contradiction to inequality (113). Let \(x\in(R_{1}^{\prime}\cup V_{1}^{\prime\prime})\backslash D_{1}\). By Claim 11(ii)-(iii) and Claim 16(i), \(N_{R}(x)\subseteq S_{2}^{\prime}\uplus T_{2}\). Since \(D\) is a minimum vertex cover of \(\mathcal{H}_{2}\), \(N_{B}(x)\subseteq D_{2}\). Then \(|D_{2}\cup(T_{2}\uplus S_{2}^{\prime})|\geq\delta(G)\), and so \[|D_{2}|\geq\delta(G)-|T_{2}\uplus S_{2}^{\prime}|\stackrel{{ \eqref{eq:11}}}{{>}}\frac{m+5n-1}{4}-|S_{2}^{\prime}|. \tag{114}\] Combining with inequality (113), we have that \[|S_{2}^{\prime}|>\frac{m+n+3}{4}. \tag{115}\] We claim that \(T_{1}^{\prime}\uplus R_{1}^{\prime}=\emptyset\), otherwise by Claim 11(vii)-(viii), \(|S_{2}^{\prime}|<\frac{m+n-1}{4}\), a contradiction to inequality (115). Then \(|V_{1}^{\prime\prime}|\stackrel{{\eqref{eq:10}}}{{\geq}}n\). Combining with inequality (88), we have that \(m\geq 3n+2\). Combining inequalities (85) and (115), we have that \(m\leq 3n-8\), a contradiction. **Subcase 2.2.** For each \(i\in[2]\), \(S_{i}\subset T_{i}\). Let \(i\in[2]\). We have that \(B_{i}\subseteq T_{i}\) and \(V_{i}=R_{i}\uplus V_{i}^{\prime\prime}=T_{i}\uplus R_{i}^{\prime}\uplus V_{i}^{ \prime\prime}\). Then \[|R_{i}^{\prime}\uplus V_{i}^{\prime\prime}|=|V_{i}|-|T_{i}|\stackrel{{ \eqref{eq:2.2}}}{{\geq}}n, \tag{116}\] and \[|R_{i}|=|T_{i}|+|R_{i}^{\prime}|=|V_{i}|-|V_{i}^{\prime\prime}|\stackrel{{ \eqref{eq:2.2}}}{{>}}\frac{3}{4}(m+n-1). \tag{117}\] Without loss of generality, by inequality (86), we can assume that \[|T_{1}|\leq\frac{m-1}{2}. \tag{118}\] Let \(i\in[2]\). For any \(x\in V_{i}^{\prime\prime}\), by Claim 11(iii), \(N_{R}(x)\subseteq V_{3-i}^{\prime\prime}\) since \(B_{3-i}\subseteq T_{3-i}\) and \(N_{B}(x)\subseteq T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}\uplus V_{3-i}^{ \prime\prime}\), then \[|N_{B}(x)\cap(T_{3-i}^{\prime}\uplus R_{3-i}^{\prime})|\geq\delta(G)-|V_{3-i} ^{\prime\prime}|. \tag{119}\] For any \(y\in R_{2}^{\prime}\), by Claim 11(ii), \(N_{R}(y)\subseteq T_{1}\) and \(N_{B}(y)\subseteq T_{1}^{\prime}\uplus R_{1}^{\prime}\uplus V_{1}^{\prime\prime}\), then \[|N_{B}(y)\cap(R_{1}^{\prime}\uplus V_{1}^{\prime\prime})|\geq\delta(G)-|T_{1}|. \tag{120}\] **Claim 17**.: _The following holds._ (i)_\(R_{2}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}\)._ (ii)_\(|R_{2}^{\prime}|\leq n-1\)._ (iii) _For each \(i\in[2]\), \(V_{i}^{\prime\prime}\neq\emptyset\)._ Proof.: (i) For any pair of vertices \(y,y^{\prime}\in R_{2}^{\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(y)\cap N_{B}(y^{\prime})| \geq|N_{B}(y)\cap(R_{1}^{\prime}\uplus V_{1}^{\prime\prime})|+|N_ {B}(y^{\prime})\cap(R_{1}^{\prime}\uplus V_{1}^{\prime\prime})|-|R_{1}^{ \prime}\uplus V_{1}^{\prime\prime}|\] \[\stackrel{{\eqref{eq:2.2}}}{{\geq}}2(\delta(G)-|T_{1}| )-|R_{1}^{\prime}\uplus V_{1}^{\prime\prime}|=2\delta(G)-|V_{1}|-|T_{1}| \stackrel{{\eqref{eq:2.2}}}{{\geq}}\frac{n}{2}.\] Thus \(R_{2}^{\prime}\) is contained in some blue component of \(G\), say \(\mathcal{H}\). Let \(U\) be a minimum vertex cover of \(\mathcal{H}\). For each \(i\in[2]\), let \(U_{i}=U\cap V_{i}\). By Theorem 3.1, \[|U_{1}|+|U_{2}|=|U|=\alpha^{\prime}(\mathcal{H})\leq n-1. \tag{121}\] (ii) Suppose that \(|R_{2}^{\prime}|\geq n\). Now \(R_{2}^{\prime}\backslash U_{2}\neq\emptyset\), otherwise \(|U_{2}|\geq|R_{2}^{\prime}|\geq n\), contradicting to inequality (121). Let \(x\in R_{2}^{\prime}\backslash U_{2}\). By Claim 11(ii), \(N_{R}(x)\subseteq T_{1}\). Since \(U\) is a minimum vertex cover of \(\mathcal{H}\), \(N_{B}(x)\subseteq U_{1}\). Then \(|U_{1}\cup T_{1}|\geq\delta(G)\). Since \(m\geq n+1\), \(|U_{1}|\geq\delta(G)-|T_{1}|\stackrel{{\eqref{eq:2.2}}}{{>}} \frac{m+3n-1}{4}\geq n\), a contradiction to inequality (121). Thus \(|R_{2}^{\prime}|\leq n-1\). (iii) Combining (ii) and inequality (116), we have that \(V_{2}^{\prime\prime}\neq\emptyset\). Suppose that \(V_{1}^{\prime\prime}=\emptyset\), then \(V_{1}=R_{1}\). For any pair of vertices \(x,y\in R_{2}^{\prime}\uplus V_{2}^{\prime\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(y)|\geq|N_{B}(x)\cap R_{1}^{\prime}|+|N_{B}(y)\cap R_{1}^{ \prime}|-|R_{1}^{\prime}|\] \[\mathop{\geq}_{(120)}2(\delta(G)-|T_{1}|)-|R_{1}^{\prime}|=2\delta(G)-|V_{1}| -|T_{1}|\mathop{>}_{>}^{(118)}\frac{n}{2}.\] By (i), the blue component \(\mathcal{H}\) contains \(R_{2}^{\prime}\uplus V_{2}^{\prime\prime}\). Now \((R_{2}^{\prime}\uplus V_{2}^{\prime\prime})\backslash U_{2}\neq\emptyset\), otherwise \(|U_{2}|\geq|R_{2}^{\prime}\uplus V_{2}^{\prime\prime}|\mathop{\geq}_{n}^{(1 16)}n\), a contradiction to inequality (121). Let \(x\in(R_{2}^{\prime}\cup V_{2}^{\prime\prime})\backslash U_{2}\). By Claim 11(ii)-(iii), since \(V_{1}^{\prime\prime}=\emptyset\) and \(B_{1}\subseteq T_{1}\), \(N_{R}(x)\subseteq T_{1}\). Since \(U\) is a minimum vertex of \(\mathcal{H}\), \(N_{B}(x)\subseteq U_{1}\). Then \(|U_{1}\cup T_{1}|\geq\delta(G)\). Since \(m\geq n+1\), \(|U_{1}|\geq\delta(G)-|T_{1}|\mathop{>}_{>}^{(118)}\frac{m+3n-1}{4}\geq n\), a contradiction to inequality (121). Thus \(V_{1}^{\prime\prime}\neq\emptyset\). Let \(i\in[2]\). Since \(B_{i}\subseteq R_{i}\), by Claim 11(vi), \(G[B_{i},V_{3-i}^{\prime\prime}]=\emptyset\). By Claim 17(iii) and Claim 11(viii), we have that \(|B_{i}|<\frac{m+n-1}{4}\). Combining with inequality (83), we have that \[m\geq 3n+2. \tag{122}\] Then \[|\mathcal{B}|=|B_{1}|+|B_{2}|<\frac{m+n-1}{2}. \tag{123}\] Now we have that \(|T_{2}|>\frac{m+n-1}{2}\), otherwise \(|R_{2}^{\prime}|=|R_{2}|-|T_{2}|\mathop{>}_{>}^{(117)}\frac{m+n-1}{4}\mathop{ \geq}_{n}^{(122)}n\), a contradiction to Claim 17(ii). Then \[|T_{1}|\mathop{\leq}_{=}^{(86)}m-1-|T_{2}|<\frac{m-n-1}{2}. \tag{124}\] **Claim 18**.: _For each \(i\in[2]\), the following holds._ _(i) \(V_{i}^{\prime\prime}\) is contained in some blue component of \(G\), say \(\mathcal{F}_{i}\)._ _(ii) \(|\mathcal{F}_{i}\cap V_{3-i}|>|\mathcal{B}|\)._ _(iii) \(|V_{i}^{\prime\prime}|\leq n-1\)._ Proof.: (i) For any pair of vertices \(x,x^{\prime}\in V_{i}^{\prime\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(x^{\prime})|\mathop{\geq}_{\geq}^{(119)}2(\delta(G)-|V_{3- i}^{\prime\prime}|)-|T_{3-i}^{\prime}\uplus R_{3-i}^{\prime}|=2\delta(G)-|V_{3- i}^{\prime}|-|V_{3-i}^{\prime\prime}|\mathop{\gtrsim}_{(88)}\frac{m+5n-1}{4}.\] Thus \(V_{i}^{\prime\prime}\) is contained in some blue component of \(G\), say \(\mathcal{F}_{i}\). (ii) By (i), \(V_{i}^{\prime\prime}\subseteq{\cal F}_{i}\cap V_{i}\). Let \(x\in V_{i}^{\prime\prime}\), then \[|{\cal F}_{i}\cap V_{3-i}|\geq|N_{B}(x)\cap(T_{3-i}^{\prime}\uplus R_{3-i}^{ \prime})|\stackrel{{\eqref{eq:V_1}}}{{\geq}}\delta(G)-|V_{3-i}^{ \prime\prime}|\stackrel{{\eqref{eq:V_1}}}{{>}}\frac{m+n-1}{2} \stackrel{{\eqref{eq:V_1}}}{{>}}|{\cal B}|.\] (iii) If \(|V_{i}^{\prime\prime}|\geq n\) for some \(i\in[2]\), then by (i)-(ii), \(|{\cal F}_{i}\cap V_{i}|\geq|V_{i}^{\prime\prime}|\geq n\) and \(|{\cal F}_{i}\cap V_{3-i}|>|{\cal B}|\), contradicting to the maximality of \({\cal B}\). Thus for each \(i\in[2]\), \(|V_{i}^{\prime\prime}|\leq n-1\). Combining Claim 18(iii) and inequality (116), \(R_{2}^{\prime}\neq\emptyset\). By Claim 17(i), \(R_{2}^{\prime}\subseteq{\cal H}\cap V_{2}\). Let \(y\in R_{2}^{\prime}\), since \(m\geq n+1\), \[|{\cal H}\cap V_{1}|\geq|N_{B}(y)|\stackrel{{\eqref{eq:V_1}}}{{ \geq}}\delta(G)-|T_{1}|\stackrel{{\eqref{eq:V_1}}}{{>}}\frac{m+5n- 1}{4}\geq\frac{3n}{2}. \tag{125}\] Suppose that \(G_{B}[V_{1}^{\prime\prime},R_{2}^{\prime}]\neq\emptyset\). By Claim 17(i) and Claim 18(i), \({\cal H}={\cal F}_{1}\). By Claim 18(ii), \(|{\cal H}\cap V_{2}|=|{\cal F}_{1}\cap V_{2}|>|{\cal B}|\) and \(|{\cal H}\cap V_{1}|\stackrel{{\eqref{eq:V_1}}}{{>}}\frac{3n}{2}\), a contradiction to the maximality of \({\cal B}\). Thus \(G_{B}[V_{1}^{\prime\prime},R_{2}^{\prime}]=\emptyset\). By Claim 11(ii)-(iii), \(G[V_{1}^{\prime\prime},R_{2}^{\prime}]=\emptyset\). Recall that \(B_{2}=BR_{2}\). By Claim 11(vi), \(G[V_{1}^{\prime\prime},B_{2}]=\emptyset\). Hence \(G[V_{1}^{\prime\prime},B_{2}\uplus R_{2}^{\prime}]=\emptyset\). By Claim 17(iii), \(V_{1}^{\prime\prime}\neq\emptyset\). By Claim 11(viii), \(|B_{2}\uplus R_{2}^{\prime}|<\frac{m+n-1}{4}\). By Claim 18(iii), \(|V_{2}^{\prime\prime}|\leq n-1\). Then \(|T_{2}^{\prime}|=|V_{2}|-|B_{2}\uplus R_{2}^{\prime}\uplus V_{2}^{\prime\prime}| \geq m-|B_{2}\uplus R_{2}^{\prime}|>\frac{3m-n+1}{4}\). Recall that \(B_{2}\subseteq T_{2}\), then \(|T_{2}|=|B_{2}|+|T_{2}^{\prime}|\stackrel{{\eqref{eq:V_1}}}{{ \geq}}n+|T_{2}^{\prime}|>\frac{3m+3n+1}{4}\). Thus \[|T_{1}|\stackrel{{\eqref{eq:V_1}}}{{\leq}}m-1-|T_{2}|<\frac{m-3n- 5}{4}. \tag{126}\] For any pair of vertices \(x,y\in R_{2}^{\prime}\uplus V_{2}^{\prime\prime}\), by inclusion-exclusion principle, we have that \[|N_{B}(x)\cap N_{B}(y)| \geq|N_{B}(x)\cap R_{1}^{\prime}|+|N_{B}(y)\cap R_{1}^{\prime}|- |R_{1}^{\prime}|\] \[\stackrel{{\eqref{eq:V_1}}}{{\geq}}2(\delta(G)-|T_{ 1}\uplus V_{1}^{\prime\prime}|)-|R_{1}^{\prime}|\] \[=2\delta(G)-|V_{1}|-|T_{1}\uplus V_{1}^{\prime\prime}|\stackrel{{ \eqref{eq:V_1}}}{{>}}n+1.\] By Claim 17(i) and Claim 18(i), \({\cal H}={\cal F}_{2}\). By Claim 18(ii), \(|{\cal H}\cap V_{1}|=|{\cal F}_{2}\cap V_{1}|>|{\cal B}|\) and \(|{\cal H}\cap V_{2}|\geq|R_{2}^{\prime}\uplus V_{2}^{\prime\prime}|\stackrel{{ \eqref{eq:V_1}}}{{\geq}}n\), a contradiction to the maximality of \({\cal B}\). \(\Box\) ## 4 Monochromatic Cycles For completeness, we will explain how to expand the large monochromatic connected matchings in the auxiliary graph into the monochromatic cycles in the initial graph in this section. The method was initially introduced by Luczak [13]. Given a graph \(G\), let \(X\) and \(Y\) be disjoint subsets of \(V(G)\). The _density_ of the pair \((X,Y)\) is the value \(d(X,Y)=\frac{e(G[X,Y])}{|X||Y|}\). For \(\epsilon>0\), the pair \((X,Y)\) is called \(\epsilon\)_-regular_ for \(G\) if \(|d(X,Y)-d(X^{\prime},Y^{\prime})|<\epsilon\) for any \(X^{\prime}\subseteq X\) and \(Y^{\prime}\subseteq Y\) with \(|X^{\prime}|>\epsilon|X|\) and \(|Y^{\prime}|>\epsilon|Y|\). **Fact 4.1**.: _Let \((U,V)\) be an \(\epsilon\)-regular pair with density d and \(V^{\prime}\subseteq V\) with \(|V^{\prime}|>\epsilon|V|\), then all but at most \(\epsilon|U|\) vertices \(u\in U\) satisfying \(|N(u)\cap V^{\prime}|>(d-\epsilon)|V^{\prime}|\)._ We will use the following bipartite degree form for \(2\)-colored regularity lemma adapted to our needs. **Lemma 4.1** (\(2\)-colored Regularity Lemma-Bipartite Degree Form [18]).: _For any \(\epsilon>0\) and positive integer \(k_{0}\), there exists an \(M=M(\epsilon,k_{0}\)) such that for any \(2\)-edge-colored balanced bipartite graph \(G[X,Y]\) on order \(2N\geq M\) and any \(d\in[0,1]\), there exists an integer \(k\), a partition \(\{X_{0},X_{1},\ldots,X_{k}\}\) of \(X\), a partition \(\{Y_{0},Y_{1},\ldots,Y_{k}\}\) of \(Y\), and a subgraph \(G^{\prime}\subseteq G\) with the following properties:_ (i)_\(|X_{0}|=|Y_{0}|\leq\epsilon N\)._ (ii)_\(k_{0}\leq k\leq M\)._ (iii) _For any \(1\leq i,j\leq k\), \(|X_{i}|=|Y_{j}|=n\)._ (iv) _For any \(v\in V(G)\), \(d_{G^{\prime}}(v)>d_{G}(v)-(2d+\epsilon)N\)._ (v) _For any \(1\leq i,j\leq k\), the pair \((X_{i},Y_{j})\) is \(\epsilon\)-regular for \(G^{\prime}_{R}\) with density either \(0\) or greater than \(d\), and \(\epsilon\)-regular for \(G^{\prime}_{B}\) with density either \(0\) or greater than \(d\), where \(E(G^{\prime})=E(G^{\prime}_{R})\cup E(G^{\prime}_{B})\) is the induced \(2\)-edge-coloring of \(G^{\prime}\)._ **Definition 4.1** (\((\epsilon,\)\(\mathrm{d})\)-reduced graph).: _Given a bipartite graph \(G[X,Y]\), a partition \(\{X_{0},X_{1},\ldots,X_{k}\}\) of \(X\) and a partition \(\{Y_{0},Y_{1},\ldots,Y_{k}\}\) of \(Y\) satisfying properties_ (i)-(v) _of Lemma 4.1, we define the \((\epsilon,d)\)-reduced \(2\)-colored bipartite graph \(\Gamma\) on vertex set \(\{x_{i}:i\in[k]\}\uplus\{y_{j}:j\in[k]\}\) as follows. For any \(1\leq i,j\leq k\),_ \(\bullet\) _let \(x_{i}y_{j}\) be a red edge of \(\Gamma\) when \(G^{\prime}_{R}[X_{i},Y_{j}]\) has density at least \(d\);_ \(\bullet\) _let \(x_{i}y_{j}\) be a blue edge of \(\Gamma\) when \(G^{\prime}_{B}[X_{i},Y_{j}]\) has density at least \(d\)._ The next lemma [[3], Lemma 2.2] due to Benevides, Luczak, Scott, Skokan and White guarantees a long monochromatic path in a regular pair. **Lemma 4.2** (Benevides, Luczak, Scott, Skokan and White [3]).: _For every \(0<\beta<1\), there is an \(m_{0}(\beta)\) such that for every \(m>m_{0}(\beta)\) the following holds: Let \(G\) be a graph, and let \(V_{1}\), \(V_{2}\) be disjoint subsets of \(V(G)\) such that \(|V_{1}|,|V_{2}|\geq m\). Furthermore let the pair \((V_{1},V_{2})\) be \(\epsilon\)-regular for G with density at least \(\frac{\beta}{4}\) for some \(0<\epsilon<\frac{\beta}{4}\). Then for every pair of vertices \(v_{1}\in V_{1}\), \(v_{2}\in V_{2}\) satisfying \(|N_{G}(v_{1})\cap V_{2}|,|N_{G}(v_{2})\cap V_{1}|\geq\frac{\beta m}{5}\), and for every \(1\leq l\leq m-\frac{5\epsilon m}{\beta}\), \(G\) contains a path of length \(2l+1\) connecting \(v_{1}\) and \(v_{2}\)._ **Proof of Theorem 1.6.** Assume that \(0<\eta<\frac{1}{1000}\) and \(N\) is large enough. Let \(G[X,Y]\) be a balanced bipartite graph on \(2(N-1)\) vertices with \(\delta(G)\geq(\frac{3}{4}+3\eta)(N-1)\). Let \(\epsilon=\eta^{3}\) and \(d=\eta\). By Lemma 4.1, there exists a partition \(\{U_{0}^{(1)},U_{1}^{(1)},\ldots,U_{k-1}^{(1)}\}\) of \(X\), a partition \(\{U_{0}^{(2)},U_{1}^{(2)},\ldots,U_{k-1}^{(2)}\}\) of \(Y\) and a subgraph \(G^{{}^{\prime}}\subseteq G\) satisfying properties \((i)-(v)\) in Lemma 4.1. Let \(\Gamma\) be an \((\epsilon,d)\)-reduced \(2\)-colored bipartite graph deduced from \(G\) with bipartition \(\{u_{i}^{(1)}:i\in[k-1]\}\uplus\{u_{i}^{(2)}:i\in[k-1]\}\). By Lemma 4.1(iv), \(\delta(G^{\prime})>\delta(G)-(2d+\epsilon)(N-1)\geq(\frac{3}{4}+\eta-\eta^{3} )(N-1)>\frac{3}{4}(N-1)\). For any \(1\leq i,j\leq k-1\), by Lemma 4.1(i) and (iii), \((1-\epsilon)\frac{N-1}{k-1}\leq|U_{i}^{(1)}|=|U_{j}^{(2)}|=n\leq\frac{N-1}{k-1}\). For any \(1\leq i,j\leq k-1\), by Lemma 4.1(v) and Definition 4.1, \(u_{i}^{(1)}u_{j}^{(2)}\in E(\Gamma)\) if and only if \(G^{\prime}[U_{i}^{(1)},U_{j}^{(2)}]\neq\emptyset\). Then \(\delta(\Gamma)\geq\frac{\delta(G^{\prime})}{n}>\frac{3}{4}(k-1)\). Thus \(\Gamma\) is a \(2\)-colored balanced bipartite graph on \(2(k-1)\) vertices with \(\delta(\Gamma)>\frac{3}{4}(k-1)\). By Theorem 1.5, each \(2\)-edge-coloring of \(\Gamma\) yields a red connected \(\lfloor\alpha_{1}k\rfloor\)-matching or a blue connected \(\lfloor\alpha_{2}k\rfloor\)-matching. Suppose that \(\Gamma\) contains a red connected \(t\)-matching \(M^{*}\), where \(1\leq t\leq\alpha_{1}k\). Let \(F^{*}\) be a red minimal tree containing \(M^{*}\). Let \(W=u_{i_{1}}^{(r)}u_{i_{2}}^{(3-r)}u_{i_{3}}^{(r)}\cdots u_{i_{s}}^{(3-r)}u_{i_{ 1}}^{(r)}\) be a closed walk in \(F^{*}\) containing \(M^{*}\), then \(s\geq 2t\). Since \(F^{*}\) is a tree, \(W\) must be of even length \(s\). Now we view an edge \(u_{i_{q}}^{(p)}u_{i_{q}+1}^{(3-p)}\) of \(W\) as in \(M^{*}\) only when it is an edge in \(M^{*}\) and first appearances in \(W\), where \(p\in\{1,2\}\), \(q\in[s]\), \(i_{0}=i_{s}\) and \(i_{s+1}=i_{1}\). Applying Fact 4.1 repeatedly, for any \(q\in[s]\), there exists a vertex \(v_{i_{q}}^{(p)}\in U_{i_{q}}^{(p)}\), where \(p\in\{1,2\}\), \(i_{0}=i_{s}\) and \(i_{s+1}=i_{1}\), such that: (i) \(v_{i_{q}}^{(p)}\) has at least \((d-\epsilon)n=(\eta-\eta^{3})n\geq\frac{4n}{5}n\) red neighbours in both \(U_{i_{q-1}}^{(3-p)}\) and \(U_{i_{q+1}}^{(3-p)}\); (ii) If an edge \(u_{i_{q}}^{(p)}u_{i_{q}+1}^{(3-p)}\) of \(W\) is not in \(M^{*}\), then \(v_{i_{q}}^{(p)}v_{i_{q}+1}^{(3-p)}\) is a red edge in \(G\). Let \(m=(1-\epsilon)\frac{N-1}{k-1}\) and \(\beta=4\eta\). By Lemma 4.2, we have that for any \(1\leq l\leq(1-\frac{5\eta^{2}}{4})m\), each edge \(u_{i_{q}}^{(p)}u_{i_{q}+1}^{(3-p)}\) in \(M^{*}\) can be extended a red path of length \(2l+1\) connecting vertices \(v_{i_{q}}^{(p)}\in U_{i_{q}}^{(p)}\) and \(v_{i_{q}+1}^{(3-p)}\in U_{i_{q}+1}^{(3-p)}\) in \(G\). Then there exists a red cycle of each even length \(\sum_{j=1}^{t}2l_{j}+s\), where \(1\leq l_{j}\leq(1-\frac{5\eta^{2}}{4})m\) for each \(j\in[t]\). Let \(t=1\), \(s=2\), and \(l=1\), then there exists a red cycle of length \(4\). Recall that \(N\geq k\). For each \(j\in[t]\), let \(t=\lfloor\alpha_{1}k\rfloor\) and \(l_{j}=(1-\frac{5\eta^{2}}{4})m\). Then \[\sum_{j=1}^{t}2l_{j}+s =2\sum_{j=1}^{t}l_{j}+s=2t(1-\frac{5\eta^{2}}{4})m+s\] \[=2t(1-\frac{5\eta^{2}}{4})(1-\epsilon)\frac{N-1}{k-1}+s\] \[=2t(1-\frac{5\eta^{2}}{4})(1-\eta^{3})\frac{N-1}{k-1}+s\] \[>2t(1-\frac{3}{2}\eta^{2})\frac{N}{k}+2t\geq(2-3\eta^{2})\alpha_{ 1}N.\] Therefore there exist red even cycles of each length in \(\{4,6,8,\ldots,(2-3\eta^{2})\alpha_{1}N\}\). Suppose that \(\Gamma\) contains a blue connected \(t\)-matching, where \(1\leq t\leq\alpha_{2}k\), then as the same argument above, we have that there exist blue even cycles of each length \(\{4,6,8,\ldots,(2-3\eta^{2})\alpha_{2}N\}\). \(\Box\)
2304.01999
Revisiting the Evaluation of Image Synthesis with GANs
A good metric, which promises a reliable comparison between solutions, is essential for any well-defined task. Unlike most vision tasks that have per-sample ground-truth, image synthesis tasks target generating unseen data and hence are usually evaluated through a distributional distance between one set of real samples and another set of generated samples. This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models. In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set. Extensive experiments conducted on multiple datasets and settings reveal several important findings. Firstly, a group of models that include both CNN-based and ViT-based architectures serve as reliable and robust feature extractors for measurement evaluation. Secondly, Centered Kernel Alignment (CKA) provides a better comparison across various extractors and hierarchical layers in one model. Finally, CKA is more sample-efficient and enjoys better agreement with human judgment in characterizing the similarity between two internal data correlations. These findings contribute to the development of a new measurement system, which enables a consistent and reliable re-evaluation of current state-of-the-art generative models.
Mengping Yang, Ceyuan Yang, Yichi Zhang, Qingyan Bai, Yujun Shen, Bo Dai
2023-04-04T17:54:32Z
http://arxiv.org/abs/2304.01999v2
# Revisiting the Evaluation of Image Synthesis with GANs ###### Abstract A good metric, which promises a reliable comparison between solutions, is essential to a well-defined task. Unlike most vision tasks that have per-sample ground-truth, image synthesis targets generating unseen data and hence is usually evaluated with a distributional distance between one set of real samples and another set of generated samples. This work provides an empirical study on the evaluation of synthesis performance by taking the popular generative adversarial networks (GANs) as a representative of generative models. In particular, we make in-depth analyses on how to represent a data point in the feature space, how to calculate a fair distance using selected samples, and how many instances to use from each set. Experiments on multiple datasets and settings suggest that (1) a group of models including both CNN-based and ViT-based architectures serve as reliable and robust feature extractors, (2) Centered Kernel Alignment (CKA) enables better comparison across various extractors and hierarchical layers in one model, and (3) CKA shows satisfactory sample efficiency and complements existing metrics (e.g., FID) in characterizing the similarity between two internal data correlations. These findings help us design a new measurement system, based on which we re-evaluate the state-of-the-art generative models in a consistent and reliable way. ## 1 Introduction Through reproducing realistic data distribution, generative models [59, 2, 19, 34, 26, 46] have enabled thrilling opportunities to go beyond existing observations via content recreation. Their technical breakthroughs in recent years also directly lead to the blooming of metaverse, AI Generated Content (AIGC), and many other downstream applications. However, measuring their advances is notoriously hard, requiring measuring the divergence between the real and synthesized data distribution in a consistent and comprehensive way. Among existing evaluation metrics [37, 4, 32, 40], Frechet Inception Distance (FID) [22] is the most popular evaluation paradigm for synthesis comparison. Despite its widespread use, studies [31, 40, 33, 1, 25] have identified several flaws in FID that may miscalculate the actual improvements of generative models, yet a more systematical study is needed. Therefore, in this paper we present an empirical study that systematically revisits the consistency and comprehensiveness of evaluation paradigms of generative models. Commonly used paradigms including FID usually consist of two major components, namely a feature extractor \(\phi(\cdot)\) and a distributional distance \(d(\cdot)\). Through \(\phi(\cdot)\)_i.e.,_ Inception-V3 [51], a considerable number of real samples (\(x\in\mathcal{X}\)) and synthesized ones (\(y\in\mathcal{Y}\)) are projected into a pre-defined representation space to approximate the corresponding data distribution respectively. \(d(\cdot)\)_i.e.,_ Frechet Distance [17] is then calculated in the aforementioned space to deliver the similarity index, indicating the synthesis quality. Following the philosophy, our study is concerned with the representation space defined by the feature extractor \(\phi(\cdot)\), and the distributional distance \(d(\cdot)\). The most commonly used feature extractor, _i.e.,_ Inception-V3, has been testified to encode limited semantics and exists a large perceptual null representation space [31], making it hardly reflect the actual improvement of synthesis. Accordingly, multiple models that vary in _supervision signals_, _architectures_, and _representation similarities_ are gathered together to investigate the impact of the feature extractor \(\phi(\cdot)\), which are respectively motivated by 1) representation spaces defined by the extractor \(\phi(\cdot)\) usually encode various levels of semantics, understanding which extractor or which set of extractors encode rich semantics is important yet less explored; 2) and it remains uncertain how the correlation of various representation spaces affects the evaluation results. On top of the feature extractor \(\phi(\cdot)\), we further study _the consistency across spaces, the choice of distances_, and _the number of samples_ for the distributional distance \(d(\cdot)\). Specifically, despite being measured in various representation spaces, the distributional distance is required to deliver a comparable similarity index consistently and reliably. Similar issues also hold for the number of samples used to represent the synthesis distribution. Finally, for the choice of \(d\), besides Frechet Distance we also involve Centered Kernel Alignment (CKA) [11, 10], which is widely used in representational similarity analysis to measure the similarity of distributions [29, 39]. In order to compare different choices of aforementioned aspects of \(\phi(\cdot)\) and \(d(\cdot)\) qualitatively and quantitatively, we re-implement the visualization pipeline and the histogram matching technique as in [31], which respectively reflect the most relevant semantics of a given image _w.r.t_ the final similarity indexes, and attack the measurement system through a selected subset. Moreover, we conduct an extensive user study (involving 100 persons) to investigate the correlation between the synthesis measurement with human judgment. Consequently, through these tools we found: 1) One specific extractor (_e.g.,_ Inception-V3) may capture limited semantics and provide unreliable measurement results. 2) Various extractors naturally focus on a wide range of semantics that could generalize well across different domains, inspiring us that taking more extractors into account could deliver a comprehensive and reliable measurement. 3) Regarding features from multiple representation spaces determined by different extractors, CKA could better measure the discrepancy and produce bounded values that make itself comparable across different spaces. 4) Together with the extensive user study, CKA agrees with human judgment in a consistent way, whereas FID failed in some circumstances. After revisiting various factors, we also leverage the new measurement system to re-evaluate a variety of settings and generative models. Current state-of-the-art generative models on several domains are first benchmarked through our system. Moreover, the performances and intrinsic properties of diffusion models and GANs are comprehensively compared with the new measurement system. It turns out that our system does not only deliver the similar assessment with FID and human evaluation in most of cases, but also present more reliable and consistent measurement with human judgment than FID, demonstrating the robustness and superiority of the proposed metric. ## 2 Preliminary In this section, we briefly introduce the feature extractor \(\phi(\cdot)\) that defines the representation space, distributional distance \(d(\cdot)\), evaluated datasets and generative models, as well as auxiliary analysis approaches used in the empirical study. ### Representation Spaces In order to investigate the effect of feature extractors \(\phi(\cdot)\), we collect multiple off-the-shelf models that are usually pre-trained on different objectives, in fully-supervised/self-supervised manners, and with various architectures. **Supervision.** Models that are trained in fully-supervised and self-supervised manners are collected due to their generalization ability. To be specific, we leverage the backbone networks which are well-trained in supervised ImageNet classification [13, 21]. Meanwhile, we also gather the weights derived from single-modal/multi-modal self-supervised learning [42, 8, 9, 6]. **Architecture.** Besides various supervisions, models with different architectures are also taken into account. Concretely, models with CNNs [51, 48, 21, 6, 8], ViTs [9, 35, 42], as well as MLPs [53] architectures are also gathered together. ### Distributional Distances **Frechet Inception Distance (FID)** computes the Frechet Distance [17] between two estimated Gaussian distributions, _i.e.,_\(\mathcal{N}(\mu_{s},\Sigma_{s})\) and \(\mathcal{N}(\mu_{r},\Sigma_{r})\), which represent the feature distributions of synthesized and real images extracted by the pre-trained Inception-V3. Formally, Frechet Distance (FD) is calculated by \[\mathrm{FD}(\mathcal{X},\mathcal{Y})=\left\|\mu_{s}-\mu_{r}\right\|^{2}+ \mathrm{Tr}\left(\Sigma_{s}+\Sigma_{r}-2\left(\Sigma_{s}\Sigma_{r}\right)^{ \frac{1}{2}}\right), \tag{1}\] where \(\mathcal{X}\) and \(\mathcal{Y}\) represent the real distribution and synthesized distribution, respectively. Besides, \(\mu\) and \(\Sigma\) corresponding to the mean and variance of Gaussian distribution, and \(\mathrm{Tr}(\cdot)\) is the trace operation. **Centered Kernel Alignment (CKA)** as a widely used similarity index for quantifying neural network representations [11, 29, 12], could also serve as a metric of similarity between two given distributions. To be specific, CKA is normalized from Hilbert-Schmidt Independence Criterion (HSIC) [20] to ensure invariant to isotropic scaling and is calculated by \[\mathrm{CKA}(\mathcal{X},\mathcal{Y})=\frac{\mathrm{HSIC}(\mathbf{x},\mathbf{ y})}{\sqrt{\mathrm{HSIC}(\mathbf{x},\mathbf{x})\mathrm{HSIC}(\mathbf{y},\mathbf{y})}}. \tag{2}\] Here, HSIC determines whether two distributions are independent, which is equivalent to MMD between the joint distribution and the product of the marginal distributions. Formally, HSIC is defined as \[\mathrm{HSIC}(\mathbf{x},\mathbf{y})=\frac{1}{(n-1)^{2}}\,\mathrm{Tr}( \mathbf{x}H\mathbf{y}H), \tag{3}\] where \(H\) denotes the centering matrix (_i.e.,_\(H_{n}=I_{n}-\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\)). These metrics are compared in a consistent setting for fair comparison, implementation details are given in _Appendix_. ### Benchmarks **Generative models.** In order to analyze various factors of measurement, we also collect multiple generators to produce synthesized images. To be specific, we employ state-of-the-art generative models trained on various datasets for comparison in Tab. 1. We download corresponding publicly available models for comparison. Unless otherwise specified, all of these models are compared in a consistent setting. ### Analysis Approaches Beside the measurement itself, we also leverage the technique developed in [31] that provides a new perspective for analysis. **Visualization tool.** In order to qualitatively compare where these feature extractors "focus on", we re-implement the visualization technique of [31] to localize the regions that contribute most to the final similarity. The highlighted regions reveal the most relevant parts of given images regarding the distributional distance. Accordingly, large highlighted regions indicate that more visual semantics are considered for the measurement. Recall that generating a realistic image requires all parts, even every pixel, to be well-synthesized. Naturally, a metric that could focus on more visual regions seems to be much more reliable. **Histogram matching based hacking mechanism**[31] is employed to attack the measurement system to investigate the robustness against the histogram matching. To be specific, a subset is selected from a superset by matching the class distribution histogram of synthesis with that of the real one. As pointed out by [31], the performance could be substantially improved using the chosen subset. Accordingly, we prepare two different sets of synthesis for comparison. One reference set (denote as "Random") is produced by generating images randomly. Besides, one chosen set "\(\text{Chosen}_{I}\)" is gathered via matching the class distribution of the supervised Inception-V3 [51]. The matched histograms are available in _Appendix_. Since there is no modification on the generator and real data, the evaluation should keep consistent, and any performance gains could directly reflect the unreliability of a given extractor. **User study.** In order to investigate the correlation between the evaluation system and human perceptual judgment, extensive user studies are conducted. Concretely, two strategies of human evaluation are designed for different investigations. For benchmarking the synthesis quality of various generative models, we prepare considerable randomly generated images (_i.e.,_\(5K\)), and ask \(100\) individuals to determine how many of these images are photorealistic. The final scores are averaged by the judgments of \(100\) individuals. On the other hand, to qualitatively compare two paired generative models with similar quantitative performances (_e.g.,_ Projected-GAN [45] and Aided-GAN [30] on LSUN Church dataset in Sec. 4.1), we prepare groups of paired images of different generative models and ask \(100\) individuals to assess the perceptual quality of grouped images. Similarly, the same groups are repeated several times to make the human evaluation more reliable. The interface of our user studies is presented in _Appendix_. In this way, we could obtain reliable and consistent human judgments, facilitating better investigation with the evaluation system. ## 3 Analysis on Representation Spaces and Distributional Distances In this section, the potential impacts of the representation space and distributional distance with respect to the final similarity index are investigated. To be specific, Sec. 3.1 presents the study of feature extractors that define the representation spaces, followed by the investigation of distributional distances in Sec. 3.2. ### Representation Spaces Prior works [31, 33, 37, 25] have demonstrated that Inception-V3 could hardly reflect the exact improvement of synthesis, taking limited semantics into account. We thus conduct a comprehensive study by including various models that differ in _supervision signals_ and _architectures_, to find which or which set of extractors serve as reliable feature extractors for synthesis comparison. **Multiple feature extractors with different architectures lead to various focuses.** Through the visualization, Fig. 1 shows the most important regions that contribute to the measurement results. Obviously, with or without any manual labels for pre-training of CNN-based extractors, they consistently pour more attention on concentrated regions, including relatively few semantics. In particular, CNN-based extractors remain to highlight objects (_e.g.,_ microphone, hat, and sunglasses), neglecting the main \begin{table} \begin{tabular}{l|c c} \hline \hline Method & Year & Training Datasets \\ \hline StyleGAN2 [28] & 2020 & FFHQ, LSUN Church \\ BigGAN [5] & 2019 & ImageNet \\ BigGAN-deep [5] & 2019 & ImageNet \\ ADM [14] & 2021 & ImageNet \\ ADM-G [14] & 2021 & ImageNet \\ Projected-GAN [45] & 2021 & FFHQ, LSUN Church \\ InsGen [60] & 2021 & FFHQ \\ StyleGAN-XL [60] & 2022 & FFHQ, ImageNet \\ Aided-GAN [30] & 2022 & LSUN Church \\ EqGAN [55] & 2022 & FFHQ, LSUN Church \\ Diffusion-GAN [57] & 2022 & LSUN Church \\ DiT [41] & 2022 & ImageNet \\ BigRoC [18] & 2022 & ImageNet \\ \hline \hline \end{tabular} \end{table} Table 1: **Generative models used in our analysis**. The publicly accessible models are gathered for evaluation. focus of the evaluation domains (_i.e.,_ Human Faces here). Differently, ViT-based extractors could capture a larger region that covers more synthesis details and semantics. Such observation also agrees with the conclusion that ViTs have a global and large receptive field, compared with the local receptive field of the CNN-based ones [43, 58, 16]. Moreover, ViT-based and CNN-based extractors seem complement since the former captures larger region while the latter focuses on dense objects. When reproducing the whole data distribution, generative models are required to synthesize not only the objects (_e.g.,_ human faces), but also the background, texture and other details. Similarly, the extractors are supposed to capture more regions of given images to approximate the data distribution, enabling better visual perception. However, the above observation regarding the heatmaps of various extractors demonstrates that one extractor could only capture partial semantics of the whole image for measurement, hardly reflecting the synthesis performance comprehensively. Consequently, various extractors with different architectures should be considered for more reliable evaluation since they could involve more semantics in a complementary manner. **Extractors that could be hacked by the histogram matching mechanism are not reliable for evaluation.** Prior study [31] has identified that concentrating on limited semantics might make the extractor fragile to the histogram matching attack, thus making the evaluation unreliable. The above investigation demonstrates that extractors with various architectures attach different importance to the visual concepts, we are curious about what would happen when these extractors are attacked. Accordingly, we turn to investigate the robustness of various extractors toward the histogram matching attack [31]. Concretely, we obtain the publicly released generator of StyleGAN1[27] and calculate the Frechet Distance (FD) results on FFHQ dataset. One evaluated set is produced randomly with the generator, and the other evaluated set is chosen by matching the prediction of Inception-V3. Footnote 1: [https://github.com/NVlabs/stylegan3](https://github.com/NVlabs/stylegan3) Tab. 2 presents the quantitative results. Comparing the performances between the random set and chosen set (_i.e.,_ Chosen\({}_{I}\)), we could tell that some of these extractors are hackable regardless of the architectures (_e.g.,_ CNN-based Inception, ViT-based Swin-Transformer, and ResMLP). For instance, the FD score of Inception is improved by \(5.7\)% and \(3.8\)% for ResMLP. Namely, FD could be improved without any changes on the generators, matching the arguments of [31]. We thus filter the extractors that are fragile to the attack since their ranks could be altered without actual improvement to the generative models. **Extractors that define similar representation spaces should be further filtered.** So far, we have demonstrated the desirability of considering multiple extractors for a more comprehensive evaluation and filtered some extractors that could be hacked by the histogram matching. Despite the remaining extractors (_i.e.,_ not hackable in Tab. 2) define \begin{table} \begin{tabular}{c|c c c c c c c c c c c c} \hline \hline Model & Inception & ConvNeXt & SWAV & MoCo-R & RepVGG & CLIP-R & Swin-Trans & ViT & DeiT & CLIP-V & MoCo-V & ResMLP \\ \hline Random & 2.81 & 78.03 & 0.13 & 0.24 & 129.61 & 10.34 & 142.87 & 15.11 & 437.80 & 1.06 & 7.32 & 99.11 \\ Chosen\({}_{I}\) & 2.65(\(\downarrow\)) & 78.19 & 0.13 & 0.24 & 129.67 & 10.36 & 140.01(\(\downarrow\)) & 15.11 & 430.81(\(\downarrow\)) & 1.06 & 7.40 & 95.36(\(\downarrow\)) \\ \hline **Original** & **Inception** & **ConvNeXt** & **SWAV** & **MoCo-R** & **CLIP-R** & **RepVGG** & **Swin-Trans** & **ViT** & **DeiT** & **CLIP-V** & **MoCo-V** & **ResMLP** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative comparison results of Fréchet Distance (FD \(\downarrow\)) on FFHQ dataset**. “Random, Chosen\({}_{I}\)” represent the synthesis distribution of randomly generated and histogram matching with the prediction of Inception-V3. Moreover, “-R” and “-V” respectively denote the architecture of ResNet and ViT. (\(\downarrow\)) indicates the results are hacked by the histogram matching mechanism. Notably, the values across different rows are not comparable and the results are tested for three times. The results of Fréchet Distance (FD) on ImageNet can be found in _Appendix_. Figure 1: **Heatmaps from extractors with various architectures.** CNN-based extractors (_i.e.,_ Inception [51], ConvNeXt [36], SWAV [6], MoCo-R [9], CLIP-R [42], and RepVGG [15]) focus on dense regions whereas ViT-based (_i.e.,_ Swin-Transformer [35], ViT [16], DeiT [54], CLIP-V [42], and MoCo-V [9]) and MLP-based (_i.e.,_ ResMLP [53]) ones pour attention on wider areas, including more visual concepts for measurement. various representation spaces for quantifying the synthesis distribution, their representation may share many similar concepts. Consequently, extractors that produce similar representation spaces should be filtered to avoid redundancy. Accordingly, we identify the correlation between representations of high dimension in different feature extractors following [29]. In particular, a considerable number of images (_i.e.,_\(10K\) images from ImageNet) are fed into these extractors for computing their correspondence. Fig. 2 shows the similarity of their representations. Obviously, the representations of CLIP-ResNet and MoCo-ResNet have higher similarity with other extractors. Considering these two extractors are both CNN-based and they capture similar semantics with other CNN-based extractors, we remove the CLIP-ResNet and MoCo-ResNet in the following context. Such that, we obtain a set of feature extractors that 1) capture rich semantics in a complementary way, 2) are robust toward the histogram matching attack, and 3) define meaningful and distinctive representation spaces for synthesis comparison. The following table presents these feature extractors. Both CNN-based and ViT-based extractors are considered and all of them have been demonstrated strong performance for the pre-defined and downstream tasks, facilitating more comprehensive and reliable evaluation. Besides, the conclusion that self-supervised extractors SWAV, CLIP-V, and MoCo-V are suitable choices agrees with [37, 31, 3]. \begin{tabular}{c c} \hline **CNN-based** & ConvNeXt [36], SWAV [6], RepVGG [15] \\ **ViT-based** & CLIP-ViT [42], MoCo-ViT [9], ViT [16] \\ \hline \end{tabular} **The selected extractors show consistent awareness of synthesis quality.** In order to investigate the reliability of the selected extractors, we leverage them for testing the synthesis quality of representative generative models on ImageNet dataset. Tab. 3 presents the quantitative Frechet Distance (FD) results of various extractors. Although their results differ in numerical scales, they consistently reflect the trend of StyleGAN-XL's performance significantly surpasses BigGAN and BigGAN-deep, and BigGAN-deep is superior than BigGAN in a slighter way. Such observation also agrees with human judgment in the last column of Tab. 5. Namely, the final chosen models could serve as reliable extractors for synthesis comparison. ### Distributional Distances After the study of feature extractors, we turn to investigate another essential component of measurement, _i.e.,_ the distributional distance \(d(\cdot)\). Beside Frechet Distance, we also include Centered Kernel Alignment (CKA) to conduct the quantitative comparison and the study on various number of samples to be used for evaluation. **CKA provides normalized distances _w.r.t_ numerical scales in variable representation spaces.** Tab. 5 demonstrates the quantitative results of Centered Kernel Alignment (CKA). Despite the Frechet Distance (FD) results across various extractors in Tab. 3 consistently reflect synthesis quality, their values fluctuate dramatically. For instance, the FD score of BigGAN on MoCo-ViT is 238.78 whereas 3.35 on CLIP-ViT. This makes it difficult to combine these results between different extractors. By contrast, CKA scores are substantially stable when evaluated in various representation spaces and one can combine results from multiple extractors (_e.g.,_ average) for better comparison. **CKA demonstrates great potential for comparison across hierarchical layers.** One neural network could always extract multi-level features from various depths that usually capture a broader set of visual concepts from high-level semantics to low-level details. Accordingly, we are wondering whether the richer information could provide more overall measurement. Hierarchical features are thus taken into account in this part. The left part of Tab. 4 presents the qualitative results of different layers' heatmaps. We can observe that different layers indeed aim at extracting different semantics. Additionally, we provide the quantitative results of FD and CKA in the right part of Tab. 4. Still, the FD scores of various layers fluctuate dramatically, _e.g.,_ the FD results of \(0.60\) in Layer\({}_{1}\) while 104.10 in Layer\({}_{4}\). Differently, the CKA results from hierarchical layers are comparable and the overall score could be derived by averaging multi-level scores, and the overall score still reflects synthesis quality in a consistent and reliable way. **CKA is more sample efficient and comparable across various layers.** Typically, the synthesis quality are measured between real and synthesized distributions. Among them, the whole training data is used as the referenced real distribution \(\mathcal{X}\), and \(50\textit{K}\) generated images as the synthesized distribution \(\mathcal{Y}\), regardless of how many samples contained in the training data. However, when evaluating on the large-scale training data (_e.g.,_ ImageNet with \(1.28\) Figure 2: **Representation similarity of various extractors.** Darker Yellow color denotes higher similarity. million images), \(50K\) images may be insufficient for representing the entire distribution. Therefore, we study the impacts of the amount of synthesized samples. Concretely, FFHQ (with \(70K\) images) and ImageNet (with \(1.28\) million images) are investigated for universal conclusions. For both datasets, we synthesis \(500K\) images as candidate, and randomly choose \(5K\), \(10K\), \(50K\), \(100K\), \(250K\), and \(500K\) images as the synthesized distribution for computing the metrics. The entire training data is utilized as the real distribution, and the publicly accessible models on FFHQ2 and ImageNet3 are employed. Footnote 2: [https://github.com/NVlabs/stylegan3](https://github.com/NVlabs/stylegan3) Footnote 3: [https://github.com/autonomousvision/stylegan-xl](https://github.com/autonomousvision/stylegan-xl) Fig. 3 demonstrates the curves of FD and CKA scores evaluated under different data regimes on FFHQ dataset. Obviously, we could tell that the results of FD can be drastically improved by synthesizing more data regardless of different extractors, until sufficient samples (\(\sim 100K\)) are used. It is worth mentioning that CKA results are stable under different data regimes. Moreover, CKA could measure the distributional distances precisely with only \(5K\) synthesized samples, suggesting significant sample efficiency. Such observations demonstrate CKA's impressive adaptability toward the amount of synthesized data. Overall, a set of feature extractors that are 1) robust to the histogram matching attack, 2) capture sufficient visual concepts in a complementary manner, and 3) define distinctive representation spaces could provide a more comprehensive and consistent awareness of synthesis quality. Together with a normalized distance (_i.e.,_ CKA) that is comparable across various representation spaces and hierarchical layers of feature extractors, as well as being stable under different numbers of synthesized samples. These two essential components constitute a reliable measurement system to deliver the distributional discrepancy. ## 4 Benchmark Existing Generative Models With the above findings about the feature extractors \(\phi(\cdot)\) and distributional distances \(d(\cdot)\), we construct a new measurement system for synthesis evaluation. Concretely, our system leverages a set of models with both CNN and ViT architectures as the feature extractor, namely, CNN-based models ConvNeXt [36], RepVGG [15], SWAV [6] and ViT-based ViT [16], MoCo-ViT [9], CLIP-ViT [42], with which \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline Model & \multicolumn{2}{c|}{BigGAN} & \multicolumn{2}{c|}{BigGAN-deep} & \multicolumn{2}{c}{StyleGAN-XL} \\ \hline Layer & FD\({}_{\downarrow}\) & CKA\({}_{\uparrow}\) & FD\({}_{\downarrow}\) & CKA\({}_{\uparrow}\) & FD\({}_{\downarrow}\) & CKA\({}_{\uparrow}\) \\ \hline Layer\({}_{1}\) & 0.60 & 99.06 & 0.54 & 98.95 & 0.05 & 99.84 \\ Layer\({}_{2}\) & 7.45 & 86.89 & 5.58 & 90.09 & 0.77 & 91.06 \\ Layer\({}_{3}\) & 30.24 & 82.80 & 23.55 & 83.63 & 6.11 & 85.75 \\ Layer\({}_{4}\) & 104.10 & 80.13 & 81.02 & 81.05 & 35.77 & 83.55 \\ \hline Overall & N/A & 87.22 & N/A & 88.43 & N/A & 90.05 \\ \hline \hline \end{tabular} \end{table} Table 4: **Heatmaps from various semantic levels on FFHQ dataset (_left_)** and **quantitative results of Fréchet Distance (FD \(\downarrow\)) and Centered Kernel Alignment (CKA \(\uparrow\)) on ImageNet dataset (_right_)**. CLIP-ViT serves as the feature extractor for hierarchical evaluation here, more results can be found in _Appendix_. Figure 3: **Frechet Distance (FD) and Centered Kernel Alignment (CKA) scores evaluated under various data regimes on FFHQ dataset.** FID scores are scaled for better visualization. \(\downarrow\) denotes the results fluctuate downward. The percentages represent the magnitude of the numerical variation. The curve of ImageNet datasets can be found in _Appendix_. \begin{table} \begin{tabular}{l|c|c c c c c c|c} \hline \hline Model & FID\({}^{\dagger}\) & ConvNeXt & RepVGG & SWAV & ViT & MoCo-ViT & CLIP-ViT & Overall & User study \\ \hline BigGAN [5] & 8.70 & 140.04 & 67.53 & 1.12 & 29.95 & 238.78 & 3.35 & N/A & 53\% \\ BigGAN-deep [5] & 6.02 & 102.26 & 58.85 & 0.87 & 23.98 & 85.83 & 3.22 & N/A & 55\% \\ StyleGAN-XL [47] & 1.81 & 19.22 & 15.93 & 0.18 & 8.51 & 29.38 & 1.85 & N/A & 67\% \\ \hline \hline \end{tabular} \end{table} Table 3: **Quantitative comparison results of Fréchet Distance (FD \(\downarrow\)) on ImageNet dataset. \({}^{\dagger}\)** scores are quoted from the original paper and other results are tested for three times. Notably, the values across different columns are not comparable. More quantitative results of Fréchet Distance (FD) on ImageNet can be found in _Appendix_. more comprehensive evaluation could be accomplished. Further, Centered Kernel Alignment (CKA) serves as the similarity indicator. Accordingly, in this section we re-evaluate and compare the progress of existing generative models with our measurement system. Concretely, latest generative models that advance the progress are re-evaluated with our measurement system in Sec. 4.1, followed by our discussion about the diffusion models and GANs in Sec. 4.2. Notably, user studies are conducted for investigating the correlation between our measurement system and human perceptual judgment. ### Comparison on Existing Generative models In order to investigate the actual progress of existing generative models, multiple publicly available generators trained on several popular benchmarks (_i.e.,_ FFHQ, LSUN Church, and ImageNet) are collected for comparison. Benefiting from the impressive sample efficiency of CKA, we consistently produce \(50K\) images as the synthesis distribution and use the whole datasets as the real distribution. Results from various selected extractors, and their averaged scores are reported for thorough comparisons. **Measuring synthesis in a consistent and reliable way.** Tab. 6, Tab. 7, and Tab. 5 respectively demonstrate the quantitative results of different generative models on FFHQ, LSUN Church, and ImageNet datasets evaluated by our measurement system. In most cases, CKA scores from various extractors of our evaluation system and the overall scores provide a consistent ranking with the original FID results, as well as agree with human perceptual judgment. These results suggest that our new metric could consistently and reliably measure the synthesis quality. However, for the Projected-GAN [45] and StyleGAN2 [28] (_resp.,_ AidedGAN [30]) evaluated on FFHQ (_resp.,_ LSUN Church) dataset in Tab. 6 (_resp.,_ Tab. 7), our evaluation system gives the opposite ranking to the FID. Namely, the quantitative results of StyleGAN2 (_resp.,_ Aided-GAN) are determined better than that of Projected-GAN under our evaluation, whereas the FID scores vote Projected-GAN for the better one. Additionally, the performances of ICGAN [7] and class-conditional ICGAN on ImageNet in Tab. 5 are identified basically the same by our metric, while FID scores indicate that the class-conditional one significantly surpasses the unconditional one. In order to compare the performance of these models in a fine-grained way, we further perform paired-wise human evaluation for investigation. To be specific, groups of paired images synthesized by different models (_e.g.,_ StyleGAN2 and Projected-GAN on FFHQ dataset) are randomly picked for visual comparison. Then, presented with two sets of images produced by different models, 100 users are asked to determine which set of images are more plausible. The same group of images may be repeated for several times in a random order, ensuring the consistency and faithfulness. **Providing right rankings and better correlations with human evaluation.** Tab. 8 presents the quantitative results of human visual comparison. Observably, the synthesis quality of StyleGAN (_resp._ Aided-GAN) is more preferred by human visual judgment. That is, our measurement system produces the same rankings as the human perceptual evaluation, demonstrating the reliability of our metric. \begin{table} \begin{tabular}{l|c|c c c c c c|c|c} \hline \hline Model & FID\({}^{\dagger}\) & ConvNeXt & RepVGG & SWAV & ViT & MoCo-ViT & CLIP-ViT & Overall & User study \\ \hline StyleGAN2 [28] & 3.66 & 92.64 & 62.10 & 99.55 & 99.32 & 98.59 & 97.46 & 91.61 & 45\% \\ Projected-GAN [45] & 3.39 & 92.34 & 61.62 & 99.37 & 98.99 & 98.81 & 97.31 & 91.41 & 39\% \\ InsGen [60] & 3.31 & 94.17 & 65.72 & 99.60 & 99.36 & 98.90 & 97.75 & 92.58 & 58\% \\ EqGAN [55] & 2.89 & 94.54 & 64.36 & 99.69 & 99.49 & 99.10 & 98.61 & 92.63 & 62\% \\ StyleGAN-XL [47] & 2.19 & 93.78 & 67.59 & 99.68 & 99.49 & 99.25 & 97.33 & 92.85 & 66\% \\ \hline \hline \end{tabular} \end{table} Table 6: **Quantitative comparison results of Centered Kernel Alignment (CKA \(\uparrow\)) on FFHQ dataset. \({}^{\dagger}\) scores are quoted from the original paper and other results are tested for three times.** \begin{table} \begin{tabular}{l|c|c c c c c c|c|c} \hline \hline Model & FID\({}^{\dagger}\) & ConvNeXt & RepVGG & SWAV & ViT & MoCo-ViT & CLIP-ViT & Overall & User study \\ \hline ICGAN [7] & 15.60 & 62.65 & 72.25 & 86.10 & 97.04 & 77.99 & 92.16 & 81.37 & 32\% \\ ADM [14] & 10.94 & 63.12 & 72.90 & 87.71 & 97.39 & 78.69 & 92.92 & 82.12 & 45\% \\ BigGAN [5] & 8.70 & 63.89 & 74.62 & 87.94 & 97.60 & 79.60 & 93.25 & 82.82 & 53\% \\ C-ICGAN [7] & 7.50 & 62.53 & 72.20 & 86.12 & 97.05 & 78.01 & 92.08 & 81.33 & 31\% \\ BigGAN-deep [5] & 6.95 & 64.97 & 76.45 & 88.31 & 97.77 & 80.27 & 94.11 & 83.65 & 55\% \\ Guided-ADM [14] & 4.59 & 65.99 & 78.99 & 89.44 & 98.13 & 80.46 & 94.96 & 84.66 & 57\% \\ BigRec [18] & 3.69 & 67.86 & 79.48 & 89.93 & 98.23 & 82.25 & 96.07 & 85.64 & 65\% \\ StyleGAN-XL [47] & 2.30 & 68.75 & 80.28 & 91.54 & 98.52 & 82.64 & 97.41 & 86.52 & 67\% \\ DiT [41] & 2.27 & 68.94 & 80.65 & 91.03 & 99.05 & 82.90 & 97.08 & 86.61 & 67\% \\ \hline \hline \end{tabular} \end{table} Table 5: **Quantitative comparison results of Centered Kernel Alignment (CKA \(\uparrow\)) on ImageNet dataset. \({}^{\dagger}\) scores are quoted from the original paper and other results are tested for three times.** Moreover, our metric's indication that there's no significant gap between ICGAN and class-conditional ICGAN is also verified by the human evaluation. Considering the perceptual null space of Inception-V3, one possible reason for the FID performance gains of Projected-GAN might be the usage of pre-trained models, which is also identified by [31]. By contrast, our measurement produces the right rankings and agrees well with human evaluation, reflecting the actual improvement of synthesis. ### Comparison between GANs and Diffusion Models Diffusion models [23, 49, 50, 14, 41, 44, 19, 62, 38, 24] have recently demonstrated significant advancements in visual synthesis and became the new trend of generative models over the past two years. Benefiting from the reliability of our new measurement system, here we perform a comprehensive comparison between GANs and diffusion models. Specifically, we report the FID and the overall CKA scores, as well as human judgment for quantitative comparison. Additionally, the model parameters and the synthesis speed (_i.e.,_ generating \(1K\) images on an A100 GPU) are also included for investigation. Tab. 9 presents the quantitative results. Obviously, diffusion model (_i.e.,_ DiT) achieves comparable results with GAN (_i.e.,_ StyleGAN-XL), yet with much more parameters (675\(M\)_v.s._ 166.3\(M\)). Moreover, diffusion model usually requires extra inference time to obtain realistic images. Such comparisons reveal that GANs achieve better trade-offs between efficiency and synthesis quality, and designing computation-efficient diffusion models is essential for the community. ## 5 Conclusion In this work, we revisit the evaluation of generative models from the perspectives of the feature extractor and the distributional distance. Through extensive investigation regarding the potential contribution of various feature extractors and distributional distances, we identify the impacts of several potential factors that contribute to the final similarity index. With these findings, we construct a new measurement system that provides more comprehensive and holistic comparison for synthesis evaluation. Notably, our system could present more consistent measurements with human judgment, enabling more reliable synthesis comparison. **Limitations.** Despite a comprehensive investigation, our study could still be extended in several aspects. For instance, the impacts of different low-level image processing techniques (_e.g.,_ resizing) could be identified since they also play an important role in synthesis evaluation [40]. Besides, comparing datasets with various resolutions could be further studied. Nonetheless, our study could be considered as an empirical revisiting towards the paradigm of evaluating generative models. We hope this work could inspire more fascinating works of synthesis evaluation and provide potential insight to develop more comprehensive evaluation protocols. We will also conduct more investigation on the unexplored factors and compare more generative models with our system.
2305.18139
SDE driven by cylindrical $α$-stable process with distributional drift and application
For $\alpha \in (1,2)$, we study the following stochastic differential equation driven by a non-degenerate symmetric $\alpha$-stable process in ${\mathbb R}^d$: \begin{align*} {\mathord{{\rm d}}} X_t=b(t,X_t){\mathord{{\rm d}}} t+\sigma(t,X_{t-}){\mathord{{\rm d}}} L_t^{(\alpha)},\ \ X_0 =x \in {\mathbb R}^d, \end{align*} where $b$ belongs to $ L^\infty({\mathbb R}_+;{\mathbf B}_{\infty,\infty}^{-\beta}({\mathbb R}^d))$ with some $\beta\in(0,\frac{\alpha-1}{2})$, and $\sigma:{\mathbb R}_+\times {\mathbb R}^d \to {\mathbb R}^d \otimes {\mathbb R}^d$ is a $d \times d $ matrix-valued measurable function. We point out that the noise could be a cylindrical $\alpha$-stable process. We first show the generalized martingale problems and then establish the stability estimates of solutions. As an application, we give the weak convergence rate of the Euler scheme for additive noises with drift coefficient $b=b(x)$.
Mingyan Wu, Zimo Hao
2023-05-29T15:07:25Z
http://arxiv.org/abs/2305.18139v2
# SDE Driven by cylindrical \(\alpha\)-stable process with distributional drift and application ###### Abstract. For \(\alpha\in(1,2)\), we study the following stochastic differential equation driven by a non-degenerate symmetric \(\alpha\)-stable process in \(\mathbb{R}^{d}\): \[\mathrm{d}X_{t}=b(t,X_{t})\mathrm{d}t+\sigma(t,X_{t-})\mathrm{d}L_{t}^{( \alpha)},\ \ X_{0}=x\in\mathbb{R}^{d},\] where \(b\) belongs to \(L^{\infty}(\mathbb{R}_{+};\mathbf{B}^{-\beta}_{\infty,\infty}(\mathbb{R}^{d}))\) with some \(\beta\in(0,\frac{\alpha-1}{2})\), and \(\sigma:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\otimes\mathbb{R}^{d}\) is a \(d\times d\) matrix-valued measurable function. We point out that the noise could be a cylindrical \(\alpha\)-stable process. We first show the generalized martingale problems and then establish the stability estimates of solutions. As an application, we give the weak convergence rate of the Euler scheme for additive noises with drift coefficient \(b=b(x)\). _Keywords: Singular SDE; Martingale problem; Littlewood-Paley's decomposition; Euler's approximation._ Mingyan Wu is partially supported by the National Natural Science Foundation of China (Grant No. 61873320 and No. 12201227). Zimo Hao is grateful for the financial support of NNSFC grants of China (Nos. 12131019, 11731009), and the DFG through the CRC 1283 "Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications". * Corresponding author ### Well-posedness of generalized martingale problems The first goal of this paper is to establish the well-posedness of the generalized martingale solution for SDE (1.1) including the stability (see Theorem 1.1 and Theorem 4.6). A \(d\)-dimentional \(\alpha\)-stable process with \(\alpha\in(0,2)\) is a purely jumped Levy process with Levy measure (called the \(\alpha\)-stable measure) \[\nu^{(\alpha)}(A)=\int_{0}^{\infty}\left(\int_{\mathbb{S}^{d-1}}\frac{1_{A}(r \theta)\Sigma(\mathrm{d}\theta)}{r^{1+\alpha}}\right)\mathrm{d}r,\quad A\in \mathcal{B}(\mathbb{R}^{d}),\] where \(\Sigma\) is a finite measure over the unit sphere \(\mathbb{S}^{d-1}\) (called spherical measure of \(\nu^{(\alpha)}\)). In this paper, we consider \(\alpha\)-stable measures that satisfy the following _non-degenerate_ assumption: (**ND**) For each \(\theta_{0}\in\mathbb{S}^{d-1}\), \[\int_{\mathbb{S}^{d-1}}|\theta\cdot\theta_{0}|\Sigma(\mathrm{d}\theta)>0.\] The most common \(\alpha\)-stable process is the standard (or strictly) \(\alpha\)-stable process whose Levy measure is absolutely continuous with the Lebesgue measure and given by \(\frac{1}{|\mathbb{C}^{d+\alpha}}\mathrm{d}z\), and the infinitesimal generator is the fractional Laplace operator \(\Delta^{\alpha/2}\). From the point of view of Fourier analysis, (fractional) derivatives are characterized by Fourier multipliers. It is well-known that the symbol of the fractional Laplacian is given by \(|\xi|^{\alpha}\) with \(\xi\in\mathbb{R}^{d}\). If we consider \(\alpha=2\), the situation becomes the usual Laplacian \(\Delta\), i.e. the infinitesimal generator of the \(d\)-dimensional standard Brownian motion \[L^{(2)}:=(W^{1},W^{2},\ldots,W^{d})\] where \(\{W^{i}\}_{i=1}^{d}\) are independent \(1\)-dimensional Brownian motions. Under the distributional drift assumption, there have been several pieces of literature on Brownian motion cases. The earliest works are [4] and [20], both focusing on the one-dimensional time-independent multiplicative noise. We also refer to the works [21][14], [24], [15], and [25] for the one-dimensional case, and [19], [46], [8], and [47] for the multi-dimensional case. In recent years, researchers have paid more and more attention to the jumped case (for example [1], [10], [17], [30], [33] and so on). However, these studies consider only additive \(\alpha\)-stable noises except for [33]. Although the authors of [33] study the SDE (1.1) with multiplicative Levy noises, their results do not cover the singular case such as cylindrical \(\alpha\)-stable noises. We say an \(\alpha\)-stable process \[L^{(\alpha)}:=(L^{1},L^{2},\ldots,L^{d})\] is cylindrical, if its components are independent one-dimensional standard \(\alpha\)-stable processes. In fact, components of a multi-dimensional standard \(\alpha\)-stable process are not jointly independent, which is different from the Brownian case. Nevertheless, in many models, the joint independence of \(\{L^{i}\}_{i=1}^{d}\) plays a vital role. For instance, in the following \(N\)-particle system: \[\mathrm{d}X_{t}^{N,i}=\frac{1}{N}\sum_{i\neq i}K(X_{t}^{N,i}-X_{t}^{N,i}) \mathrm{d}t+\mathrm{d}L_{t}^{i},\] \(K:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is the interaction kernel, and \(\{L^{i}\}_{i=1}^{N}\) is a family of independent \(\alpha\)-stable processes, which stands for the random phenomenon like collisions between two particles (see [9] and references therein for examples). On the other hand, the Levy measure of a cylindrical \(\alpha\)-stable process (called the cylindrical \(\alpha\)-stable measure) is given by \[\nu^{(\alpha),c}(\mathrm{d}z):=\sum_{k=1}^{d}\delta_{0}(\mathrm{d}z_{1})\cdots \delta_{0}(\mathrm{d}z_{i-1})\frac{\mathrm{d}z_{i}}{|z_{i}|^{1+\alpha}}\delta_ {0}(\mathrm{d}z_{i+1})\cdots\delta_{0}(\mathrm{d}z_{d}),\] where \(\delta_{0}\) is the Dirac measure at zero. Then the symbol of the associated infinitesimal generator is \[\sum_{i=1}^{d}|\xi_{i}|^{\alpha},\ \ \xi=(\xi_{1},\cdots,\xi_{d})\in\mathbb{R}^{d},\] which is more singular than the symbol of the standard \(\alpha\)-stable process since \(|\xi|^{\alpha}\) is only not smooth at origin, but \(\sum_{i=1}^{d}|\xi_{i}|^{\alpha}\) is not smooth on all axes \(\cup_{i=1}^{d}\{\xi_{i}=0\}\). That is why we say the cylindrical one is singular. In [33], Ling and Zhang obtained the well-posedness of generalized martingale problems relying on the following change of variables: \[\begin{split}\mathcal{L}_{\sigma}^{(\alpha),s}f(x)& =\int_{\mathbb{R}^{d}}(f(x+\sigma(x)z)-f(x)-\nabla f(x)\cdot\sigma( x)z^{(\alpha)})\frac{\mathrm{d}z}{|z|^{d+\alpha}}\\ &=\int_{\mathbb{R}^{d}}(f(x+z)-f(x)-\nabla f(x)\cdot z^{(\alpha) })\frac{\mathrm{d}z}{|\sigma^{-1}(x)z|^{d+\alpha}|\det\sigma(x)|},\end{split} \tag{1.3}\] where \(z^{(\alpha)}:=z\mathbf{1}_{|z|\leqslant 1}\mathbf{1}_{\alpha=1}+z\mathbf{1}_{ \alpha\in(1,2)}\), the symbol \(\nabla\) denotes the gradient operator, and the diffusion coefficient \(\sigma\) is time-independent. It is obvious that such a change of variables method in [33] might not be valid in general cases, since, for example, the infinitesimal generator of a cylindrical one, \[\mathcal{L}_{\sigma}^{(\alpha),c}f(x)=\sum_{i=1}^{d}\int_{\mathbb{R}}(f(x+( \sigma(x)z)_{i})-f(x)-\nabla f(x)\cdot(\sigma(x)z^{(\alpha)})_{i})\frac{ \mathrm{d}z_{i}}{|z_{i}|^{1+\alpha}}, \tag{1.4}\] where \((\sigma z)_{i}:=(\sigma_{1i}z_{i},...,\sigma_{di}z_{i})\) for any \(\sigma=(\sigma_{ij})\in\mathbb{R}^{d}\otimes\mathbb{R}^{d}\), is essentially different from the standard one. To sum up, studying the generalized martingale problem with singular multiplicative Levy noises is interesting. The preceding theorem is our first main result: **Theorem 1.1**.: _Assume that \(\alpha\in(1,2)\) and \((\mathbf{H}^{\sigma})\) holds with constant \(c_{0}\). Let \(b\in L^{\infty}([0,T];\mathbf{B}_{\infty,\infty}^{-\beta})\) with some \(\beta\in(0,\frac{\alpha-1}{2})\). Then there is a unique generalized martingale solution \(\mathbf{Q}\in\mathcal{M}_{b,\sigma}(x)\) in the sense of Definition 4.2 below._ **Remark 1.2**.: _It is worth pointing out that the definition of martingale solutions presented in [33] looks different from Definition 4.2, but verifying their equivalence is not a challenging task. We prove the approximating equation, wherein these two definitions coincide, converges to the solution in Theorem 1.1._ In order to explore martingale solutions as defined in Definition 4.2, we consider the following forward partial differential equation (PDE for short): \[\partial_{t}u=\mathcal{L}_{\sigma}^{(\alpha)}u+b\cdot\nabla u+f,\ \ u(0)=u_{0}, \tag{1.5}\] where \(\alpha\in(1,2)\), \(b,f\in L^{\infty}([0,T];\mathbf{B}_{\infty,\infty}^{-\beta})\) with some \(\beta\in(0,\frac{\alpha-1}{2})\), and \[\mathcal{L}_{\sigma}^{(\alpha)}g(t,x):=\int_{\mathbb{R}^{d}}\Big{(}g(t,x+ \sigma_{t}(x)z)-g(t,x)-\sigma_{t}(x)z\cdot\nabla g(t,x)\Big{)}\nu^{(\alpha)}( \mathrm{d}z) \tag{1.6}\] with a non-degenerate symmetric \(\alpha\)-stable measure \(\nu^{(\alpha)}\). Note that in [33], the assumption on \(\sigma\), where \(\sigma\in L^{\infty}([0,T];\mathbf{B}_{\infty,\infty}^{\beta+\varepsilon})\), is weaker than ours. As a matter of fact, it seems to be open to give the well-posedness of martingale (or weak) solutions for SDEs driven by cylindrical \(\alpha\)-stable processes with distributional drifts when \(\sigma\) is just Holder continuous with respect to the space variable. Specifically, the key ingredient of reaching our first goal is the boundedness of operator \(\mathcal{L}_{\sigma}^{(\alpha)}:\mathbf{B}_{\infty,\infty}^{\alpha-\beta} \rightarrow\mathbf{B}_{\infty,\infty}^{-\beta}\) (see Lemma 3.1). Regarding the operator \(\mathcal{L}_{\sigma}^{(\alpha)}\) as a natural extension of the operator \(\kappa(x)\Delta^{\frac{\alpha}{2}}\) as we illustrated in (1.3), the authors in [33] achieved the boundedness when \(\nu^{(\alpha)}\) is absolutely continuous with respect to the Lebesgue measure, where the function \(\kappa\) possesses the same Besov-Holder regularity as that of \(\sigma\). Thus, the boundedness of \(\mathcal{L}_{\sigma}^{(\alpha)}\) in [33] can be attributed to the product law: (see Lemma 2.4 \[\|\kappa\Delta^{\frac{\alpha}{2}}u\|_{\mathbf{B}_{\infty,\infty}^{-\beta}} \leqslant c\|\Delta^{\frac{\alpha}{2}}u\|_{\mathbf{B}_{\infty,\infty}^{-\beta }}\|\kappa\|_{\mathbf{B}_{\infty,\infty}^{\beta+\varepsilon}},\] where \(c>0\) is a constant only depends on \(\beta,\varepsilon>0\); and this is the reason they only need \(\sigma\) belongs to \(L^{\infty}([0,T];\mathbf{B}_{\infty,\infty}^{\beta+\varepsilon})\). Nevertheless, the focus of this paper is on compound operators, as exemplified by \(u(x)\to u(\sigma(x))\). And then it is imperative that \(\sigma\) is Lipschitz in order to ensure the Besov-Holder regularity of \(u\). Moreover, we require a distinct technique to that presented in [33] for handling PDE (1.5). See more details in Section 3. ### Weak convergence rate of Euler scheme Our second aim is to investigate the Euler scheme for a toy model of SDE (1.1) with singular noises, where \(\sigma\equiv\mathbb{I}_{d\times d}\) and \(b=b(x)\) belongs to the negative order Besov space \(\mathbf{B}_{\infty,\infty}^{-\beta}\). To study such a singular stochastic model, a natural problem is how to define the distributional drift term. Modifying \(b(x)\) in the sense of \[b_{m}:=b*\mathcal{K}_{m},\quad m>0,\] where \(\mathcal{K}_{m}(x):=m^{d}\mathcal{K}(mx)\) with some good kernel \(\mathcal{K}\in C_{0}^{\infty}(\mathbb{R}^{d})\) with \(\int_{\mathbb{R}^{d}}\mathcal{K}(x)\mathrm{d}x=1\), we consider the following modified Euler scheme for SDE (1.1): for any \(n\in\mathbb{N}\), \[X_{t}^{m,n}=x+\int_{0}^{t}b_{m}(X_{\phi_{n}(t)}^{m,n})\mathrm{d}t+L_{t}^{( \alpha)},\ \ x\in\mathbb{R}^{d}\] where \(\phi_{n}(t):=k/n\) when \(t\in[k/n,(k+1)/n)\) with \(k=0,1,2,....,n\). Set \(\mathbf{P}_{m,n}(t):=\mathbb{P}\circ(X_{t}^{m,n})^{-1}\). In this paper, we study the so-called moderate case, i.e. \(m=n^{\gamma}\) with some \(\gamma>0\). Here is our second main result, the weak convergence of this Euler scheme. **Theorem 1.3**.: _Let \(\alpha\in(1,2)\). Assume \(\mathbf{(H^{\sigma})}\) holds with constant \(c_{0}\), \(b\in\mathbf{B}_{\infty,\infty}^{-\beta}\) with some \(\beta\in(0,\frac{\alpha-1}{2})\) and \(m=n^{\gamma}\) with some \(\gamma>0\). Take the probability measure \(\mathbf{P}\) be the unique solution of the generalized martingale problem \(\mathcal{M}_{b,\sigma}(x)\) (see Definition 4.2). Then, for any \(\gamma\in(0,\frac{\alpha-1}{2\alpha\beta})\) and \(T>0\), there are constants \(c=c(d,\alpha,\beta,\mathcal{K},T,\delta)>0\) and \(\delta=\delta(\alpha,\beta,\gamma)>0\) such that for any \(n\in\mathbb{N}\),_ \[\sup_{t\in[0,T]}\|\mathbf{P}_{m,n}(t)-\mathbf{P}(t)\|_{\mathrm{var}}\leqslant cn ^{-\delta},\] _where \(\|\cdot\|_{\mathrm{var}}\) denotes the total variance of measures._ **Remark 1.4**.: _In the current paper, for simplicity, we only consider the additive noise case. Combining with a classical Levi's freezing coefficients method, our method might work as well for the case of uniformly elliptic and Lipschitz diffusion coefficients. We will study this case in the future._ So far, the rate of convergence for the Euler scheme of SDEs with regular or irregular drift coefficients has been widely studied. Whatever the Brownian case or the jumped case, people are used to adopting the Yamada-Watanabe approximation technique and the upper bound of heat kernels to prove the convergence rate. One could see the introduction section of [3] for more discussion of the Yamada-Watanabe approximation; see [26] or [43] for an example of using heat kernel pointwise estimates in this direction. It is worth emphasizing that the density of the standard \(\alpha\)-stable is equivalent with (cf. [5, Theorem 2.1]) \[\frac{t}{(t^{1/\alpha}+|x|)^{d+\alpha}},\] but the cylindrical one has an asymptotic behavior as follows: \[\prod_{i=1}^{d}\frac{t}{(t^{1/\alpha}+|x_{i}|)^{1+\alpha}}.\] More generally, when the spherical measure \(\Sigma\) of \(\alpha\)-stable measure \(\nu^{(\alpha)}\) is not equivalent to the uniform measure, though a smooth density exists (cf. [11]), it is so far away to get an analytic expression for it (see [42] for more details). The best we know is just an integral-type estimate (see Subsection 2.3), not a pointwise estimate. Moreover, for more irregular coefficient cases, the methods described above do not deal well with the problem, which leads to the search for more powerful tools. Thanks to the deep connection between Kolmogorov equations and SDEs, to the best of our knowledge, in [34] Menoukeu Pamen and Taguchi first exploited the so-called Ito-Tanaka trick to get the strong rate of the Euler-Maruyama approximation of SDEs driven by a Wiener process or a truncated symmetric \(\alpha\)-stable process, with Holder drift continuous drift coefficients. Meanwhile, concerning standard \(\alpha\)-stable processes, inspired by [18], Mikulevicius and Xu in [35] obtained the following strong convergence under \(\alpha\in(1,2)\) and \(\beta\)-Holder drift with \(\beta<-(1-\alpha/2)\): \[\mathbb{E}\left[\sup_{t\in[0,T]}|X_{t}^{m,n}-X_{t}|^{p}\right]\lesssim n^{-p \beta/\alpha},\] where \(p\in(0,\alpha)\), relying on the Ito-Tanaka trick as well. Nowdays, it has been gained much attention on SDEs with irregular coefficients driven by singular noises. In [32], Kuhn and Schilling studied the strong convergence of the Euler-Maruyama approximation for a class of Levy-driven SDEs with some Holder assumption on the drift coefficient through the Ito-Tanaka approach. Moreover, under irregular \(\beta\)-Holder drift, instead of using the Ito-Tanaka trick, in [7] Butkovsky, Dareiotis, and Gerencser showed the strong rate of convergence of the Euler scheme for SDEs driven by a variety of Levy noises, based on a new extended stochastic sewing lemma, where the rate can be uniformly with respect to \(\beta\) and any moment \(p>2\). On the other hand, observing the equivalence between norms of Holder spaces and positive integer order Besov spaces (cf. [41]), it is natural to think about negative order cases. In addition, singular SDEs with distributional drifts actually appear in the context of many stochastic models (cf. [30]). In the present paper, partially inspired by mentioned works above, we study the weak convergence rate of the Euler approximation by applying the Ito-Tanaka trick to estimate the difference from \(\mathbb{P}\circ(X_{t}^{m,n})^{-1}-\mathbb{P}\circ(X_{t})^{-1}\) to \(\mathbb{P}\circ(X_{t}^{m,n})^{-1}-\mathbb{P}\circ(X_{\phi_{n}(t)}^{m,n})^{-1}\), where the later one is obtained by heat kernel estimates (see Section 5 for more details). It is worth noting that before being adopted to prove the strong rate in [34], the Ito-Tanaka technique has traditionally been used to obtain weak convergence rates for Euler schemes which has important applications in stochastic financial theory (cf. [37]). ### Outline of paper The rest of this paper is organized as follows. In Section 2, we introduce some basic concepts and estimates. In Section 3, we establish Schauder's estimate and obtain the well-posedness for the non-local parabolic equation with singular \(\alpha\)-stable measure and distributional drift term. In Section 4, we show the first main result of this paper, Theorem 1.1, which gives the well-posedness of the generalized martingale solution to SDE (1.1) for any \(b\in L^{\infty}([0,T];\mathbf{B}_{\infty,\infty}^{-\beta})\) with some \(\beta\in(0,\frac{\alpha-1}{2})\). As a result, we prove the stability estimate Theorem 4.6. In Section 5, based on the results in Section 3 and 4, we show the weak convergence rate of the Euler scheme. Conventions and notationsThroughout this paper, we use the following conventions and notations: As usual, we use \(:=\) as a way of definition. Define \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\) and \(\mathbb{R}_{+}:=[0,\infty)\). The letter \(c=c(\cdots)\) denotes an unimportant constant, whose value may change in different places. We use \(A\asymp B\) and \(A\lesssim B\) to denote \(c^{-1}B\leqslant A\leqslant cB\) and \(A\leqslant cB\), respectively, for some unimportant constant \(c\geqslant 1\). We also use \(A\lesssim_{c}B\) to denote \(A\leqslant cB\) when we want to emphasize the constant. Denote by Beta functions and Gamma functions, respectively, \[\mathrm{B}(s_{1},s_{2}):=\int_{0}^{1}x^{s_{1}-1}(1-x)^{s_{2}-1} \mathrm{d}x,\ \ \forall s_{1},s_{2}>0 \tag{1.7}\] and \[\Gamma(s):=\int_{0}^{\infty}x^{s-1}\mathrm{e}^{-x}\mathrm{d}x,\ \ \forall s>0. \tag{1.8}\] * Let \(\mathbb{M}^{d}\) be the space of all real \(d\times d\)-matrices, and \(\mathbb{M}^{d}_{non}\) the set of all non-singular matrices. Denote the identity \(d\times d\)-matrix by \(\mathbb{I}_{d\times d}\). * For every \(p\in[1,\infty)\), we denote by \(L^{p}\) the space of all \(p\)-order integrable functions on \(\mathbb{R}^{d}\) with the norm denoted by \(\|\cdot\|_{p}\). * For a Banach space \(\mathbb{B}\) and \(T>0\), \(q\in[1,\infty]\), we denote by \[L^{q}_{T}\mathbb{B}:=L^{q}([0,T];\mathbb{B}),\ \ L^{q}_{T}:=L^{q}([0,T]\times\mathbb{R}^{d}).\] ## 2. Preliminary ### Besov spaces In this subsection, we introduce Besov spaces. Let \(\mathscr{S}(\mathbb{R}^{d})\) be the Schwartz space of all rapidly decreasing functions on \(\mathbb{R}^{d}\), and \(\mathscr{S}^{\prime}(\mathbb{R}^{d})\) the dual space of \(\mathscr{S}(\mathbb{R}^{d})\) called Schwartz generalized function (or tempered distribution) space. Given \(f\in\mathscr{S}(\mathbb{R}^{d})\), the Fourier transform \(\hat{f}\) and the inverse Fourier transform \(\check{f}\) are defined by \[\hat{f}(\xi):=(2\pi)^{-d/2}\int_{\mathbb{R}^{d}}\mathrm{e}^{-i \xi\cdot x}f(x)\mathrm{d}x,\ \ \ \xi\in\mathbb{R}^{d},\] \[\check{f}(x):=(2\pi)^{-d/2}\int_{\mathbb{R}^{d}}\mathrm{e}^{i\xi \cdot x}f(\xi)\mathrm{d}\xi,\ \ \ x\in\mathbb{R}^{d}.\] For every \(f\in\mathscr{S}^{\prime}(\mathbb{R}^{d})\), the Fourier and the inverse transforms are defined by \[\langle\hat{f},\varphi\rangle:=\langle f,\hat{\varphi}\rangle, \ \ \ \ \ \ \ \langle\check{f},\varphi\rangle:=\langle f,\check{\varphi}\rangle,\ \ \forall\varphi\in\mathscr{S}(\mathbb{R}^{d}).\] Let \(\chi:\mathbb{R}^{d}\to[0,1]\) be a radial smooth function with \[\chi(\xi)=\begin{cases}1,&\ |\xi|\leqslant 1,\\ 0,&\ |\xi|>3/2.\end{cases}\] For \(\xi\in\mathbb{R}^{d}\), define \(\psi(\xi):=\chi(\xi)-\chi(2\xi)\) and for \(j\in\mathbb{N}_{0}\), \[\psi_{j}(\xi){:=}\psi(2^{-j}\xi).\] Let \(B_{r}:=\{\xi\in\mathbb{R}^{d}\mid|\xi|\leqslant r\}\) for \(r>0\). It is easy to see that \(\psi\geqslant 0\), \(\text{supp}\psi\subset B_{3/2}/B_{1/2}\), and \[\chi(2\xi)+\sum_{j=0}^{k}\psi_{j}(\xi)=\chi(2^{-k}\xi)\to 1,\ \ \text{as}\ \ k\to\infty. \tag{2.1}\] Since \(\breve{\psi}_{j}(y)=2^{id}\breve{\psi}(2^{j}y)\), \(j\geqslant 0\), we have \[\int_{\mathbb{R}^{d}}|x|^{q}|\nabla^{k}\breve{\psi}_{j}|(x)\mathrm{d}x\leqslant c 2^{(k-\theta)j},\ \ \theta>0,\ \ k\in\mathbb{N}_{0},\] where the constant \(c\) is equal to \(\int_{\mathbb{R}^{d}}|x|^{\theta}|\nabla^{k}\breve{\psi}|(x)\mathrm{d}x\) and \(\nabla^{k}\) stands for the \(k\)-order gradient. The block operators \(\mathcal{R}_{j}\), \(j\geqslant 0\) are defined on \(\mathscr{S}^{\prime}(\mathbb{R}^{d})\) by \[\mathcal{R}_{j}f(x):=(\psi_{j}\hat{f})\check{(}x)=\breve{\psi}_{j}*f(x)=2^{id} \int_{\mathbb{R}^{d}}\breve{\psi}(2^{j}y)f(x-y)\mathrm{d}y, \tag{2.2}\] and \(\mathcal{R}_{-1}f(x):=(\chi\hat{f})\check{(}x)=\breve{\chi}*f(x)\). Then by (2.1), \[f=\sum_{j\geqslant-1}\mathcal{R}_{j}f. \tag{2.3}\] **Remark 2.1**.: _For \(j\geqslant-1\), by definitions, it is easy to see that_ \[\mathcal{R}_{j}=\mathcal{R}_{j}\widetilde{\mathcal{R}}_{j},\quad\text{where } \widetilde{\mathcal{R}}_{j}:=\sum_{\ell=-1}^{1}\mathcal{R}_{j+\ell}\text{ with }\mathcal{R}_{-2}:=0,\] _and \(\mathcal{R}_{j}\) is symmetric in the sense of_ \[\int_{\mathbb{R}^{d}}\mathcal{R}_{j}f(x)g(x)\mathrm{d}x=\int_{\mathbb{R}^{d}} f(x)\mathcal{R}_{j}g(x)\mathrm{d}x,\ \ f\in\mathscr{S}^{\prime}(\mathbb{R}^{d}),\ g\in\mathscr{S}(\mathbb{R}^{d}).\] Now we state the definitions of Besov spaces. **Definition 2.2** (Besov spaces).: _For every \(s\in\mathbb{R}\) and \(p,q\in[1,\infty]\), the Besov space \(\mathbf{B}^{s}_{p,q}(\mathbb{R}^{d})\) is defined by_ \[\mathbf{B}^{s}_{p,q}(\mathbb{R}^{d}):=\Big{\{}f\in\mathscr{S}^{\prime}( \mathbb{R}^{d})\,\big{|}\,\|f\|_{\mathbf{B}^{s}_{p,q}}:=\Big{[}\sum_{j \geqslant-1}\left(2^{sj}\|\mathcal{R}_{j}f\|_{p}\right)^{q}\Big{]}^{1/q}<\infty \Big{\}}.\] _If \(p=q=\infty\), it is in the sense_ \[\mathbf{B}^{s}_{\infty,\infty}(\mathbb{R}^{d}):=\Big{\{}f\in\mathscr{S}^{ \prime}(\mathbb{R}^{d})\,\big{|}\,\|f\|_{\mathbf{B}^{s}_{\infty,\infty}}:=\sup _{j\geqslant-1}2^{sj}\|\mathcal{R}_{j}f\|_{\infty}<\infty\Big{\}}.\] This definition and Young's inequality ensure that for any \(s_{i}\in\mathbb{R}\), \(i=0,1,2\), with \(s_{0}<s_{1}<s_{2}\) and \(\kappa>0\), there is a constant \(c_{\kappa}>0\) such that \[\|f\|_{\mathbf{B}^{s_{1}}_{\infty,\infty}}\leqslant\kappa\|f\|_{\mathbf{B}^{ s_{2}}_{\infty,\infty}}+c_{\kappa}\|f\|_{\mathbf{B}^{s_{0}}_{\infty,\infty}},\] In addition, for any \(s_{2}>s_{1}>0\), \[\|f\|_{\mathbf{B}^{s_{1}}_{\infty,\infty}}\leqslant\kappa\|f\|_{\mathbf{B}^{s _{2}}_{\infty,\infty}}+c_{\kappa}\|f\|_{\infty}. \tag{2.4}\] Recall the following Bernstein's inequality (cf. [2, Lemma 2.1]). **Lemma 2.3** (Bernstein's inequality).: _For every \(k\in\mathbb{N}_{0}\), there is a constant \(c=c(d,k)>0\) such that for all \(j\geqslant-1\) and \(1\leqslant p_{1}\leqslant p_{2}\leqslant\infty\),_ \[\|\nabla^{k}\mathcal{R}_{j}f\|_{p_{2}}\lesssim_{c}2^{(k+d(\frac{1}{p_{1}}- \frac{1}{p_{2}}))j}\|\mathcal{R}_{j}f\|_{p_{1}}.\] _In particular, for any \(s\in\mathbb{R}\) and \(1\leqslant p,q\leqslant\infty\),_ \[\|\nabla^{k}f\|_{\mathbf{B}^{s}_{p\neq q}}\lesssim_{c}\|f\|_{ \mathbf{B}^{s\neq k}_{p,q}}. \tag{2.5}\] It is worth discussing here the equivalence between the Besov and Holder spaces, which will be used in various contexts in this paper without much explanation. For \(s>0\), let \(\mathbf{C}^{s}(\mathbb{R}^{d})\) be the classical \(s\)-order Holder space consisting of all measurable functions \(f:\mathbb{R}^{d}\to\mathbb{R}\) with \[\|f\|_{\mathbf{C}^{s}}:=\sum_{j=0}^{[s]}\|\nabla^{j}f\|_{\infty }+[\nabla^{[s]}f]_{\mathbf{C}^{s-[s]}}<\infty,\] where \([s]\) denotes the greatest integer not more than \(s\), and \[\|f\|_{\infty}:=\sup_{x\in\mathbb{R}^{d}}|f(x)|,\quad[f]_{\mathbf{C }^{s}}:=\sup_{h\in\mathbb{R}^{d}}\frac{\|f(\cdot+h)-f(\cdot)\|_{\infty}}{|h|^{ \gamma}},\;\gamma\in(0,1).\] If \(s>0\) and \(s\notin\mathbb{N}\), we have the following equivalence between \(\mathbf{B}^{s}_{\infty,\infty}(\mathbb{R}^{d})\) and \(\mathbf{C}^{s}(\mathbb{R}^{d})\): (cf. [41]) \[\|f\|_{\mathbf{B}^{s}_{\infty,\infty}}\asymp\|f\|_{\mathbf{C}^{s}}.\] However, for any \(n\in\mathbb{N}_{0}\), we only have one side control that is \(\|f\|_{\mathbf{B}^{n}_{\infty,\infty}}\lesssim\|f\|_{\mathbf{C}^{n}}\). At the end of this subsection, we introduce the following product law (cf. [22, Lemma 2.1] or [6]) and interpolation inequality (cf. [2, Theorem 2.80]). **Lemma 2.4** (Product laws).: _For any \(s>0\) and \(\varepsilon>0\), there is a constant \(c=c(s,\varepsilon)>0\) such that_ \[\|fg\|_{\mathbf{B}^{-s}_{\infty,\infty}}\lesssim_{c}\|f\|_{ \mathbf{B}^{s\neq k}_{\infty,\infty}}\|g\|_{\mathbf{B}^{-s}_{\infty,\infty}}. \tag{2.6}\] **Lemma 2.5** (Interpolation inequality).: _Let \(s_{1},s_{2}\in\mathbb{R}\) with \(s_{2}>s_{1}\). For any \(p\in[1,\infty]\) and \(\theta\in(0,1)\), there is a constant \(c=c(s_{1},s_{2},p)>0\) such that_ \[\|f\|_{g^{n_{1}+(1-\theta)s_{2}}_{p,1}}\lesssim_{c}\|f\|^{\theta ^{s_{1}}}_{\mathbf{B}^{s_{1}}_{p,\infty}}\|f\|^{1-\theta}_{\mathbf{B}^{s_{2}}_ {p,\infty}}.\] _Furthermore, for any \(s_{2}>0>s_{1}\),_ \[\|f\|_{\infty}\lesssim_{c}\|f\|^{\theta}_{\mathbf{B}^{s_{1}}_{ \infty,\infty}}\|f\|^{1-\theta}_{\mathbf{B}^{s_{2}}_{\infty,\infty}}, \tag{2.7}\] _where \(\theta=s_{2}/(s_{2}-s_{1})\)._ ### \(\alpha\)-stable processes We call a \(\sigma\)-finite positive measure \(\nu\) on \(\mathbb{R}^{d}\) a Levy measure if \[\nu(\{0\})=0,\ \ \int_{\mathbb{R}^{d}}(1\wedge|z|^{2})\nu(\mathrm{d}z)<+\infty. \tag{2.8}\] Fix \(\alpha\in(0,2)\). Let \(L^{(\alpha)}_{t}\) be a \(d\)-dimentional \(\alpha\)-stable process with Levy measure (or \(\alpha\)-stable measure) \[\nu^{(\alpha)}(A)=\int_{0}^{\infty}\left(\int_{\mathbb{S}^{d-1}} \frac{1_{A}(r\theta)\Sigma(\mathrm{d}\theta)}{r^{1+\alpha}}\right)\mathrm{d}r,\ \ \ A\in\mathcal{B}(\mathbb{R}^{d}), \tag{2.9}\] where \(\Sigma\) is a finite measure over the unit sphere \(\mathbb{S}^{d-1}\) (called spherical measure of \(\nu^{(\alpha)}\)). We say an \(\alpha\)-stable measure \(\nu^{(\alpha)}\) is non-degenerate, if the assumption (**ND**) holds. Note that \(\alpha\)-stable process \(L^{(\alpha)}_{t}\) has the scaling property, \[(L^{(\alpha)}_{t})_{t\geqslant 0}\stackrel{{(d)}}{{=}}( \lambda^{-1/\alpha}L^{(\alpha)}_{\lambda t})_{t\geqslant 0},\ \ \forall\,\lambda>0, \tag{2.10}\] and for any \(\gamma_{2}>\alpha>\gamma_{1}\geqslant 0\), \[\int_{|z|\leqslant 1}|z|^{\gamma_{2}}\nu^{(\alpha)}(\mathrm{d}z)+ \int_{|z|>1}|z|^{\gamma_{1}}\nu^{(\alpha)}(\mathrm{d}z)<\infty. \tag{2.11}\] Moreover, it is easy to see that for any \(\lambda>0\) and \(p\geqslant 2\), \[\int_{\mathbb{R}^{d}}(1\wedge|\lambda z|^{p})\nu^{(\alpha)}( \mathrm{d}z)=\lambda^{\alpha}\int_{\mathbb{R}^{d}}(1\wedge|z|^{p})\nu^{( \alpha)}(\mathrm{d}z). \tag{2.12}\] Let \(N(\mathrm{d}r,\mathrm{d}z)\) be the associated Poisson random measure defined by \[N((0,t]\times A):=\sum_{s\in(0,t]}\mathbf{1}_{A}(L^{(\alpha)}_{s }-L^{(\alpha)}_{s-}),\ \ A\in\mathcal{B}(\mathbb{R}^{d}\setminus\{0\}),t>0.\] By Levy-Ito's decomposition (cf. [38, Theorem 19.2]), one sees that \[L^{(\alpha)}_{t}=\lim_{\varepsilon\downarrow 0}\int_{0}^{t}\int_{s<|z| \leqslant 1}z\tilde{N}(\mathrm{d}r,\mathrm{d}z)+\int_{0}^{t}\int_{|z|>1}zN( \mathrm{d}r,\mathrm{d}z).\] In the sequel, we always assume that \(\nu^{(\alpha)}\) is symmetric. Hence, we can write \[L^{(\alpha)}_{t}=\int_{0}^{t}\int_{|z|\leqslant c}z\tilde{N}( \mathrm{d}r,\mathrm{d}z)+\int_{0}^{t}\int_{|z|>c}zN(\mathrm{d}r,\mathrm{d}z), \ \ \forall c>0. \tag{2.13}\] The following moment estimate is taken from [12, Lemma 2.4] with some slight modification. For the readers' convenience, we provide detailed proof here. **Lemma 2.6**.: _Let \(T,\delta>0\). Assume that \(0\leqslant\tau_{1}<\tau_{2}\leqslant\tau_{1}+\delta\leqslant T\) are two bounded stopping times and \(p\in(0,\alpha)\). Let \(g:\mathbb{R}_{+}\times\Omega\to\mathbb{M}^{d}_{non}\) be a bounded predictable process, where \(\mathbb{M}^{d}_{non}\) is the set of all non-singular \(d\times d\) matrices. Then, there is a constant \(c=c(d,\alpha,p,T,\nu^{(\alpha)})>0\) such that_ \[\mathbb{E}\left|\int_{\tau_{1}}^{\tau_{2}}g(r)\mathrm{d}L^{( \alpha)}_{r}\right|^{p}\lesssim_{c}\delta^{p/\alpha}\mathbb{E}\|g\|_{L^{\infty} ([0,T))}^{p}. \tag{2.14}\] Proof.: Noticing that Poisson measures are counting measures, by (2.13), we have \[\int_{\tau_{1}}^{\tau_{2}}g(r)\mathrm{d}L^{(\alpha)}_{r}=\int_{0}^{T}\int_{|z| \leqslant\delta^{1/\alpha}}\tilde{g}(r,z)\tilde{N}(\mathrm{d}r,\mathrm{d}z)+ \int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}\tilde{g}(r,z)N(\mathrm{d}r,\mathrm{d}z),\] where \(\tilde{g}(r,z):=g(r)z\mathbf{1}_{(\tau_{1},\tau_{2}]}(r)\) is left continuous. On the one hand, by Jensen's inequality and the isometry of stochastic integral, \[\mathbb{E}\left|\int_{0}^{T}\int_{|z|<\delta^{1/\alpha}}\tilde{g} (r,z)\tilde{N}(\mathrm{d}r,\mathrm{d}z)\right|^{p} \leqslant\left[\mathbb{E}\left|\int_{0}^{T}\int_{|z|<\delta^{1/ \alpha}}\tilde{g}(r,z)\tilde{N}(\mathrm{d}r,\mathrm{d}z)\right|^{2}\right]^{p/2}\] \[=\left[\mathbb{E}\int_{0}^{T}\int_{|z|<\delta^{1/\alpha}}|\tilde{ g}(r,z)|^{2}\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\right]^{p/2}\] \[\leqslant T^{p/2}\mathbb{E}\|g\|_{L^{\infty}([0,T])}^{p}\left[ \int_{|z|<\delta^{1/\alpha}}|z|^{2}\nu^{(\alpha)}(\mathrm{d}z)\right]^{p/2}\] \[\lesssim\delta^{p/\alpha}\mathbb{E}\|g\|_{L^{\infty}([0,T])}^{p},\] where we used (2.9) in the last inequality. On the other hand, using Burkholder's inequality (cf. [39, Lemma 2.3]) and (2.9) for \(p\in(1,\alpha)\), we have \[\mathbb{E}\left|\int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}\tilde{g} (r,z)N(\mathrm{d}r,\mathrm{d}z)\right|^{p} \leqslant\mathbb{E}\left(\int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}| \tilde{g}(r,z)|\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\right)^{p}\] \[\quad+\mathbb{E}\int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}|\tilde{ g}(r,z)|^{p}\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\] \[\lesssim T^{p}\mathbb{E}\|g\|_{L^{\infty}([0,T])}^{p}\left(\int_{|z |>\delta^{1/\alpha}}|z|\nu^{(\alpha)}(\mathrm{d}z)\right)^{p}\] \[\quad+T\mathbb{E}\|g\|_{L^{\infty}([0,T])}^{p}\int_{|z|>\delta^{1/ \alpha}}|z|^{p}\nu^{(\alpha)}(\mathrm{d}z)\lesssim\delta^{p/\alpha}\mathbb{E} \|g\|_{L^{\infty}([0,T])}^{p}.\] Observing that \(|\sum_{i=1}^{n}a_{i}|^{p}\leqslant(n^{p-1}\lor 1)\sum_{i=1}^{n}|a_{i}|^{p}= \sum_{i=1}^{n}|a_{i}|^{p}\) when \(p\in(0,1]\), we get \[\mathbb{E}\left|\int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}\tilde{g} (r,z)N(\mathrm{d}r,\mathrm{d}z)\right|^{p} \leqslant\mathbb{E}\int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}| \tilde{g}(r,z)|^{p}N(\mathrm{d}r,\mathrm{d}z)\] \[=\mathbb{E}\int_{0}^{T}\int_{|z|>\delta^{1/\alpha}}|\tilde{g}(r,z) |^{p}\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\lesssim\delta^{p/\alpha}\mathbb{E} \|g\|_{L^{\infty}([0,T])}^{p},\] where the equality is yielded by the property of martingales (cf. [27]). The above calculations derive the desired estimates. ### Time-dependent Levy-type operator Fix \(\alpha\in(1,2)\). We start with the following time-inhomogeneous Levy process: for \(0\leqslant t<\infty\), \[L^{\sigma}_{t}:=\int_{0}^{t}\sigma_{r}\mathrm{d}L^{(\alpha)}_{r}=\int_{0}^{t} \int_{|z|\leqslant 1}\sigma_{r}z\tilde{N}(\mathrm{d}r,\mathrm{d}z)+\int_{0}^{t} \int_{|z|>1}\sigma_{r}zN(\mathrm{d}r,\mathrm{d}z), \tag{2.15}\] where \(\hat{N}(\mathrm{d}r,\mathrm{d}z):=N(\mathrm{d}r,\mathrm{d}z)-\nu^{(\alpha)}( \mathrm{d}z)\mathrm{d}r\) is the compensated Poisson random measure, and \(\sigma:\mathbb{R}_{+}\to\mathbb{M}_{non}^{d}\) is a bounded measurable function. Define \[P_{s,t}^{\sigma}f(x):=\mathbb{E}(x+\int_{s}^{t}\sigma_{r}\mathrm{d}L_{r}^{( \alpha)})\] for all \(f\in C_{b}^{2}(\mathbb{R}^{d})\). By Ito's formula (cf. [27, Theorem 5.1 of Chapter II]), one sees that \[\partial_{t}P_{s,t}^{\sigma}f(x)=\mathscr{L}_{\sigma_{t}}^{(\alpha)}P_{s,t}^{ \sigma}f(x),\] where \[\mathscr{L}_{\sigma_{t}}^{(\alpha)}f(x):=\int_{\mathbb{R}^{d}}\Big{(}f(x+ \sigma_{t}z)-f(x)-\sigma_{t}z\mathbf{1}_{|z|\leqslant 1}\cdot\nabla f(x)\Big{)} \nu^{(\alpha)}(\mathrm{d}z).\] Below, we always make the following assumption in this subsection: **(H0)** There is a constant \(a_{0}>1\) such that \[a_{0}|\xi|\leqslant|\sigma_{t}\xi|\leqslant a_{0}^{-1}|\xi|,\ \ \forall(t,\xi)\in\mathbb{R}_{+}\times\mathbb{R}^{d}.\] Under the assumption **(H0)**, owing to Levy-Khintchine's formula (cf. [38, Theorem 8.1]) and (2.9), for all \(|\xi|\geqslant 1\), we have \[|\mathbb{E}\mathrm{e}^{i\xi\cdot L_{t}^{\sigma}}| \leqslant \exp\left(t\int_{\mathbb{R}^{d}}(\cos(\xi\cdot\sigma_{t}z)-1)\nu^ {(\alpha)}(\mathrm{d}z)\right)\] \[\leqslant \exp\left(-t|\xi|^{\alpha}\int_{0}^{\infty}\int_{\mathbb{S}^{d-1 }}\frac{1-\cos(\xi/|\xi|\cdot\sigma_{t}r\theta)}{r^{1+\alpha}}\Sigma(\mathrm{ d}\theta)\mathrm{d}r\right)\leqslant\mathrm{e}^{-cf|\xi|^{\alpha}},\] where the constant \(c>0\) depends only on \(\alpha\) and \(\Sigma(\mathbb{S}^{d-1})\). Hence, by [38, Proposition 28.1], the random variable \(L_{t}^{\sigma}\) defined by (2.15) admits a smooth density \(p^{\sigma}(t,x)\) given by Fourier's inverse transform \[p^{\sigma}(t,x)=(2\pi)^{-d/2}\int_{\mathbb{R}^{d}}\mathrm{e}^{-ix\xi}\mathbb{E }\mathrm{e}^{i\xi\cdot L_{t}^{\sigma}}\mathrm{d}\xi,\ \ \forall t>0,\] and the partial derivatives of \(p^{\sigma}(t,\cdot)\) at any orders tend to \(0\) as \(|x|\to\infty\). Furthermore, by [11, Lemma 3.2] or [12, Lemma 2.5], for each \(0\leqslant t<\infty\), \(p^{\sigma}(t,x)\) satisfies that for any \(k\in\mathbb{N}_{0}\) and \(0\leqslant\beta<\alpha\), \[\int_{\mathbb{R}^{d}}|x|^{\beta}|\nabla^{k}p^{\sigma}(t,x)|\mathrm{d}x \lesssim_{c}t^{-\frac{k-\beta}{\alpha}}, \tag{2.16}\] where \(c=c(a_{0},k,d,\alpha,\beta)>0\). We need the following heat kernel estimates in integral form with Littlewood-Paley's decomposition, which is obtained in [11, Lemma 3.3 ] (see also [23, Lemma 2.12]). **Lemma 2.7** (Heat kernel estimates).: _Let \(\alpha\in(0,2)\). Suppose that_ **(H0)** _holds with constant \(a_{0}\in(0,1)\). For any \(\vartheta\geqslant 0\) and \(\gamma\in[0,\alpha)\), there is a constant \(c>0\) such that for all \(0\leqslant s<t<\infty\) and \(j\geqslant-1\),_ \[\int_{\mathbb{R}^{d}}|x|^{\gamma}|\mathcal{R}_{j}p_{s,t}^{\sigma}(x)|\mathrm{ d}x\lesssim_{c}(t-s)^{-\frac{\vartheta-\gamma}{\alpha}}2^{-j\vartheta}, \tag{2.17}\] _where the block operators \(\mathcal{R}_{j}\) are defined by (2.2). In particular,_ \[\int_{0}^{t}\int_{\mathbb{R}^{d}}|x|^{\gamma}|\mathcal{R}_{j}p_{s, \cdot}^{\sigma}(x)|\mathrm{d}x\mathrm{d}s\lesssim_{c}2^{-j\alpha}. \tag{2.18}\] In particular, when \(\sigma_{t}\) is always equal to the identity matrix \(\mathbb{I}_{d\times d}\), one has \[\mathcal{L}_{\mathbb{I}_{d\times d}}^{(\alpha)}f(x)=\int_{\mathbb{R}^{d}} \Big{(}f(x+z)-f(x)-z\mathbf{1}_{|\xi|\leqslant 1}\cdot\nabla f(x)\Big{)}\nu^{( \alpha)}(\mathrm{d}z), \tag{2.19}\] which is the infinitesimal generator of \(\alpha\)-stable process \(L^{(\alpha)}\) (cf. [38, Theorem 31.5]). Consider the following equation: \[\partial_{t}u=\mathcal{L}_{\mathbb{I}_{d\times d}}^{(\alpha)}u, \quad u(0)=f,\] where \(f\in C_{b}^{\infty}(\mathbb{R}^{d})\). By Ito's formula (cf. [27, Theorem 5.1 of Chapter II]), it is easy to check that \[u(t,x)=\mathbb{E}f(x+L_{t}^{(\alpha)})=(p_{t}*f)(x),\] where \(*\) stands for the convolution operation and \(p_{t}(x):=p^{\mathbb{I}_{d\times d}}(t,x)\). Then for \(k=0,1\), by (2.16), \[\|\nabla^{k}u(t)\|_{\infty}\lesssim t^{-\frac{k}{\alpha}}\|f\|_{ \infty}. \tag{2.20}\] Moreover, by (2.3) and (2.17), we have \[\|\nabla^{k}\mathcal{L}_{\mathbb{I}_{d\times d}}^{(\alpha)}u(t)\| _{\infty} \lesssim\sum_{j\beta-1}\|\nabla^{k}\mathcal{L}_{\mathbb{I}_{d \times d}}^{(\alpha)}\mathcal{R}_{j}p_{t}*f\|_{\infty}\lesssim\sum_{j\beta-1} 2^{(k+\alpha)j}\|\mathcal{R}_{j}p_{t}\|_{1}\|f\|_{\infty}\] \[\lesssim\sum_{j\beta-1}2^{(k+\alpha)j}\left([2^{-(k+\alpha+1)j}t ^{-\frac{k+\alpha}{\alpha}}]\wedge 1\right)\|f\|_{\infty}\lesssim t^{-\frac{k+\alpha}{ \alpha}}\|f\|_{\infty},\] where we used the following estimate in the last step: for any \(0<\beta<\gamma\) and \(\lambda>0\), \[\sum_{j>0}2^{\beta j}\left([2^{-\gamma j}\lambda]\wedge 1\right) \leqslant\lambda\wedge 1+\int_{0}^{\infty}2^{\beta s}\left([2^{- \gamma s}\lambda]\wedge 1\right)\mathrm{d}s\] \[\lesssim\lambda\wedge 1+\lambda^{\frac{\beta}{\gamma}}\int_{ \lambda^{-1/\gamma}}^{\infty}r^{\beta-1}\left(r^{-\gamma}\wedge 1\right)\mathrm{d}r \lesssim\lambda^{\frac{\beta}{\gamma}}.\] Hence, for all \(0\leqslant s<t\leqslant T\), \[|\nabla^{k}u(t,x)-\nabla^{k}u(s,x)| =\left|\int_{s}^{t}\nabla^{k}\partial_{r}u(r,x)\mathrm{d}r \right|=\left|\int_{s}^{t}\nabla^{k}\mathcal{L}_{\mathbb{I}_{d\times d}}^{( \alpha)}u(r,x)\mathrm{d}r\right|\] \[\lesssim\|f\|_{\infty}\int_{s}^{t}r^{-\frac{k+\alpha}{\alpha}} \mathrm{d}r\lesssim s^{-\frac{k+\alpha}{\alpha}}(t-s)\|f\|_{\infty}.\] Combining this with (2.20), we deduce that for all \(0\leqslant s<t\leqslant T\), \[\|\nabla^{k}u(t)-\nabla^{k}u(s)\|_{\infty}\lesssim\left[s^{-\frac{k}{\alpha}} \wedge(s^{-\frac{k+\alpha}{\alpha}}(t-s))\right]\|f\|_{\infty}. \tag{2.21}\] We conclude this section with the statement of Volterra-type Gronwall's inequality which is taken from [44, Lemma 2.2]. **Lemma 2.8**.: _Let \(f\in L^{1}_{loc}(\mathbb{R}_{+};\mathbb{R}_{+})\) and \(T>0\). Assume that for some \(\gamma_{1},\gamma_{2}\in[0,1)\) and \(c_{1},c_{2}>0\),_ \[f(t)\leqslant c_{1}t^{-\gamma_{1}}+c_{2}\int_{0}^{t}(t-s)^{-\gamma_{2}}f(s) \mathrm{d}s,\ \ t\in(0,T].\] _Then there is a constant \(c_{3}=c_{3}(c_{2},T,\gamma_{1},\gamma_{2})>0\) such that for all \(t\in(0,T]\),_ \[f(t)\leqslant c_{3}c_{1}t^{-\gamma_{1}}.\] ## 3. Nonlocal equations with Singular Levy measures Fix \(\alpha\in(1,2)\). Let \(\sigma(t,x):\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{M}^{d}_{non}\) be a bounded measurable function. In this section, we establish Schauder's estimate and obtain the well-posedness for the following non-local parabolic equation with time-dependent variable diffusion coefficient \(\sigma_{t}(x):=\sigma(t,x)\): \[\partial_{t}u=\mathscr{L}^{(\alpha)}_{\sigma}u+b\cdot\nabla u+f,\ \ u(0)=u_{0}, \tag{3.1}\] where \(b,f\in L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}\) with some \(\beta\in(0,\frac{\alpha-1}{2})\), \(\sigma\) satisfies the condition (\(\mathbf{H}^{\sigma}\)) with constant \(c_{0}\), and \(\mathscr{L}^{(\alpha)}_{\sigma}\) is given by (1.6): \[\mathscr{L}^{(\alpha)}_{\sigma}g(t,x):=\int_{\mathbb{R}^{d}}\Big{(}g(t,x+ \sigma_{t}(x)z)-g(t,x)-\sigma_{t}(x)z\cdot\nabla g(t,x)\Big{)}\nu^{(\alpha)}( \mathrm{d}z). \tag{3.2}\] Here \(\nu^{(\alpha)}\) is defined by (2.9) and satisfies condition (\(\mathbf{ND}\)). It is well known that the best regularity of the solution \(u(t)\) is in \(\mathbf{B}^{\alpha-\beta}_{\infty,\infty}\), but the domain of the operator \(\mathscr{L}^{(\alpha)}_{\sigma_{t}}\) is \(\cup_{x\geqslant 0}\mathbf{B}^{\alpha+\varepsilon}_{\infty,\infty}\). Therefore, in order to define the solutions of PDE (3.1), we fisrt extend the domain of \(\mathscr{L}^{(\alpha)}_{\sigma_{t}}\). Before that, we introduce some notations. For any \(x,y,z\in\mathbb{R}^{d}\) with \(|z|\leqslant\frac{1}{2}c_{0}^{-1}\), define \[\sigma_{t}^{y}(x):=\sigma(t,x+y),\ \ \text{and}\ \ \Lambda_{\sigma_{t}^{y},z}(x):=x+ \sigma_{t}^{y}(x)z.\] By [45, Lemma2.1], it is easy to see that for any \(t\geqslant 0\) and \(x_{1},x_{2}\in\mathbb{R}^{d}\), \[\frac{1}{2}|x_{1}-x_{2}|\leqslant|\Lambda_{t,z}^{y}(x_{1})-\Lambda_{t,z}^{y}(x _{2})|\leqslant 2|x_{1}-x_{2}|.\] Define \[\mathscr{D}_{\sigma_{t}^{y},z}f(x):=f(x+\sigma_{t}^{y}(x)z)-f(x+\sigma_{t}^{y }(0)z)-(\sigma_{t}^{y}(x)-\sigma_{t}^{y}(0))z\cdot\nabla f(x).\] Then, by [11, Lemma 2.2], for any \(f\in C^{1}\), \(g\in W^{2,1}\) and \(\theta\in[0,1]\), we have \[|\langle\mathscr{D}_{\sigma_{t}^{y},z}f,g\rangle|\leqslant c_{d,\theta}|z|^{1 +\theta}\|f\|_{\mathbf{B}^{\theta}_{\infty,\infty}}\Big{(}\sum_{j=0}^{1}\mu_{ j}(|\nabla^{j}g|)+\mu_{1+\theta}(|\nabla^{2}g|)^{\theta}\mu_{1+\theta}(|\nabla g |)^{1-\theta}\Big{)}, \tag{3.3}\] where the constant \(c_{d,\theta}>0\) is independent of the variables \(t,y,z\), and \[\mu_{\theta}(\mathrm{d}x):=(|x|\wedge 1)^{\theta}\mathrm{d}x,\ \ \text{and}\ \ \mu_{\theta}(f):=\int_{\mathbb{R}^{d}}f(x)\mu_{\theta}(\mathrm{d}x). \tag{3.4}\] In particular, \[|\langle\mathcal{D}_{\sigma^{x}_{t},z}f,g\rangle|\leqslant c_{d}|z|^{2}\|f\|_{ \mathbf{B}^{1}_{\infty,\infty}}\sum_{j=0}^{2}\mu_{j}(|\nabla^{j}g|). \tag{3.5}\] The following lemma is crucial to give Definition 3.2. **Lemma 3.1** ( Boundedness of operator \(\mathcal{L}_{\sigma}^{(\alpha)}\)).: _Let \(\alpha\in(1,2)\) and \(\beta\in(0,1)\). Under the condition \((\mathbf{H}^{\sigma})\), there is a constant \(c=c(\alpha,\beta,c_{0})>0\) such that for any \(u\in C^{\infty}_{b}(\mathbb{R}^{d})\) and \(t\geqslant 0\),_ \[\|\mathcal{L}_{\sigma}^{(\alpha)}u(t)\|_{\mathbf{B}^{-\beta}_{\infty,\infty}} \lesssim_{c}\|u(t)\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty}}, \tag{3.6}\] _where \(\mathcal{L}_{\sigma}^{(\alpha)}\) is given by (3.2)._ Proof.: For simplicity, we drop the time variable \(t\) and the superscript \(\alpha\) in \(\nu^{(\alpha)}\) in the following proof. Observe that \[\mathcal{R}_{j}\mathcal{L}_{\sigma}^{(\alpha)}u(x)=\mathcal{L}_{\sigma}^{( \alpha)}\mathcal{R}_{j}u(x)+[\mathcal{R}_{j},\mathcal{L}_{\sigma}^{(\alpha)}] \mu(x),\] where \([\mathcal{A}_{1},\mathcal{A}_{2}]:=\mathcal{A}_{1}\mathcal{A}_{2}-\mathcal{A} _{2}\mathcal{A}_{1}\) called commutator operator. Define \[u^{x}(y):=u(y+x).\] Recall the definition (2.2) of block operators \(\mathcal{R}_{j}\). By the change of variable, we have \[\int_{\mathbb{R}^{d}}\breve{\psi}_{j}(x-y)\Big{(}u(y+\sigma(y)z)- u(y+\sigma(x)z)-(\sigma(y)-\sigma(x))z\cdot\nabla u(y)\Big{)}\mathrm{d}y\] \[=\int_{\mathbb{R}^{d}}\breve{\psi}_{j}(-y)\Big{(}u^{x}(y+\sigma^{ x}(y)z)-u^{x}(y+\sigma^{x}(0)z)-(\sigma^{x}(y)-\sigma^{x}(0))z\cdot\nabla u^{x}( y)\Big{)}\mathrm{d}y,\] which yields \[[\mathcal{R}_{j},\mathcal{L}_{\sigma}^{(\alpha)}]u(x)=\int_{\mathbb{R}^{d}} \langle\mathcal{D}_{\sigma^{x},z}u^{x},\breve{\psi}_{j}(-\cdot)\rangle\nu^{( \alpha)}(\mathrm{d}z).\] For \(|z|\leqslant\delta\leqslant\frac{1}{2c_{0}}\), based on the fact \[\mu_{\delta}(|\nabla^{k}\breve{\psi}_{j}|)\lesssim 2^{(k-\delta)j},\ \ \delta\geqslant 0,\ k\in\mathbb{N}_{0},\] and (3.3) with \(\theta\in(\alpha-1,\alpha-\beta)\), one sees that \[\sup_{x}|\langle\mathcal{D}_{\sigma^{x},z}u^{x},\breve{\psi}_{j}(-\cdot) \rangle|\lesssim|z|^{1+\theta}\|u\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty} }\Big{(}1+2^{\theta(2-1-\theta)j}2^{(1-\theta)(1-1-\theta)j}\Big{)}\lesssim|z |^{1+\theta}\|u\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty}}.\] For \(|z|>\delta\), by the mean-value theorem and (1.2), we get \[\sup_{x}|\langle\mathcal{D}_{\sigma^{x},z}u^{x},\breve{\psi}_{j}(-\cdot) \rangle|\leqslant 2c_{0}|z|\|\nabla u\|_{\infty}\|\breve{\psi}\|_{1}\lesssim|z| \|u\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty}},\] where we used the fact \(\alpha-\beta>1\). Hence, by (2.11), we obtain that \[\|[\mathcal{R}_{j},\mathcal{L}_{\sigma}^{(\alpha)}]u\|_{\infty} \leqslant\int_{\mathbb{R}^{d}}\sup_{x}|\langle\mathcal{D}_{\sigma^{x},z}u^{x}, \breve{\psi}_{j}(-\cdot)\rangle|\nu(\mathrm{d}z)\lesssim\|u\|_{\mathbf{B}^{ \alpha-\beta}_{\infty,\infty}}. \tag{3.7}\] On the other hand, through Bernstein's inequality Lemma 2.3, we have \[\sup_{x}|\mathcal{R}_{j}\mu(x+\sigma(x)z)-\mathcal{R}_{j}\mu(x)- \sigma(x)z\cdot\nabla\mathcal{R}_{j}\mu(x)|\lesssim(|2^{j}z|\wedge|2^{j}z|^{2}) \|\mathcal{R}_{j}\mu\|_{\infty},\] which implies that \[\|\mathcal{L}_{\sigma}^{(\alpha)}\mathcal{R}_{j}\mu\|_{\infty} \lesssim\|\mathcal{R}_{j}\mu\|_{\infty}\int_{\mathbb{R}^{d}}(|2^{j}z|\wedge|2^ {j}z|^{2})\nu(\mathrm{d}z)\lesssim 2^{\alpha j}\|\mathcal{R}_{j}\mu\|_{\infty},\] provided by the scaling property (2.10) and (2.11). Therefore, combining this with (3.7), we have \[\|\mathcal{R}_{j}\mathcal{L}_{\sigma}^{(\alpha)}u\|_{\infty} \lesssim\|u\|_{\mathbf{B}_{\infty,\infty}^{\alpha-\beta}}+2^{\alpha j}\| \mathcal{R}_{j}\mu\|_{\infty}\lesssim 2^{\beta j}\|u\|_{\mathbf{B}_{\infty, \infty}^{\alpha-\beta}},\] which derives (3.6) by taking the supremum of \(j\). The proof is completed. Based on Lemma 3.1, we extend the domain of the linear operator \(\mathcal{L}_{\sigma_{t}}^{(\alpha)}\) from \(\mathbf{B}_{\infty,\infty}^{\alpha+\varepsilon}\) to \(\mathbf{B}_{\infty,\infty}^{\alpha-\beta}\) with \(\beta\in(0,1)\). Now, we can state the definitions of solutions to PDE (3.1). **Definition 3.2** (Solutions).: _Let \(\alpha\in(1,2)\), \(\beta\in(0,1)\) and assume (\(\mathbf{H}^{\sigma}\)) holds with constant \(c_{0}\). For any \(T>0\), \(u_{0}\in\mathbf{B}_{\infty,\infty}^{\alpha-\beta}\), and \(b,f\in L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}\), we call a function \(u\in\cup_{\varepsilon>0}L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{1+\beta+ \varepsilon}\) a solution to PDE (3.1) on \([0,T]\), if for any \(t\in[0,T]\),_ \[u(t)=u_{0}+\int_{0}^{t}\Big{(}\mathcal{L}_{\sigma}^{(\alpha)}u+b \cdot\nabla u+f\Big{)}(s)\mathrm{d}s, \tag{3.8}\] _where \(\mathcal{L}_{\sigma}^{(\alpha)}\) is defined by (3.2)._ **Remark 3.3**.: _Notice that every term in (3.8) is well defined since we have (3.6) and \(1+\beta>\alpha-1\)._ Here is our main result in this section. **Theorem 3.4**.: _Let \(\alpha\in(1,2)\), \(T>0\) and assume (\(\mathbf{H}^{\sigma}\)) holds with constant \(c_{0}\). For any \(\beta\in(0,\frac{\alpha-1}{2})\), \(\gamma>\frac{\alpha}{\alpha-1-2\beta}\), \(u_{0}\in\mathbf{B}_{\infty,\infty}^{\alpha-\beta}\), and \(b,f\in L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}\), there is a unique solution \(u\) to PDE (3.1) in the sense of Definition 3.2 satisfying_ \[\|u\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{\alpha-\beta}} \lesssim_{c}(1+\|b\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}})^{ \gamma}\Big{(}\|u_{0}\|_{\mathbf{B}_{\infty,\infty}^{\alpha-\beta}}+\|f\|_{L_ {T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}}\Big{)}, \tag{3.9}\] _where \(c>0\) is a constant only depending on \(d,\alpha,\beta,T,c_{0}\). Moreover, for all \(s,t\in[0,T]\),_ \[\|u(t)-u(s)\|_{\infty}\lesssim_{c}|t-s|^{\frac{\alpha-\beta}{ \alpha}}(1+\|b\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}})^{ \frac{\alpha-\beta}{\alpha}+\gamma}\Big{(}\|u_{0}\|_{\mathbf{B}_{\infty,\infty }^{\alpha-\beta}}+\|f\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}} \Big{)}. \tag{3.10}\] ### \(\lambda\)-dissipative equation with \(\lambda\geqslant 0\) To prove Theorem 3.4, we introduce the following \(\lambda\)-dissipative equation with \(\lambda\geqslant 0\): \[\partial_{t}\mu^{\lambda}=\mathcal{L}_{\sigma}^{(\alpha)}u^{ \lambda}-\lambda u^{\lambda}+b\cdot\nabla u^{\lambda}+f,\ \ u^{\lambda}(0)=u_{0}^{\lambda}. \tag{3.11}\] In this section, we establish a priori estimates, Lemma 3.7, for \(\lambda\)-dissipative PDE (3.11) under the case of smooth coefficients \(b,f\). #### 3.1.1. The zero-drift case In this part, we assume \(b\equiv 0\) and investigate the following \(\lambda\)-dissipative equation with \(\lambda\geqslant 0\): \[\partial_{t}w=\mathscr{L}_{\sigma}^{(\alpha)}w-\lambda w+f_{1}+f_{2},\ \ w(0)=w_{0}, \tag{3.12}\] where \(f_{1},f_{2},w_{0}\) are smooth functions. We first show the following result with \(\lambda\) large enough. **Lemma 3.5**.: _Fix \(T>0\). Let \(\alpha\in(1,2)\), \(\beta\in(0,\frac{\alpha-1}{2})\), and \(f_{1},f_{2},w_{0}\) be smooth functions. Assume \((\mathbf{H}^{\sigma})\) holds with constant \(c_{0}\). If \(w\) is a classical solution to PDE (3.12), then for any \(\theta\in[0,\alpha-\beta]\), there are constants \(\lambda_{0}>1\), and \(c>0\) depending on \(d,\alpha,\beta,\theta,c_{0},T\) such that for all \(\lambda\geqslant\lambda_{0}\),_ \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta-\theta}_{\infty,\theta}} \lesssim_{c}\|w_{0}\|_{\mathbf{B}^{\alpha-\beta-\theta}_{\infty,\theta}}+( \lambda+1)^{-\frac{\theta}{\alpha}}\left(\|w_{0}\|_{\mathbf{B}^{\alpha-\beta }_{\infty,\theta}}+\|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty, \theta}}\right)+(\lambda+1)^{-\frac{\theta+\theta}{\alpha}}\|f_{2}\|_{L^{ \infty}_{T}}, \tag{3.13}\] _and in particular, when \(f_{2}=0\),_ \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{0}_{\infty,1}}\lesssim_{c}\|w_{0}\|_{ \mathbf{B}^{0}_{\infty,1}}+(\lambda+1)^{-\frac{\alpha-\theta}{\alpha}}\left( \|w_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\theta}}+\|f_{1}\|_{L^{\infty}_{ T}\mathbf{B}^{-\beta}_{\infty,\theta}}\right). \tag{3.14}\] To prove this result, we only need to establish a priori estimates (3.13) and (3.14). For simplicity, we assume that \(w_{0}=0\). If \(w_{0}\neq 0\), we shall substitute \(w\) and \(f\) in (3.12) by \(\bar{w}(t):=w(t)-\mathrm{e}^{-\lambda t}w_{0}\) and \(\bar{f_{1}}:=f_{1}+\mathrm{e}^{-\lambda t}\mathscr{L}_{\sigma}^{(\alpha)}w_{0}\), respectively. Denote \(f:=f_{1}+f_{2}\). Fix \(y\in\mathbb{R}^{d}\). For any function \(h\), let \(h^{y}(x):=h(x+y)\). Then, we have \[\partial_{t}w^{y}=\mathscr{L}_{y}^{(\alpha)}w^{y}-\lambda w^{y}+\mathscr{A}w^ {y}+f^{y},\] where \(\mathscr{A}:=\mathscr{L}_{\sigma^{y}}^{(\alpha)}-\mathscr{L}_{y}^{(\alpha)}\) and \[\mathscr{L}_{y}^{(\alpha)}g(t,x):=\mathscr{L}_{\sigma^{y}(0)}^{(\alpha)}g(t,x )=\int_{\mathbb{R}^{d}}(g(t,x+\sigma_{t}^{y}(0)z)-g(t,x)-\sigma_{t}^{y}(0)z \cdot\nabla g(t,x))\nu^{(\alpha)}(\mathrm{d}z).\] By subsection 2.3 and [11, Section 3], one sees that the operator \(\partial_{t}-(\mathscr{L}_{y}^{\alpha}-\lambda)\) associates to a semigroup \(\mathrm{e}^{-\lambda(t-s)}P_{s,t}^{\sigma^{y}(0)}\), i.e. \[w^{y}(t,x)=\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}P_{s,t}^{\sigma^{y}(0)}\Big{(} \mathscr{A}w^{y}+f^{y}\Big{)}(s,x)\mathrm{d}s,\] and \(p_{s,t}^{\sigma^{y}(0)}\) is the heat kernel of \(P_{s,t}^{\sigma^{y}(0)}\). For the sake of simplicity, in the sequel, we write \(P_{s,t}^{(y)}=P_{s,t}^{\sigma^{y}(0)}\) and \(p_{s,t}^{(y)}=p_{s,t}^{\sigma^{y}(0)}\). Thus, for \(j\geqslant-1\), acting on both sides of the above equations by \(\mathcal{R}_{j}\), we get that \[\mathcal{R}_{j}w(t,y)=\mathcal{R}_{j}w^{y}(t,0)=\int_{0}^{t}\mathrm{e}^{- \lambda(t-s)}\mathcal{R}_{j}P_{s,t}^{(y)}\Big{(}\mathscr{A}w^{y}+f_{1}^{y}+f_ {2}^{y}\Big{)}(s,0)\mathrm{d}s. \tag{3.15}\] Let us separately estimate the terms on the right-hand side of (3.15). **Lemma 3.6**.: _Fix \(T>0\). Suppose that \(\alpha\in(1,2)\) and \(\beta\in(0,\frac{\alpha-1}{2})\). Assume \((\mathbf{H}^{\sigma})\) holds with constant \(c_{0}\). Let \(\beta_{1}=\beta\), \(\beta_{2}=0\), and \(i=1,2\). There is a constant \(c>0\) such that for any \(\vartheta\in[0,\alpha]\),_ \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t}^{(y)}f_{i} ^{y}(s,0)\Big{|}\mathrm{d}s\lesssim_{c}2^{-(\alpha-\beta_{i}-\vartheta)j}( \lambda+1)^{-\frac{\vartheta}{\alpha}}\|f_{i}\|_{L^{\infty}_{T}\mathbf{B}^{- \beta_{i}}_{\infty,\infty}}, \tag{3.16}\] _and for any \(0\leqslant\eta<\alpha-\varepsilon\) with \(\alpha>\varepsilon>0\),_ \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t}^{(y)} \mathcal{A}w^{y}(s,0)\Big{|}\mathrm{d}s\lesssim_{c}2^{-\eta j}(\lambda+1)^{- \frac{\alpha-\eta-\varepsilon}{\alpha}}\|w\|_{L^{\infty}_{T}\mathbf{B}^{- \beta_{i+\varepsilon}}_{\infty,\infty}}. \tag{3.17}\] Proof.: For the first one, by Remark 2.1 and (2.17), we have that for any \(\vartheta\in(0,\alpha]\), \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t }^{(y)}f_{i}^{y}(s,0)\Big{|}\mathrm{d}s \leqslant\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\int_{\mathbb{R}^{ d}}|\mathcal{R}_{j}p_{s,t}^{(y)}(x)\widetilde{\mathcal{R}}_{j}f_{i}(s,x+y)| \mathrm{d}x\mathrm{d}s\] \[\lesssim 2^{\beta_{i}j}\|f_{i}\|_{L^{\infty}_{T}\mathbf{B}^{- \beta_{i}}_{\infty,\infty}}\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}(t-s)^{-\frac {\alpha-\vartheta}{\alpha}}2^{-j(\alpha-\vartheta)}\mathrm{d}s\] \[\lesssim 2^{-(\alpha-\beta_{i}-\vartheta)j}(\lambda+1)^{- \frac{\vartheta}{\alpha}}\|f_{i}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta_{i}}_{ \infty,\infty}},\] where the last inequality is provided by a change of variables and the definitions (1.8) of Gamma functions; and for \(\vartheta=0\), similarly, by Remark 2.1 and (2.18), \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t }^{(y)}f_{i}^{y}(s,0)\Big{|}\mathrm{d}s \lesssim 2^{\beta_{i}j}\|f_{i}\|_{L^{\infty}_{T}\mathbf{B}^{- \beta_{i}}_{\infty,\infty}}\int_{0}^{t}\int_{\mathbb{R}^{d}}|\mathcal{R}_{j}p_{ s,t}^{(y)}(x)|\mathrm{d}x\mathrm{d}s\] \[\lesssim 2^{-(\alpha-\beta_{i})j}\|f_{i}\|_{L^{\infty}_{T}\mathbf{B }^{-\beta_{i}}_{\infty,\infty}}.\] For the second one, applying [11, Lemma 4.4] to \(\mathrm{e}^{-\lambda(t-s)}w(s)\), we have that for any \(T>0\), \(\eta\in[0,\alpha-\varepsilon)\) with \(\alpha>\varepsilon>0\), \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t }^{(y)}\mathcal{A}w^{y}(s,0)\Big{|}\mathrm{d}s \lesssim 2^{-\eta j}\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}(t-s)^{- \frac{\eta\varepsilon}{\alpha}}\|w(s)\|_{\mathbf{B}^{\infty-1+\varepsilon}_ {\infty,\infty}}\mathrm{d}s\] \[\lesssim 2^{-\eta j}(\lambda+1)^{-\frac{\alpha-\eta-\varepsilon}{ \alpha}}\|w\|_{L^{\infty}_{T}\mathbf{B}^{\infty-1+\varepsilon}_{\infty,\infty }},\] where we used a change of variables and the definitions (1.8) of Gamma functions in the second inequality. The proof is finished. Now, we give the Proof of Lemma 3.5.: Notice that, by (3.16), for \(\theta\in[0,\alpha-\beta]\), \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t}^{(y)}f_{1} ^{y}(s,0)\Big{|}\mathrm{d}s\lesssim 2^{-(\alpha-\beta-\theta)j}(\lambda+1)^{- \frac{\theta}{\alpha}}\|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty, \infty}}\] and \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\Big{|}\mathcal{R}_{j}P_{s,t}^{(y)}f_{2} ^{y}(s,0)\Big{|}\mathrm{d}s\lesssim 2^{-(\alpha-(\beta+\theta))j}(\lambda+1)^{- \frac{\beta+\theta}{\alpha}}\|f_{2}\|_{L^{\infty}_{T}\mathbf{B}^{0}_{\infty, \infty}}.\] Moreover, by (3.17) with \(\eta=\alpha-\beta-\theta\), we have that for any \(\beta+\theta>\varepsilon>0\), \[\int_{0}^{t}\mathrm{e}^{-\lambda(t-s)}\big{|}\mathcal{R}_{j}P^{(y)}_{s,t} \mathcal{A}w^{y}(s,0)\big{|}\mathrm{d}s\lesssim 2^{-(\alpha-\beta-\theta)j}( \lambda+1)^{-\frac{\beta+\theta-\varepsilon}{\alpha}}\|w\|_{L^{\infty}_{T} \mathbf{B}^{\alpha-1+\varepsilon}_{\infty,\infty}}.\] Thus, taking supremum of \(y\) in (3.15), we have \[\|w(t)\|_{\mathbf{B}^{\alpha-\beta-\theta}_{\alpha,\infty}}=\sup_ {j\geqslant 1}2^{(\alpha-\beta-\theta)j}\|\mathcal{R}_{j}w(t)\|_{\infty} \lesssim(\lambda+1)^{-\frac{\beta}{\alpha}}\|f_{1}\|_{L^{\infty}_{T} \mathbf{B}^{\alpha-\beta}_{\alpha,\infty}}+(\lambda+1)^{-\frac{\beta+\theta}{ \alpha}}\|f_{2}\|_{L^{\infty}_{T}}\] \[\qquad\qquad\qquad\qquad+(\lambda+1)^{-\frac{\beta+\theta- \varepsilon}{\alpha}}\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-1+\varepsilon}_ {\infty,\infty}}. \tag{3.18}\] On the other hand, when \(f_{2}=0\), from (3.15), by (3.17) with \(\eta=\varepsilon<\alpha/2\), taking \(\vartheta=0,\alpha\) in (3.16), we have \[\|\mathcal{R}_{j}w(t)\|_{\infty}\lesssim(\lambda+1)^{-\frac{\alpha-2 \varepsilon}{\alpha}}2^{-\varepsilon j}\|w\|_{L^{\infty}_{T}\mathbf{B}^{ \alpha-1+\varepsilon}_{\infty,\infty}}+\left[(2^{\beta j}(\lambda+1)^{-1}) \wedge 2^{-(\alpha-\beta)j}\right]\|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta }_{\alpha,\infty}},\] which derives that \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-1+\varepsilon}_{\infty, \infty}}=\sum_{j\geqslant-1}\|\mathcal{R}_{j}w(t)\|_{\infty}\lesssim(\lambda+1 )^{-\frac{\beta-2\varepsilon}{\alpha}}\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha -1+\varepsilon}_{\infty,\infty}}+(\lambda+1)^{-\frac{\alpha-\beta}{\alpha}}\|f _{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\alpha,\infty}}. \tag{3.19}\] Now we estimate the term \(\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-1+\varepsilon}_{\infty,\infty}}\). Similarly, taking \(\eta=\varepsilon<1/2\) in (3.17), and using (3.16) with \(\vartheta=1-\beta-\varepsilon\) when \(i=1\) and \(\vartheta=1-\varepsilon\) when \(i=2\), we obtain that there is a constant \(c>0\) such that \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-1+\varepsilon}_{\infty, \infty}}\lesssim_{c}(\lambda+1)^{-\frac{1-2\varepsilon}{\alpha}}\|w\|_{L^{ \infty}_{T}\mathbf{B}^{\alpha-1+\varepsilon}_{\infty,\infty}}+(\lambda+1)^{ -\frac{1-\beta-\varepsilon}{\alpha}}\|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{- \beta}_{\alpha,\infty}}+(\lambda+1)^{-\frac{1-\varepsilon}{\alpha}}\|f_{2}\| _{L^{\infty}_{T}},\] which yields that for any \(\lambda\geqslant\lambda_{0}:=(2c)^{\alpha/(1-2\varepsilon)}\), \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-1+\varepsilon}_{\infty, \infty}}\lesssim(\lambda+1)^{-\frac{1-\beta-\varepsilon}{\alpha}}\|f_{1}\|_{L^ {\infty}_{T}\mathbf{B}^{-\beta}_{\alpha,\infty}}+(\lambda+1)^{-\frac{1- \varepsilon}{\alpha}}\|f_{2}\|_{L^{\infty}_{T}}.\] Substituting the above inequality into (3.18) and (3.19) with \(0<\varepsilon<1/3\), we have that for any \(\theta\in[0,\alpha-\beta]\), \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta-\theta}_{\alpha, \infty}}\lesssim(\lambda+1)^{-\frac{\beta}{\alpha}}\|f_{1}\|_{L^{\infty}_{T} \mathbf{B}^{-\beta}_{\alpha,\infty}}+(\lambda+1)^{-\frac{\beta+\theta}{\alpha} }\|f_{2}\|_{L^{\infty}_{T}},\] and when \(f_{2}=0\), \[\|w\|_{L^{\infty}_{T}\mathbf{B}^{0}_{\alpha,1}}\lesssim(\lambda+1)^{-\frac{ \alpha-\beta}{\alpha}}\|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\alpha, \infty}}.\] These complete the proof. #### 3.1.2. Distributional drift case Backing to (3.11), we establish the following a priori estimates. **Lemma 3.7**.: _Let \(\alpha\in(1,2)\), \(\beta\in(0,\frac{\alpha-1}{2})\), \(T>0\), \(\gamma>\frac{\alpha}{\alpha-1-2\beta}\), and \(b,f_{1},f_{2},u_{0}^{\lambda}\) be smooth functions. Assume \((\mathbf{H}^{\sigma})\) holds with constant \(c_{0}\). If \(u^{\lambda}\) is a classical solution to PDE (3.11), then for any \(\theta\in[0,\alpha-\beta]\), there is a constant \(c=c(d,\alpha,\beta,\theta,c_{0},T,\gamma,\|b\|_{L^{\infty}_{T}\mathbf{B}^{- \beta}_{\alpha,\infty}})>0\) such that for all \(\lambda\geqslant\lambda_{0}:=c(1+\|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{ \alpha,\infty}})^{\gamma}\),_ \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta-\theta}_{ \alpha,\infty}}\lesssim_{c}\|u_{0}^{\lambda}\|_{\mathbf{B}^{\alpha-\beta-\theta }_{\infty,\infty}}+(\lambda+1)^{-\frac{\beta}{\alpha}}\left(\|u_{0}^{\lambda} \|_{\mathbf{B}^{\alpha-\beta}_{\alpha,\infty}}+\|f_{1}\|_{L^{\infty}_{T} \mathbf{B}^{-\beta}_{\alpha,\infty}}\right)+(\lambda+1)^{-\frac{\beta+\theta}{ \alpha}}\|f_{2}\|_{L^{\infty}_{T}}, \tag{3.20}\] _and in particular when \(f_{2}=0\),_ \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{0}_{\infty,0}}\lesssim_{c}\|u^{ \lambda}_{0}\|_{\mathbf{B}^{0}_{\infty,1}}+(\lambda+1)^{-\frac{\alpha-\beta}{ \alpha}}\left(\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+\|f_{ 1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\right). \tag{3.21}\] _Moreover, when \(f_{2}=0\), for any \(\lambda\geqslant 0\),_ \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta}_{\infty,0}}\lesssim_{c }\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+\|f_{1}\|_{L^{ \infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}+(1+\|b\|_{\mathbf{B}^{-\beta}_{ \infty,0}})^{\gamma^{\frac{\alpha-\beta}{\alpha}}}\|u^{\lambda}\|_{L^{\infty} _{T}}. \tag{3.22}\] Proof.: By Lemma3.5, (2.6), and (2.5), for any \(\theta\in[0,\alpha-\beta]\) and \(\varepsilon>0\), there is a constant \(\widetilde{\lambda}_{0}\) such that for any \(\lambda\geqslant\widetilde{\lambda}_{0}\), \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta-\theta}_{ \infty,0}} \lesssim(\lambda+1)^{-\frac{\theta}{\alpha}}\left(\|u^{\lambda}_{0 }\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+\|f_{1}+b\cdot\nabla u^{\lambda}\|_ {L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}\right)+\|u^{\lambda}_{0}\|_{ \mathbf{B}^{\alpha-\beta-\theta}_{\infty,0}}+(\lambda+1)^{-\frac{\beta+\theta }{\alpha}}\|f_{2}\|_{L^{\infty}_{T}}\] \[\lesssim(\lambda+1)^{-\frac{\theta}{\alpha}}\left(\|u^{\lambda}_{0 }\|_{\mathbf{B}^{\alpha-\theta}_{\infty,0}}+\|f_{1}\|_{L^{\infty}_{T}\mathbf{B }^{-\beta}_{\infty,0}}+\|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}\| u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{1+\beta}_{\infty,0}}\right)\] \[\quad+\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta-\theta}_{ \infty,0}}+(\lambda+1)^{-\frac{\theta+\theta}{\alpha}}\|f_{2}\|_{L^{\infty}_{ T}}. \tag{3.23}\] Hence, in particular, taking \(\varepsilon\in(0,\alpha-1-2\beta]\) and \(\theta=\alpha-1-2\beta-\varepsilon\) in (3.23), we infer that for any \(\lambda\geqslant\widetilde{\lambda}_{0}\), \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{1+\beta\varepsilon}_{ \infty,0}} \lesssim\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+\|f_{1} \|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}+(\lambda+1)^{-\frac{\alpha- 1-2\beta-\varepsilon}{\alpha}}\|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{ \infty,0}}\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{1+\beta\varepsilon}_{ \infty,0}}\] \[\quad+(\lambda+1)^{-\frac{\beta}{\alpha}}\|f_{2}\|_{L^{\infty}_{ T}},\] which implies that for any \(0<\varepsilon<\alpha-1-2\beta\), there is a constant \(c>0\) such that for any \(\lambda_{0}:=c(\|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}+1)^{\alpha /(\alpha-1-2\beta-\varepsilon)}\geqslant\widetilde{\lambda_{0}}\), \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{1+\beta\varepsilon}_{ \infty,0}}\lesssim\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+ \|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}+(\lambda+1)^{-\frac {\beta}{\alpha}}\|f_{2}\|_{L^{\infty}_{T}}.\] Substituting this into (3.23), we obtain the desired estimates (3.20). As for (3.21), the proof is similar. For (3.22), note that when \(f_{2}=0\), for any \(\lambda\geqslant 0\), we get that \[\partial_{t}u^{\lambda}=\mathcal{L}^{(\alpha)}_{\sigma}u^{\lambda}-(\lambda+ \lambda_{0})u^{\lambda}+b\cdot\nabla u^{\lambda}+f_{1}+\lambda_{0}u^{\lambda}.\] Taking \(f_{2}=\lambda_{0}u^{\lambda}\) in (3.20) with \(\theta=0\), we deduce that \[\|u^{\lambda}\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta}_{ \infty,0}} \lesssim\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+\|f _{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}+(1+\lambda_{0})^{1-\frac{ \beta}{\alpha}}\|u^{\lambda}\|_{L^{\infty}_{T}}\] \[\lesssim\|u^{\lambda}_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,0}}+ \|f_{1}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,0}}+(1+\|b\|_{L^{\infty}_{ T}\mathbf{B}^{-\beta}_{\infty,0}})^{\gamma^{\frac{\alpha-\beta}{\alpha}}}\|u^{\lambda}\|_{L^{ \infty}_{T}},\] and the result follows. ### The proof of Theorem3.4 In this subsection, we prove the main result of this section. Recalling PDE (3.1) (i.e. the equation (3.11) with \(\lambda=0\)), we begin by establishing the following a priori estimates as a corollary of Lemma3.7. **Corollary 3.8**.: _Let \(\alpha\in(1,2)\), \(\beta\in(0,\frac{\alpha-1}{2})\), \(T>0\), \(\gamma>\frac{\alpha}{\alpha-1-2\beta}\) and \(b,f\) be smooth functions. Assume \((\mathbf{H}^{\sigma})\) holds with constant \(c_{0}\). If \(u\) is a classical solution to PDE (3.1), then there is a constant \(c=c(d,\alpha,\beta,c_{0},T,\gamma,\|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{ \infty,\infty}})>0\) such that_ \[\|u\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta}_{\infty,\infty}}\lesssim_{c}(1+ \|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}})^{\gamma}\left(\|u_{ 0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty}}+\|f\|_{L^{\infty}_{T}\mathbf{ B}^{-\beta}_{\infty,\infty}}\right). \tag{3.24}\] Proof.: Let \(\lambda_{0}\) be the same constant in Lemma 3.7 and \(u^{\lambda_{0}}\) be a classical solution to PDE (3.11) with \(\lambda=\lambda_{0}\) and \(u^{\lambda_{0}}(0)=u_{0}\). Denote by \(\mathrm{v}:=u-u^{\lambda_{0}}\). Then, \[\partial_{t}\mathrm{v}=\mathscr{L}_{\sigma}^{(\alpha)}\mathrm{v}+b\cdot \nabla\mathrm{v}+\lambda_{0}u^{\lambda_{0}},\ \ \mathrm{v}(0)=0.\] Thus, by maximum principle (see [23, Lemma 3.3]) and (3.21), we have \[\|\mathrm{v}\|_{L^{\infty}_{T}} \leqslant T\lambda_{0}\|u^{\lambda_{0}}\|_{L^{\infty}_{T}} \lesssim(1+\lambda_{0})^{1-\frac{\alpha-\beta}{\alpha}}\left(\|u_{0}\|_{ \mathbf{B}^{\alpha-\beta}_{\infty,\infty}}+\|f\|_{L^{\infty}_{T}\mathbf{B}^{ -\beta}_{\infty,\infty}}\right)\] \[\lesssim(1+\|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty, \infty}})^{\frac{2\beta}{\alpha}}\left(\|u_{0}\|_{\mathbf{B}^{\alpha-\beta}_{ \infty,\infty}}+\|f\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}} \right),\] which together with (3.22) yields \[\|u\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta}_{\infty,\infty}} \lesssim\|u_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty}}+\|f \|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}+(1+\|b\|_{\mathbf{B}^{ -\beta}_{\infty,\infty}})^{\gamma\frac{\alpha-\beta}{\alpha}}\|u\|_{L^{ \infty}_{T}}\] \[\lesssim(1+\|b\|_{\mathbf{B}^{-\beta}_{\infty,\infty}})^{\gamma} \left(\|u_{0}\|_{\mathbf{B}^{\alpha-\beta}_{\infty,\infty}}+\|f\|_{L^{\infty}_ {T}\mathbf{B}^{-\beta}_{\infty,\infty}}\right).\] The proof is finished. Now we are in a position to give Proof of the Theorem 3.4.: The uniqueness is obvious and we only need to show the existence. Let \(\rho_{m}(\cdot):=m^{d}\rho(m\cdot)\) be the usual mollifier with \(\rho\in C^{\infty}_{c}(\mathbb{R}^{d})\) and \[b_{m}(t):=b(t)*\rho_{m},\ \ f_{m}(t):=f(t)*\rho_{m}.\] Then, \(b_{m},f_{m}\in L^{\infty}_{T}C^{\infty}_{c}\) and for any \(\gamma>\beta\), \[\lim_{m\to\infty}\left(\|b_{m}-b\|_{L^{\infty}_{T}\mathbf{B}^{-\gamma}_{ \infty,\infty}}+\|f_{m}-f\|_{L^{\infty}_{T}\mathbf{B}^{-\gamma}_{\infty,\infty }}\right)=0.\] Moreover, \[\sup_{m}\|b_{m}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\leqslant \|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}},\ \ \sup_{m}\|f_{m}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\leqslant \|f\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}.\] Let \(u_{n}\) be the classical solution of PDE (3.1) with \((b,f)=(b_{m},f_{m})\). Then, by Corollary 3.8 and Lemma 3.1, we have \[\sup_{m}\left(\|u_{m}\|_{L^{\infty}_{T}\mathbf{B}^{\alpha-\beta}_{ \infty,\infty}}+\|\mathscr{L}_{\sigma}^{(\alpha)}u_{m}\|_{L^{\infty}_{T} \mathbf{B}^{-\beta}_{\infty,\infty}}\right)\lesssim\|f\|_{L^{\infty}_{T} \mathbf{B}^{-\beta}_{\infty,\infty}}+\|u_{0}\|_{\mathbf{B}^{\alpha-\beta}_{ \infty,\infty}}.\] By a standard argument, we obtain the existence. For (3.10), on the one hand, it follows from (3.8), (3.6), and (2.6) that \[\|u(t)-u(s)\|_{\mathbf{B}^{-\beta}_{\infty,\infty}}\lesssim_{c}|t-s|\Big{(}(1+ \|b\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}})\|u\|_{L^{\infty}_{T} \mathbf{B}^{-\beta}_{\infty,\infty}}+\|f\|_{L^{\infty}_{T}\mathbf{B}^{-\beta} _{\infty,\infty}}\Big{)}\] \[\mathop{\lesssim}\limits^{\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeqeq:eqeqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeq:eqeqeqeq:eqeqeq:eqeqeqeqeq:eqeqeqeq:eqeqeqeq:eqeqeqeqeq:eqeqeq:eqeqeqeqeq:eqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeqeqeq:eqeqeqeqeq:eqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeq:eqeqeqeqeqeqeqeq:eqeqeqeqeqeqeqeqeq: which yields \[\|u(T-(T-t))\|_{\mathbf{B}^{1+\beta+\varepsilon}_{\infty,\infty}} \lesssim\int_{0}^{T-t}(T-t-s)^{-\frac{1+2\beta+\varepsilon}{\alpha }}\|u(T-s)\|_{\mathbf{B}^{1+\beta+\varepsilon\varepsilon}_{\infty,\infty}} \mathrm{d}s\] \[\qquad+(T-t)^{-\frac{1+\beta+\varepsilon+\gamma}{\alpha}}\| \varphi\|_{\mathbf{B}^{-\gamma}_{\infty,\infty}},\] and then \[\|u(t)\|_{\mathbf{B}^{1+\beta+\varepsilon}_{\infty,\infty}} \lesssim(T-t)^{-\frac{1+\beta+\varepsilon+\gamma}{\alpha}}\|\varphi\|_{ \mathbf{B}^{-\gamma}_{\infty,\infty}} \tag{3.30}\] since Gronwall's inequality of Volterra type (see Lemma 2.8) with \(\frac{1+\beta+\varepsilon+\gamma}{\alpha},\frac{1+2\beta+\varepsilon}{\alpha }<1\). 1. _Case one:_\(\delta\in[0,\alpha-\beta)\). Substituting (3.30) to (3.29), by a change of variables, we obtain that for any \(0<\varepsilon<\alpha-1-2(\beta\vee\gamma)\), \[\|u(t)\|_{\mathbf{B}^{\delta}_{\infty,\infty}} \lesssim(T-t)^{-\frac{\delta+\gamma}{\alpha}}\|\varphi\|_{ \mathbf{B}^{-\gamma}_{\infty,\infty}}+\|\varphi\|_{\mathbf{B}^{-\gamma}_{ \infty,\infty}}\int_{t}^{T}(s-t)^{-\frac{\delta+\beta}{\alpha}}(T-s)^{-\frac{1 +\beta+\varepsilon+\gamma}{\alpha}}\mathrm{d}s\] \[\lesssim(T-t)^{-\frac{\delta+\gamma}{\alpha}}\|\varphi\|_{ \mathbf{B}^{-\gamma}_{\infty,\infty}}\left(1+(T-t)^{\frac{\alpha-1-2\beta- \varepsilon}{\alpha}}\int_{0}^{1}s^{-\frac{\delta+\beta}{\alpha}}(1-s)^{-\frac {1+\beta+\varepsilon+\gamma}{\alpha}}\mathrm{d}s\right)\] \[\lesssim(T-t)^{-\frac{\delta+\gamma}{\alpha}}\|\varphi\|_{ \mathbf{B}^{-\gamma}_{\infty,\infty}},\] where we used the definitions of Beta functions with \(0<\frac{\delta+\beta}{\alpha},\frac{1+\beta+\varepsilon+\gamma}{\alpha}<1\). 2. _Case two:_\(\delta=\alpha-\beta\). By taking \(\eta=\beta\) in (3.26), and \(\delta=0,\alpha\) respectively, we deduce that \[\|\mathcal{R}_{j}P_{t,s}(b\cdot\nabla u(s))\|_{\infty}\lesssim\left[(2^{- \alpha j}(s-t)^{-\frac{\alpha+\beta}{\alpha}})\wedge(s-t)^{-\frac{\beta}{ \alpha}}\right]\|b\cdot\nabla u(s)\|_{\mathbf{B}^{-\beta}_{\infty,\infty}},\] which derives that for any \(0<\varepsilon<\alpha-1-2(\beta\vee\gamma)\), \[\|\mathcal{R}_{j}P_{t,s}(b\cdot\nabla u(s))\|_{\infty}\lesssim\left[(2^{- \alpha j}(s-t)^{-\frac{1+\beta}{\alpha}})\wedge(s-t)^{-\frac{\beta}{\alpha}} \right](T-s)^{-\frac{1+\beta+\varepsilon+\gamma}{\alpha}}\|\varphi\|_{ \mathbf{B}^{-\gamma}_{\infty,\infty}}\] provided by (3.28) and (3.30). Consequently, by Lemma A.1, one sees that \[\left\|\mathcal{R}_{j}\int_{t}^{T}P_{t,s}(b\cdot\nabla u(s)) \mathrm{d}s\right\|_{\infty}\] \[\lesssim\|\varphi\|_{\mathbf{B}^{-\gamma}_{\infty,\infty}}\int_{ 0}^{T-t}\left[(2^{-\alpha j}s^{-\frac{\alpha+\beta}{\alpha}})\wedge s^{-\frac {\beta}{\alpha}}\right](T-t-s)^{-\frac{1+\beta+\varepsilon+\gamma}{\alpha}} \mathrm{d}s\] \[\lesssim\|\varphi\|_{\mathbf{B}^{-\gamma}_{\infty,\infty}}2^{- \bar{j}(\alpha-\beta)}(T-t)^{-\frac{1+\beta+\varepsilon+\gamma}{\alpha}} \lesssim\|\varphi\|_{\mathbf{B}^{-\gamma}_{\infty,\infty}}2^{-j(\alpha-\beta) }(T-t)^{-\frac{\alpha-\beta+\gamma}{\alpha}},\] which leads to the desired estimates trivially. Combining these two cases, we complete the proof. ## 4. Well-posedness of SDEs Fix \(\alpha\in(1,2)\). Let \((\Omega,\mathcal{F},(\mathcal{F})_{t\geqslant 0},\mathbb{P})\) be a filtered probability space satisfying the usual argument and \(\{L^{(\alpha)}_{t},t\geqslant 0\}\) be a non-degenerate symmetric \(\alpha\)-stable process on it. The expectation with respect to \(\mathbb{P}\) denoted by \(\mathbb{E}^{\mathbb{P}}\) or simply by \(\mathbb{E}\) if there is no confusion possible. In this section, assuming that the Levy measure of \(L^{(\alpha)}_{t}\) satisfies condition **(N)**, we focus on the well-posedness of SDEs with multiplicative noises and distributional drifts: \[\mathrm{d}X_{t}=b(t,X_{t})\mathrm{d}t+\sigma(t,X_{t-})\mathrm{d}L^{(\alpha)}_{t },\ X_{0}=x\in\mathbb{R}^{d}, \tag{4.1}\] where the drift coefficient \(b\) is a distribution belongs to the space \(L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}\) with \(\beta\in(0,\frac{\alpha-1}{2})\), and the diffusion coefficient \(\sigma:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\otimes\mathbb{R}^{d}\) satisfies the condition \((\mathbf{H}^{\sigma})\). Given \(T>0\), let \(\mathbf{D}:=D([0,T],\mathbb{R}^{d})\) be the space of all cadlag functions from \(\mathbb{R}_{+}\) to \(\mathbb{R}^{d}\). It is easy to see that, for any \(\varepsilon,t>0\), such a function has at most finitely many jumps of size greater than \(\varepsilon\) before time \(t\). In the sequel, \(\mathbf{D}\) is equipped with Skorokhod \(J_{1}\)-topology, making \(\mathbf{D}\) into a Polish space. Then we endow \(\mathbf{D}\) with the usual Borel \(\sigma\)-algebra \(\mathcal{B}(\mathbf{D})\). Denote all the probability measures over \(\mathbf{D}\) by \(\mathcal{P}(\mathbf{D})\). In this section, let the evaluation map \(\pi_{t}:\mathbf{D}\to\mathbb{R}^{d}\) be the coordinate process defined by \[\pi_{t}(\omega)=\omega_{t},\ \ \forall t\in[0,T],\ \omega\in\mathbf{D}. \tag{4.2}\] Define the natural filtration \[\mathcal{D}_{t}:=\sigma(\pi_{u},0\leqslant u\leqslant t).\] By [16, Proposition 7.1 of Chapter 3] or [28, c) of Theorem 1.14, p.328]), one sees that \(\mathcal{D}_{t}\) is equal to the Borel \(\sigma\)-algebra \(\mathcal{B}(D([0,t],\mathbb{R}^{d}))\). For any probability measure \(\mathbf{Q}\) on \(\mathbf{D}\), we denote the expectation with respect to \(\mathbf{Q}\) by \(\mathbf{E}^{\mathbf{Q}}\) or simply by \(\mathbf{E}\) if there is no risk of confusion. ### Generalized martingale problems In this subsection, we concentrate on showing our first main result Theorem 1.1. For fixed \(T>0\) and \(f\in C^{\infty}_{c}(\mathbb{R}^{d})\), we consider the following backward nonlocal parabolic equation: \[\partial_{t}u+\mathscr{L}^{(\alpha)}_{\sigma}u+b\cdot\nabla u+f=0,\ \ u(T)=0, \tag{4.3}\] where \(\mathscr{L}^{(\alpha)}_{\sigma}\) is defined by (1.6) with symmetric non-degenerate \(\nu^{(\alpha)}\) given by (2.9). **Remark 4.1**.: _According to Theorem 3.4, the backward nonlocal parabolic partial differential equation (4.3) has a unique solution \(u\in L^{\infty}_{T}\mathbf{B}^{\alpha-\beta}_{\infty,\infty}\) for any \(f\in C^{\infty}_{c}(\mathbb{R}^{d})\)._ **Definition 4.2** (Generalized martingale solution).: _Let \(\alpha\in(1,2)\) and \(T>0\). For \(x\in\mathbb{R}^{d}\), a probability measure \(\mathbf{Q}_{x}\in\mathcal{P}(\mathbf{D})\) is a generalized martingale solution of SDE (4.1) starting at \(x\) with coefficients \(b,\sigma\) if_ 1. \(\mathbb{P}(\pi_{0}=x)=1\) _._ 2. _For any_ \(t\in[0,T]\) _and_ \(f\in C_{c}^{\infty}(\mathbb{R}^{d})\)_, the process_ \[M_{t}:=u(t,\pi_{t})-u(0,x)+\int_{0}^{t}f(\pi_{r})\mathrm{d}r\] _is a martingale under_ \(\mathbf{Q}_{x}\) _with respect to filtration_ \((\mathcal{D}_{t})_{t\in[0,T]}\)_, where_ \(u\) _is the unique solution of PDE (_4.3_)._ _The set of all the generalized martingale solutions is denoted by \(\mathcal{M}_{b,\sigma}(x)\)._ First of all, we consider the following approximation SDE: \[\mathrm{d}X_{t}^{m}=b_{m}(t,X_{t}^{m})\mathrm{d}t+\sigma(t,X_{t-}^{m})\mathrm{ d}L_{t}^{(\alpha)},\ \ X_{0}^{m}=x\in\mathbb{R}^{d}, \tag{4.4}\] where \(b_{m}(t,x):=\varphi_{m}*b(t,x)\), \(m\in\mathbb{N}\), is a sequence of smooth functions such that \[\lim_{m\to\infty}\|b_{m}-b\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{- \vartheta}}=0,\ \ \forall\,\vartheta>\beta.\] Obviously, \[\|b_{m}\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\vartheta}}\lesssim\|b \|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\vartheta}}.\] It is well-known (for example, see [13, Theorem 1.1]) that there is a unique strong solution \(X^{m}(x)\) for SDE (4.4) on \((\Omega,\mathcal{F},(\mathcal{F})_{t\geqslant 0},\mathbb{P})\). Denote the law of \(X^{m}(x)\) by \[\mathbf{Q}_{x}^{m}:=\mathbb{P}\circ(X^{m}(x))^{-1}.\] It is well-known that \(\mathbf{Q}_{x}^{m}\) is also a martingale solution of SDE (4.4), denoted by \(\mathbf{Q}_{x}^{m}\in\mathcal{M}_{b_{m},\sigma}(x)\). Observe that for any good enough function \(g:\mathbb{R}^{d}\to\mathbb{R}\), \[\int_{\Omega}g(X_{t}^{m}(x,\omega))\mathbb{P}(\mathrm{d}\omega)=\int_{ \mathbf{D}}g(\pi_{t}(\omega))\mathbf{Q}_{x}^{m}(\mathrm{d}\omega),\] that is \(\mathbb{E}g(X_{t}^{m}(x))=\mathbf{E}^{\Omega_{x}^{m}}g(\pi_{t})\). The following result is used to estimate the term \(\int_{0}^{\cdot}b_{m}(r,X_{r}^{m}(x))\mathrm{d}r\), and then to show the tightness of \(\{X^{m}(x),m\geqslant 1\}\) in Theorem 4.5. In the sequel, we use \(X^{m}\) to denote \(X^{m}(x)\) when there is no chance of confusion. **Lemma 4.3**.: _Let \(T>0\) and \(\tau\) be a bounded stopping time satisfying \(0\leqslant\tau\leqslant\tau+\delta\leqslant T\) with some \(\delta\in(0,1)\). Assume that \(f\in L_{T}^{\infty}C_{c}^{\infty}\) be a bounded and smooth function. For any \(p>0\) and \(\beta\in(0,\frac{\alpha-1}{2})\), there is a constant \(c>0\) independent of \(m,\tau\) such that_ \[\mathbb{E}\left|\int_{\tau}^{\tau+\delta}f(r,X_{r}^{m})\mathrm{d}r\right|^{p} \lesssim_{c}\delta^{\frac{p(\alpha-\beta)}{\alpha}}\|f\|_{L_{T}^{\infty} \mathbf{B}_{\infty,\infty}^{-\beta}}^{p}. \tag{4.5}\] Proof.: Without loss of generality, we assume \(p\geqslant 2\). When \(p<2\), it is directly from the case \(p=2\) and Jensen's inequality. Recalling PDE (3.1), let \(u_{m}^{\lambda}\) be a classical solution of the backward equation \[\partial_{t}u_{m}^{\lambda}+\big{(}\mathcal{L}_{\sigma}^{(\alpha)}-\lambda \big{)}u_{m}^{\lambda}+b_{m}\cdot\nabla u_{m}^{\lambda}+f=0,\ \ u_{m}^{\lambda}(T)=0. \tag{4.6}\] By Ito's formula (cf. [27, Theorem 5.1 of Chapter II]), we have that for any stopping time \(\tilde{\tau}\leqslant T\), \[u_{m}^{\lambda}(\tilde{\tau},X_{\tilde{\tau}}^{m}) -u_{m}^{\lambda}(0,x)=\ \ \ \int_{0}^{\tilde{\tau}}(\partial_{r}u_{m})(r,X_{r}^{m})\mathrm{d}r+\int_{0}^{ \tilde{\tau}}b_{m}(r,X_{r}^{m})\cdot\nabla u_{m}^{\lambda}(r,X_{r}^{m}) \mathrm{d}r\] \[+\int_{0}^{\tilde{\tau}}\int_{\mathbb{R}^{d}}\Big{(}u_{m}(r,X_{r- }^{m}+\sigma(r,X_{r-}^{m})z)-u_{m}^{\lambda}(r,X_{r-}^{m})\Big{)}\tilde{N}( \mathrm{d}r,\mathrm{d}z)\] \[+\int_{0}^{\tilde{\tau}}\int_{\mathbb{R}^{d}}\Big{(}u_{m}^{ \lambda}(r,X_{r}^{m}+\sigma(r,X_{r-}^{m})z)-u_{m}^{\lambda}(r,X_{r}^{m})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\sigma(r,X_{r- }^{m})z\cdot\nabla u_{m}^{\lambda}(r,X_{r}^{m})\Big{)}\nu^{(\alpha)}(\mathrm{ d}z)\mathrm{d}r,\] which together with (4.6) derives that \[\int_{\tau}^{\tau+\delta}f(r,X_{r}^{m})\mathrm{d}r=\ u_{m}^{\lambda}(\tau,X_{ r}^{m})-u_{m}^{\lambda}(\tau+\delta,X_{\tau+\delta}^{m})+\lambda\int_{\tau}^{ \tau+\delta}u_{m}^{\lambda}(r,X_{r}^{m})\mathrm{d}r+M_{\tau,\tau+\delta} \tag{4.7}\] where we substituted \(\tau\) and \(\tau+\delta\) for \(\tilde{\tau}\), and \[M_{\tau,\tau+\delta}:=\int_{\tau}^{\tau+\delta}\int_{\mathbb{R}^{d}}\Big{(}u_ {m}^{\lambda}(r,X_{r-}^{m}+\sigma(r,X_{r-}^{m})z)-u_{m}^{\lambda}(r,X_{r-}^{m} )\Big{)}\tilde{N}(\mathrm{d}r,\mathrm{d}z).\] Notice that, by the condition \((\mathbf{H}^{\sigma})\), we have that \[|u_{m}^{\lambda}(r,X_{r-}^{m}+\sigma(r,X_{r-}^{m})z)-u_{m}^{\lambda}(r,X_{r-} ^{m})|\leqslant(2\|u_{m}^{\lambda}\|_{L_{T}^{\infty}})\wedge(c_{0}\|\nabla u_ {m}^{\lambda}\|_{L_{T}^{\infty}}|z|),\] which together with Kunita's inequality(cf. [31, Theorem 2.11]) yields that for any \(p\geqslant 2\), \[\mathbb{E}|M_{\tau,\tau+\delta}|^{p} \lesssim\mathbb{E}\Big{[}\Big{(}\int_{\tau}^{\tau+\delta}\int_{ \mathbb{R}^{d}}|u_{m}^{\lambda}(r,X_{r-}^{m}+\sigma(r,X_{r-}^{m})z)-u_{m}^{ \lambda}(r,X_{r-}^{m})|^{2}\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\Big{)}^{p/2} \Big{]}\] \[\quad+\mathbb{E}\int_{\tau}^{\tau+\delta}\int_{\mathbb{R}^{d}}|u _{m}^{\lambda}(r,X_{r-}^{m}+\sigma(r,X_{r-}^{m})z)-u_{m}^{\lambda}(r,X_{r-}^{m })|^{p}\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\] \[\lesssim\delta^{p/2}\left(\int_{\mathbb{R}^{d}}\Big{(}\|u_{m}^{ \lambda}\|_{L_{T}^{\infty}}\wedge(\|\nabla u_{m}^{\lambda}\|_{L_{T}^{\infty}} )|z|\Big{)}^{2}\,\nu^{(\alpha)}(\mathrm{d}z)\right)^{p/2}\] \[\quad+\delta\int_{\mathbb{R}^{d}}\Big{(}\|u_{m}^{\lambda}\|_{L_{T }^{\infty}}\wedge(\|\nabla u_{m}^{\lambda}\|_{L_{T}^{\infty}})|z|\Big{)}^{p}\, \nu^{(\alpha)}(\mathrm{d}z).\] Observe that by (2.7) and Bernstein's inequality, \[\|\nabla u_{m}^{\lambda}\|_{L_{T}^{\infty}}\lesssim\|\nabla u_{m}^{\lambda}\| _{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-(\alpha-\beta)+1}}^{1/2}\|\nabla u _{m}^{\lambda}\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{(\alpha-\beta)-1}} ^{1/2}\lesssim\|u_{m}^{\lambda}\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{ 2-(\alpha-\beta)}}^{1/2}\|u_{m}^{\lambda}\|_{L_{T}^{\infty}\mathbf{B}_{\infty, \infty}^{\alpha-\beta}}^{1/2}.\] Hence, by (3.20) and (3.21), one sees that there is a constant \(\lambda_{0}>1\) such that for any \(\lambda>\lambda_{0}\), \[\|\nabla u_{m}^{\lambda}\|_{L_{T}^{\infty}}\lesssim\lambda^{-\frac{(\alpha- \beta)-1}{\alpha}}\|f\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{\alpha- \beta}}\ \ \text{and}\ \ \|u_{m}^{\lambda}\|_{L_{T}^{\infty}}\leqslant\|u_{m}^{\lambda}\|_{\mathbf{B}_{ \infty,\infty}^{\alpha}}\lesssim\lambda^{-\frac{\alpha-\beta}{\alpha}}\|f\|_{L_ {T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}},\] where we used the fact \(\beta\in(0,\frac{\alpha-1}{2})\). Consequently, from (4.7), we obtain that for any \(\lambda>\lambda_{0}\), \[\mathbb{E}\left|\int_{\tau}^{\tau+\delta}f(r,X_{r}^{m})\mathrm{d}r\right|^{p} \lesssim(2+\lambda\delta)^{p}\|u_{m}^{\lambda}\|_{L_{T}^{\infty}}^{p}+\delta^{p/ 2}\left(\int_{\mathbb{R}^{d}}\Big{(}\|u_{m}^{\lambda}\|_{L_{T}^{\infty}}\wedge( \|\nabla u_{m}^{\lambda}\|_{L_{T}^{\infty}}|z|)\Big{)}^{2}\,\nu^{(\alpha)}( \mathrm{d}z)\right)^{p/2}\] \[+\delta\int_{\mathbb{R}^{d}}\left(\|u_{m}^{\lambda}\|_{L^{\infty}_{T} }\wedge(|\nabla u_{m}^{\lambda}\|_{L^{\infty}_{T}}|z|)\right)^{p}\nu^{(\alpha)}( \mathrm{d}z)\] \[\lesssim\left[\lambda^{-\frac{\alpha-\beta}{\alpha}p}(1+(\lambda \delta)^{p})+\delta^{p/2}\lambda^{-\frac{\alpha-\beta}{\alpha}p}\left(\int_{ \mathbb{R}^{d}}\left(1\wedge(\lambda^{\frac{2}{\alpha}}|z|^{2})\right)\nu^{( \alpha)}(\mathrm{d}z)\right)^{p/2}\right.\] \[\left.\quad+\delta\lambda^{-\frac{\alpha-\beta}{\alpha}p}\int_{ \mathbb{R}^{d}}\left(1\wedge(\lambda^{\frac{p}{\alpha}}|z|^{p})\right)\nu^{( \alpha)}(\mathrm{d}z)\right]\|f\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty, \infty}}^{p}.\] Observe that \(\|b_{m}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\lesssim\|b\|_{L ^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\). Therefore, by (2.12), we have \[\mathbb{E}\left|\int_{\tau}^{\tau+\delta}f(r,X_{r}^{m})\mathrm{d}r\right|^{p} \lesssim\lambda^{-\frac{p(\alpha-\beta)}{\alpha}}(1+(\lambda\delta)^{p}+( \lambda\delta)^{p/2}+\lambda\delta)\|f\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{ \infty,\infty}}^{p}.\] Taking \(\lambda=\lambda_{0}\delta^{-1}\), we get the desired estimates. **Remark 4.4**.: _By (4.5) and Chebyshev's inequality, one sees that for any \(R>0\),_ \[\lim_{R\to\infty}\sup_{m}\mathbb{P}\left(\sup_{t\in[0,T]}\left|\int_{0}^{t}f( r,X_{r}^{m})\mathrm{d}r\right|\geqslant R\right)=0. \tag{4.8}\] Now we give the tightness of \(\{X^{m},m\geqslant 1\}\). **Theorem 4.5**.: _The sequence \(\{X^{m}\}_{m\in\mathbb{N}}\) in \(\mathbf{D}\) is tight._ Proof.: Fix \(T>0\). Let \(\tau\) be a bounded stopping time satisfying \(0\leqslant\tau\leqslant\tau+\delta\leqslant T\) with \(\delta\in(0,1)\). By SDE (4.4), we have \[X_{\tau+\delta}^{m}-X_{\tau}^{m}=\int_{\tau}^{\tau+\delta}\sigma(r,X_{r-}^{m}) \mathrm{d}L_{r}^{(\alpha)}+\int_{\tau}^{\tau+\delta}b_{m}(r,X_{r}^{m})\mathrm{ d}r.\] For \(p\in[1,\alpha)\), by Chebyshev's inequality, Lemma 2.6, and Lemma 4.3, we have that for each \(R>0\), \[\mathbb{P}(|X_{\tau+\delta}^{m}-X_{\tau}^{m}|\geqslant R)\leqslant R^{-p} \mathbb{E}|X_{\tau+\delta}^{m}-X_{\tau}^{m}|^{p}\lesssim\delta^{p/\alpha}(\| \sigma\|_{L^{\infty}_{T}}^{p}+\|b_{m}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{ \infty,\infty}}^{p}), \tag{4.9}\] where the implicit constant in the last inequality is independent of \(m\), \(\tau\), and \(\delta\). Furthermore, \[\lim_{\delta\downarrow 0}\sup_{m}\sup_{\tau\leqslant T}\sup_{h\in[0,\delta]} \mathbb{P}(|X_{\tau+h}^{m}-X_{\tau}^{m}|\geqslant R)=0,\] provided by \(\|b_{m}\|_{L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\leqslant\|b\|_ {L^{\infty}_{T}\mathbf{B}^{-\beta}_{\infty,\infty}}\). Moreover, by Burkholder-Davis-Gundy's inequality for jump processes (see [39, Lemma 2.3] or [36, Theorem 1]) and (2.11), there is a constant \(c>0\) that only depends on \(T,\|\sigma\|_{L^{\infty}_{T}},\alpha,\nu^{(\alpha)}\) such that \[\mathbb{E}\Big{(}\sup_{t\in[0,T]}\left|\int_{0}^{t}\sigma(r,X_{r- }^{m})\mathrm{d}L_{r}^{(\alpha)}\right|\Big{)} \lesssim\Big{[}\mathbb{E}\Big{(}\int_{0}^{T}\int_{|z|\leqslant 1}\left| \sigma(r,X_{r-}^{m})z\right|^{2}\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\Big{)} \Big{]}^{1/2}\] \[\qquad+\mathbb{E}\Big{(}\int_{0}^{T}\int_{|z|>1}\left|\sigma(r,X_{ r-}^{m})z\right|\nu^{(\alpha)}(\mathrm{d}z)\mathrm{d}r\Big{)}\lesssim_{c},\] which together with (4.8), and by Chebyshev's inequality, yields that \[\lim_{R\to\infty}\sup_{m}\mathbb{P}\Big{(}\sup_{t\leqslant T}|X_{t}^{m}|\geqslant R \Big{)}=0. \tag{4.10}\] Hence, by (4.9), (4.10) and Aldous's criterion for tightness (cf. [28, Theorem 4.5 of Chapter VI, p.356] and [29, Lemma 16.12]), we conclude the proof. Now we are in a position to give Proof of Theorem 1.1.: **(Existence)** Since the sequence \(\{X^{m}(x)\}_{m\in\mathbb{N}}\) in \(\mathbf{D}\) is tight, there are a subsequence \(m_{i}\) and a probability measure \(\mathbf{Q}_{x}\) on \(\mathbf{D}\) such that \(\mathbf{Q}_{x}^{m_{i}}:=\mathbb{P}\circ(X^{m_{i}}(x))^{-1}\) converges to \(\mathbf{Q}_{x}\) weakly. Below, for simplicity of notations, we still denote this subsequence by \(\mathbf{Q}_{x}^{m}\). Next, we verify that the limit \(\mathbf{Q}_{x}\) is a solution of the generalized martingale problem in the sense of Definition 4.2, and then the existence is obtained. Fix \(T>0\) and \(f\in C_{c}^{\infty}(\mathbb{R}^{d})\). Recalling PDE (3.1), by Theorem 3.4, there is a unique solution \(u\in L_{T}^{\infty}\mathbf{Q}_{\infty,\infty}^{\alpha-\beta}\) solves the backward nonlocal parabolic equation (4.3). Let \(\pi_{t}:\mathbf{D}\to\mathbb{R}^{d}\) be the coordinate process defined by (4.2). Set \[M_{t}(\omega):=u(t,\pi_{t}(\omega))-u(0,x)+\int_{0}^{t}f(\pi_{r}(\omega)) \mathrm{d}r,\ \ \forall\omega\in\mathbf{D}. \tag{4.11}\] Our aim is to prove that \(M\) is a \((\mathcal{D}_{t})_{t\in[0,T]}\)-martingale under \(\mathbf{Q}_{x}\). Define \[M_{t}^{m}(\omega):=u_{m}(t,\pi_{t}(\omega))-u_{m}(0,x)+\int_{0}^{t}f(\pi_{r}( \omega))\mathrm{d}r,\ \ \forall\omega\in\mathbf{D}, \tag{4.12}\] where \(u_{m}\) is a classical solution of the backward equation: \[\partial_{t}u_{m}+\mathscr{L}_{\sigma}^{(\alpha)}u_{m}+b_{m}\cdot\nabla u_{m} +f=0,\ \ u_{m}(T)=0. \tag{4.13}\] We claim that \(M^{m}\) is a \((\mathcal{D}_{t})_{t\in[0,T]}\)-martingale under \(\mathbf{Q}_{x}^{m}\). Indeed, using Ito's formula (cf. [27, Theorem 5.1 of Chapter II]), one sees that \[u_{m}(t,X_{t}^{m}) -u_{m}(0,x)=\ \ \int_{0}^{t}(\partial_{r}u_{m})(r,X_{r}^{m}) \mathrm{d}r+\int_{0}^{t}b_{m}(r,X_{r}^{m})\cdot\nabla u_{m}(r,X_{r}^{m}) \mathrm{d}r\] \[+\int_{0}^{t}\int_{\mathbb{R}^{d}}\Big{(}u_{m}(r,X_{r-}^{m}+ \sigma(r,X_{r-}^{m})z)-u_{m}(r,X_{r-}^{m})\Big{)}\tilde{\mathcal{N}}(\mathrm{ d}r,\mathrm{d}z)\] \[+\int_{0}^{t}\int_{\mathbb{R}^{d}}\Big{(}u_{m}(r,X_{r}^{m}+ \sigma(r,X_{r-}^{m})z)-u_{n}(r,X_{r}^{m})\] \[-\sigma(r,X_{r-}^{m})z\cdot\nabla u_{n}(r,X_{r}^{m})\Big{)}v^{( \alpha)}(\mathrm{d}z)\mathrm{d}r.\] Thanks to (2.8), we have that the third term of the right hand of the above equality is a square-integrable \((\mathscr{F}_{t})\)-martingale under \(\mathbb{P}\) (cf. [27]). Observe that every path of \(X^{m}\) has at most countably many jumps on \(\mathbb{R}_{+}\). Thus, we obtain that \[M_{t}^{m}(\omega)\stackrel{{\eqref{eq:M_t}}}{{=}}u_{m}(t,\pi_{t }(\omega))-u_{m}(0,x)-\int_{0}^{t}(\partial_{r}+\mathscr{L}_{\sigma}^{(\alpha) }+b_{m}\cdot\nabla)u_{m}(r,\pi_{r}(\omega))\mathrm{d}r\] is an \((\mathcal{D}_{t})_{t\in[0,T]}\)-martingale under \(\mathbf{Q}_{x}^{m}\), which means \(\mathbf{Q}_{x}^{n}\in\mathcal{M}_{b_{m},\sigma}(x)\). _Step 1_. We state that for every \(0\leqslant s\leqslant t\) and any \(\mathcal{D}_{s}\)-measurable bounded continuous function \(\eta:\mathbf{D}\to\mathbb{R}\), \[\lim_{m\to\infty}\mathbf{E}^{\mathbf{Q}_{x}^{m}}[(M_{t}-M_{s})\eta]=0. \tag{4.14}\] In fact, by the definition of martingales, we have that for every \(0\leqslant s\leqslant t\), \[\mathbf{E}^{\mathbf{Q}_{x}^{m}}[(M_{t}^{m}-M_{s}^{m})\eta]=0,\] and then \[\mathbf{E}^{\mathbf{Q}_{x}^{m}}[(M_{t}-M_{s})\eta]=\mathbf{E}^{\mathbf{Q}_{x} ^{m}}[(M_{t}-M_{t}^{m})\eta]-\mathbf{E}^{\mathbf{Q}_{x}^{m}}[(M_{s}-M_{s}^{m}) \eta].\] Since for any \(\alpha-\beta>\varepsilon>0\), \(u_{m}\) converges to \(u\) in \(L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{\alpha-\beta-\varepsilon}\) and \(L^{\infty}([0,T]\times\mathbb{R}^{d})\), by (4.11) and (4.12), we get \[\sup_{\omega\in\mathbf{D}}|M_{t}(\omega)-M_{t}^{m}(\omega)|\leqslant 2\|u-u_{ m}\|_{L_{T}^{\infty}}\to 0,\ \ \text{as}\ \ m\to\infty,\] which implies (4.14). _Step 2_. In this step, we show that \(M\) is an \((\mathcal{D}_{t})_{t\in J(\pi)^{c}\cap[0,T]}\)-martingale under \(\mathbf{Q}_{x}\), where \[J(\pi) :=\{r>0:\mathbf{Q}_{x}(|\pi_{r}-\pi_{r-}|\neq 0)>0\}\] \[=\{r>0:\mathbf{Q}_{x}(\omega:r\in J(\omega))>0\}\] with (the set of discontinuity points of \(\omega\in\mathbf{D}\)) \[J(\omega):=\{t>0:|\omega_{t}-\omega_{t-}|\neq 0\},\ \ \forall\omega\in \mathbf{D}.\] By [28, 2.3 of Chapter VI, p.339], \(\pi_{r}\) is continuous at every point \(\omega\in\mathbf{D}\) such that \(r\notin J(\omega)\). Thus, for any \(t\notin J(\pi)\), \(u(t,\pi_{t})\) are continuous on \(\mathbf{D}\) for almost all \(\omega\in\mathbf{D}\). Moreover, for every \(\omega\in\mathbf{D}\), letting \(\{\omega^{(n)}\}\) be a sequence in \(\mathbf{D}\) and converge to \(\omega\) under Skorokhod \(J_{1}\)-topology, since \(f\in C_{c}^{\infty}(\mathbb{R}^{d})\), we also have \[\lim_{n\to\infty}f(\pi_{r}(\omega^{(n)}))\mathbf{1}_{J(\omega)^{c}}(r)=f(\pi_{ r}(\omega))\mathbf{1}_{J(\omega)^{c}}(r),\] which together with the fact that \(J(\omega)\) is at most countable and the dominated convergence theorem derives that the term \(\int_{0}^{t}f(\pi_{r}(\cdot))\mathrm{d}r\) is continuous on \(\mathbf{D}\) for any fixed \(t\). Therefore, for any \(t\notin J(\pi)\), by (4.12), the random variable \(M_{t}:\mathbf{D}\to\mathbb{R}\) is bounded and almost everywhere continuous on \(\mathbf{D}\). Consequently, by \(\mathbf{Q}_{x}^{m}\overset{w}{\to}\mathbf{Q}_{x}\) and (4.14), we obtain that for every \(s,t\notin J(\pi)\) and \(\mathcal{D}_{s}\)-measurable bounded continuous function \(\eta:\mathbf{D}\to\mathbb{R}\), \[\mathbf{E}^{\mathbf{Q}_{x}}[(M_{t}-M_{s})\eta]=\lim_{m\to\infty}\mathbf{E}^{ \mathbf{Q}_{x}^{m}}[(M_{t}-M_{s})\eta]=0, \tag{4.15}\] which implies our statement by approximation property of functions in \(L^{1}(\mathbf{D},\mathcal{D}_{s},\mathbf{Q}_{x})\) (see [29, Lemma 1.35, p.18]) and the dominated convergence theorem. _Step 3_. Due to the dominated convergence theorem, we obtain that (4.15) holds for all \(s,t\in[0,T]\) since \(M\) is a cadlag process and \(J(\pi)^{c}\) is dense in \([0,T]\) (cf. [28, Lemma 3.12 of Chapter VI] or [16, Lemma 7.7 of Chapter 3] ). Thus, \(\mathbf{Q}_{x}\in\mathcal{M}_{b,\sigma}(x)\). **(Uniqueness)** Let \(\mathbf{Q}_{1}\) and \(\mathbf{Q}_{2}\) be two solutions of the generalized martingale problem in the sense of Definition 4.2. Let \(u\) be the unique solution of the backward nonlocal PDE (4.3) with \(f\in C_{c}^{\infty}(\mathbb{R}^{d})\). By the definition of generalized martingale solutions, the processes \[M_{t}(\omega):=u(t,\pi_{t}(\omega))-u(0,x)-\int_{0}^{t}f(\pi_{r}(\omega)) \mathrm{d}r,\ \ \forall\omega\in\mathbf{D},\] are \((\mathcal{D}_{t})\)-martingales under \(\mathbf{Q}_{i},i=1,2\). Combining with PDE (4.3), we have \[\mathbf{E}^{\mathbf{Q}_{1}}\int_{0}^{T}f(\pi_{r})\mathrm{d}r=u(0,x)=\mathbf{E }^{\mathbf{Q}_{2}}\int_{0}^{T}f(\pi_{r})\mathrm{d}r,\] which implies that \(\mathbf{Q}_{1}\) and \(\mathbf{Q}_{2}\) have the same one-dimensional marginal distributions. Hence, by [16, Corollary 4.4.3] (or [40, Theorem 6.2.3, p.147]), we get \(\mathbf{Q}_{1}=\mathbf{Q}_{2}\) through the standard induction approach. The proof is finished. ### Stability of SDE with \(\mathbf{B}_{\infty,\infty}^{-\beta}\) drift By Theorem 1.1, for any \(\sigma\) satisfying (\(\mathbf{H}^{\sigma}\)) and \(b_{1},b_{2}\in L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}\) with some \(\beta\in(0,\frac{\alpha-1}{2})\), there are unique \(\mathbf{P}_{1}\in\mathcal{M}_{b_{1},\sigma}(x)\) and \(\mathbf{P}_{2}\in\mathcal{M}_{b_{2},\sigma}(x)\) respectively. Denote by \(\mathbf{P}_{i}(t):=\mathbb{P}_{i}\circ(\pi_{t})^{-1}\). Here is the other main result in this section. **Theorem 4.6** ( Stability estimates).: _Let \(T>0\) and \(\alpha\in(1,2)\). Assume \(\sigma\) satisfies (\(\mathbf{H}^{\sigma}\)) with constant \(c_{0}\). Then, for any \(\beta\in(0,\frac{\alpha-1}{2})\) and \(b_{1},b_{2}\in L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}\), there is a constant \(c=(d,\alpha,\beta,T,c_{0},\|b_{1}\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^ {-\beta}},\|b_{2}\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}})>0\) such that for any \(t\in[0,T]\),_ \[\|\mathbf{P}_{1}(t)-\mathbf{P}_{2}(t)\|_{\mathrm{var}}\lesssim_{c}\|b_{1}-b_{ 2}\|_{L_{T}^{\infty}\mathbf{B}_{\infty,\infty}^{-\beta}},\] _where \(\|\cdot\|_{\mathrm{var}}\) denotes the total variance of measures._ Proof.: Based on the uniqueness, we can assume \(b_{1},b_{2}\in L_{T}^{\infty}C_{b}^{\infty}\). Then, for each \(i=1,2\), it is well-known (see [13] for example) that, for any \(x\in\mathbb{R}^{d}\), there is a unique solutions \(X^{i}\), to the following classical SDE: \[X^{i}_{t}=x+\int_{0}^{t}b_{i}(r,X^{i}_{r})\mathrm{d}r+\int_{0}^{t}\sigma(r,X_{ r-})\mathrm{d}L_{r}^{(\alpha)}.\] It suffices to estimate \(\left|\mathbb{E}\varphi(X^{2}_{t})-\mathbb{E}\varphi(X^{1}_{t})\right|\) for any \(\varphi\in C_{b}^{\infty}(\mathbb{R}^{d})\) and \(t\in(0,T]\). Using Ito-Tanaka trick, let \(\varphi\in C_{b}^{\infty}(\mathbb{R}^{d})\) be the terminal condition of the following backward PDE: \[\partial_{r}u^{t}+\mathscr{L}_{\sigma}^{(\alpha)}u^{t}+b_{1}\cdot\nabla u^{t} =0,\quad u^{t}(t)=\varphi, \tag{4.16}\] where \(t\in(0,T]\), \(u^{t}\) is the shifted function \(u^{t}(r,x):=u(t-r,x)\), and \(\mathscr{L}_{\sigma}^{(\alpha)}\) is defined by (1.6). It follows (3.27) that for every \(\delta\in[0,\alpha-\beta]\), \[\|u^{t}(r)\|_{\mathbf{B}_{\infty,\infty}^{\delta}}\lesssim(t-r)^{-\frac{\delta }{\delta}}\|\varphi\|_{\infty}. \tag{4.17}\] By Ito's formula (cf. [27, Theorem 5.1 of Chapter II]), we have that for \(i=1,2\), \[u^{t}(t,X^{i}_{t})-u^{t}(0,x)= \int_{0}^{t}(\partial_{r}u^{t})(r,X^{i}_{r})\mathrm{d}r+\int_{0}^ {t}b_{i}(r,X^{i}_{r})\cdot\nabla u^{t}(r,X^{i}_{r})\mathrm{d}r\] \[\big{|}\mathbb{E}\varphi(X_{t}^{2})-\mathbb{E}\varphi(X_{t}^{1})\big{|} \lesssim\|b_{1}-b_{2}\|_{\mathbf{B}^{-\beta}_{\infty,\infty}}\int_{0 }^{t}r^{-\frac{\beta+\delta}{\alpha}}\|u^{t}(r)\|_{\mathbf{B}^{1+\beta+\delta}_ {\infty,\infty}}\mathrm{d}r\] \[\lesssim\|b_{1}-b_{2}\|_{\mathbf{B}^{-\beta}_{\infty,\infty}}\| \varphi\|_{\infty}\int_{0}^{t}r^{-\frac{\beta+\delta}{\alpha}}(t-r)^{-\frac{1 +\beta+\delta}{\alpha}}\mathrm{d}r\] \[=|b_{1}-b_{2}\|_{\mathbf{B}^{-\beta}_{\infty,\infty}}\|\varphi\| _{\infty}t^{1-\frac{1+2\beta+2\delta}{\alpha}}\int_{0}^{1}r^{-\frac{\beta+ \delta}{\alpha}}(1-r)^{-\frac{1+\beta+\delta}{\alpha}}\mathrm{d}r\] \[\leqslant\|b_{1}-b_{2}\|_{\mathbf{B}^{-\beta}_{\infty,\infty}}\|\varphi\|_{ \infty},\] where we used the definition of Beta function (1.7). This completes the proof. ## 5. Euler approximation ### Euler scheme for SDE with bounded drift Fix \(T>0\). In this subsection, we assume \(b(x)\) belongs to \(L^{\infty}(\mathbb{R}^{d})\) and consider the following SDE: \[X_{t}=x+\int_{0}^{t}b(X_{s})\mathrm{d}t+L^{(\alpha)}_{t}, \tag{5.1}\] and its Euler scheme: \(X_{0}^{n}=X_{0}=x\), \[X_{t}^{n}=x+\int_{0}^{t}b(X_{\phi_{n}(s)}^{n})\mathrm{d}s+L^{(\alpha)}_{t}, \tag{5.2}\] where \(n\in\mathbb{N}\), and \(\phi_{n}(t):=k/n\) for \(t\in[k/n,(k+1)/n)\) with \(k=0,1,2,....,n\). Define \(\mathbb{P}(t):=\mathbb{P}\circ(X_{t})^{-1}\), \(\mathbf{P}_{n}(t):=\mathbb{P}\circ(X_{t}^{n})^{-1}\). Note that, for any \(p\in(0,\alpha)\), \[\mathbb{E}[|X_{r}^{n}-X_{\phi_{n}(r)}^{n}|^{p}] \leqslant\mathbb{E}\Big{(}\|b\|_{L^{\infty}}n^{-1}+|L^{(\alpha)} _{r}-L^{(\alpha)}_{\phi_{n}(r)}|\Big{)}^{p}\] \[\leqslant(2^{p-1}\lor 1)(2\|b\|_{\infty}^{p}n^{-p}+\mathbb{E}[|L ^{(\alpha)}_{r}-L^{(\alpha)}_{\phi_{n}(r)}|^{p}])\overset{\eqref{eq:2.1}}{ \lesssim}\|b\|_{\infty}^{p}n^{-p}+n^{-p/\alpha}, \tag{5.3}\] where the implicit constant in inequality only depends on \(d,\alpha,p,T,\nu^{(\alpha)}\). This subsection is devoted to proving the following result. **Theorem 5.1**.: _Suppose that \(T>0\), \(\alpha\in(1,2)\), and \(b\in L^{\infty}(\mathbb{R}^{d})\). Assume \(\sigma\) satisfies \((\mathbf{H}^{\sigma})\) with constant \(c_{0}\). Let \(\beta\in(0,(\alpha-1)/2)\), and \(\varepsilon\in(0,\alpha-1)\), there is a constant \(c=c(d,\alpha,T,c_{0},\varepsilon,\beta,\|b\|_{L^{\infty}_{T}\mathbf{B}^{- \beta}_{\infty,\infty}})>0\) such that for any \(n\in\mathbb{N}\),_ \[\|\mathbf{P}(t)-\mathbf{P}_{n}(t)\|_{\mathrm{var}}\lesssim_{c}\left(\|b\|_{ \infty}^{\alpha-\beta}n^{-(\alpha-1-\beta)}+\|b\|_{\infty}n^{-\frac{\alpha- \beta-1}{\alpha}}+(1+\|b\|_{\infty}^{2})n^{-\frac{\alpha-1}{\alpha}}\right),\] _where \(\|\cdot\|_{\mathrm{var}}\) denotes the total variance of measures._ Proof.: By the weak uniqueness of SDEs (5.1) and (5.2), we assume \(b\in C^{\infty}_{b}(\mathbb{R}^{d})\). It suffices to estimate \[\left|\mathbb{E}\varphi(X_{t}^{n})-\mathbb{E}\varphi(X_{t})\right|\] for any \(\varphi\in C^{\infty}_{b}(\mathbb{R}^{d})\). Use Ito-Tanaka trick by considering the following backward PDE with terminal condition \(\varphi\in C^{\infty}_{b}(\mathbb{R}^{d})\): \[\partial_{s}u^{t}+\mathscr{L}^{(\alpha)}u^{t}+b\cdot\nabla u^{t}=0,\quad u^{t }_{t}=\varphi,\] where \(u^{t}\) is the shifted function \(u^{t}(s,x):=u(t-s,x)\) with \(0\leqslant s<t\leqslant T\), and \(\mathscr{L}^{(\alpha)}\) is the infinitesimal generator of \(L^{(\alpha)}_{t}\) (see (2.19)). It follows (3.27) that for any \(\beta\in(0,(\alpha-1)/2)\) and \(\delta\in[0,\alpha-\beta]\), \[\|u^{t}(s)\|_{\mathbf{B}^{s}_{\infty,\infty}}\lesssim(t-s)^{-\frac{\delta}{ \alpha}}\|\varphi\|_{\infty}, \tag{5.4}\] where the implicit constant in the above inequality on depends on \(d,\alpha,T,\delta,\beta,\|b\|_{L^{\infty}_{T}\mathbb{R}^{-\beta}_{\infty,\infty}}\). By the same argument as the one in the proof of Theorem 4.6, adopting Ito's formula (cf. [27, Theorem 5.1 of Chapter II]) to \(\mathbf{u}^{\prime}(s,X^{n}_{s})\), we have \[\mathbb{E}\varphi(X^{n}_{t})-\mathbb{E}\varphi(X_{t})= \,\mathbb{E}\int_{0}^{t}\Big{(}b(X^{n}_{\phi_{n}(r)})-b(X^{n}_{r} )\Big{)}\cdot\nabla u^{\prime}(r,X^{n}_{r})\mathrm{d}r\] \[= \,\mathbb{E}\int_{0}^{t}b(X^{n}_{\phi_{n}(r)})\cdot\Big{(} \nabla u^{\prime}(r,X^{n}_{r})-\nabla u^{\prime}(r,X^{n}_{\phi_{n}(r)})\Big{)} \,\mathrm{d}r\] \[+\int_{0}^{t}\Big{(}\mathbb{E}(b\cdot\nabla u^{\prime}(r))(X^{n}_ {\phi_{n}(r)})-\mathbb{E}(b\cdot\nabla u^{\prime}(r))(X^{n}_{r}))\Big{)}\, \mathrm{d}r\] \[=: \,\mathcal{I}_{1}+\mathcal{I}_{2}.\] \(\bullet\) For \(\mathcal{I}_{1}\), by Bernstein's inequality, (5.4) and (5.3), one sees that for any \(\delta\in(0,\alpha-1-\beta]\), \[|\mathcal{I}_{1}| \lesssim\|b\|_{\infty}\int_{0}^{t}\|\nabla u^{\prime}(r)\|_{ \mathbb{R}^{\delta}_{\infty,\infty}}\mathbb{E}|X^{n}_{\phi_{n}(r)}-X^{n}_{r}| ^{\delta}\mathrm{d}r\] \[\lesssim\|b\|_{\infty}\int_{0}^{t}(t-r)^{-\frac{1+\delta}{\alpha }}\mathbb{E}|X^{n}_{\phi_{n}(r)}-X^{n}_{r}|^{\delta}\mathrm{d}r\] \[\stackrel{{\eqref{eq:1}}}{{\lesssim}}\|b\|_{\infty}( \|b\|_{\infty}^{\delta}n^{-\delta}+n^{-\delta/\alpha})t^{\frac{\alpha-1-\delta }{\alpha}}\int_{0}^{1}r^{\frac{\alpha-1-\delta}{\alpha}-1}\mathrm{d}r.\] Consequently, we get that for for any \(\delta\in(0,\alpha-1-\beta]\), \[|\mathcal{I}_{1}|\lesssim\|b\|_{\infty}(\|b\|_{\infty}^{\delta}n^{-\delta}+n ^{-\delta/\alpha}). \tag{5.5}\] \(\bullet\) As for \(\mathcal{I}_{2}\), the estimate of \[\left|\mathbb{E}(b\cdot\nabla u^{\prime}(r))(X^{n}_{\phi_{n}(r)})-\mathbb{E} (b\cdot\nabla u^{\prime}(r))(X^{n}_{r}))\right|\] is the key ingredient. Using the Ito-Tanaka trick again, we consider the following equation: \[\partial_{s}w^{r}+\mathcal{L}^{(\alpha)}w^{r}=0,\quad w^{r}(r)=f, \tag{5.6}\] where \(w^{r}(s,x):=w(r-s,x)\) is the shifted function with \(0\leqslant s<r\leqslant T\), \(f\in C^{\infty}_{b}(\mathbb{R}^{d})\), and \(\mathcal{L}^{(\alpha)}\) is the infinitesimal generator of \(L^{(\alpha)}_{t}\) given by (2.19). In the last, readers will see that the terminal condition \(f\) is taken as \(b\cdot\nabla u^{\prime}(r)\). Applying Ito's formula (cf. [27, Theorem 5.1 of Chapter II]) to \(w^{r}(s,X^{n}_{s})\) and by (5.6), we have \[\mathbb{E}f(X^{n}_{r})=\mathbb{E}w^{r}(r,X^{n}_{r})=w^{r}(0,x)+\mathbb{E}\int _{0}^{r}b(X^{n}_{\phi_{n}(s)})\cdot\nabla w^{r}(s,X^{n}_{s})\mathrm{d}s,\] which drives that for any \(r_{2}>r_{1}\), \[\left|\mathbb{E}f(X_{r_{2}}^{n})-\mathbb{E}f(X_{r_{1}}^{n})\right| \leqslant\left|w(r_{2},x)-w(r_{1},x)\right|+\left|\mathbb{E}\int_{r _{1}}^{r_{2}}b(X_{\phi_{n}(s)}^{n})\cdot\nabla w(r_{2}-s,X_{s}^{n})\mathrm{d}s\right|\] \[\quad+\left|\mathbb{E}\int_{0}^{r_{1}}b(X_{\phi_{n}(s)}^{n})\cdot( \nabla w(r_{2}-s,X_{s}^{n})-\nabla w(r_{1}-s,X_{s}^{n}))\mathrm{d}s\right|\] \[\lesssim\] \[\quad+\|b\|_{\infty}\int_{0}^{r_{1}}\|\nabla w(r_{2}-s)-\nabla w( r_{1}-s)\|_{\infty}\mathrm{d}s\] Furthermore, based on (2.21) and (2.20), we get that for all \(0\leqslant r_{1}<r_{2}\leqslant T\), \[\left|\mathbb{E}f(X_{r_{2}}^{n})-\mathbb{E}f(X_{r_{1}}^{n})\right|\] \[\quad+\|b\|_{\infty}\|f\|_{\infty}\int_{0}^{r_{1}}\left[(r_{1}-s) ^{-\frac{1}{\alpha}}\wedge((r_{2}-r_{1})(r_{1}-s)^{-\frac{1+\alpha}{\alpha}}) \right]\mathrm{d}s.\] Noticing that \[\int_{0}^{r_{2}-r_{1}}s^{-\frac{1}{\alpha}}\mathrm{d}s+(r_{2}-r_{1})\int_{(r_ {2}-r_{1})\wedge r_{1}}^{r_{1}}s^{-\frac{1+\alpha}{\alpha}}\mathrm{d}s \lesssim(r_{2}-r_{1})^{-\frac{1}{\alpha}+1},\] one sees that \[\left|\mathbb{E}f(X_{r_{2}}^{n})-\mathbb{E}f(X_{r_{1}}^{n})\right|\lesssim\|f \|_{\infty}\left(\left[1\,\wedge((r_{2}-r_{1}){r_{1}}^{-1})\right]+\|b\|_{ \infty}(r_{2}-r_{1})^{-\frac{1}{\alpha}+1}\right). \tag{5.7}\] Taking the place of \(f(\cdot),r_{1}\), and \(r_{2}\) by \((b\cdot\nabla u^{t}(r))(\cdot),\phi_{n}(r)\), and \(r\) in (5.7) respectively, and noticing that, by (2.7) and Bernstein's inequality, \[\|\nabla u^{t}(r)\|_{\infty} \lesssim\|\nabla u^{t}(r)\|_{\mathbf{B}_{\infty,\infty}^{-(\alpha -\beta)+1}}^{1/2}\|\nabla u^{t}(r)\|_{\mathbf{B}_{\infty,\infty}^{(\alpha- \beta)-1}}^{1/2}\] \[\lesssim\|u^{t}(r)\|_{\mathbf{B}_{\infty,\infty}^{2-(\alpha-\beta )}}^{1/2}\|u^{t}(r)\|_{\mathbf{B}_{\infty,\infty}^{\alpha-\beta}}^{1/2} \overset{\eqref{eq:1}}{\lesssim}(t-r)^{-\frac{1}{\alpha}},\] we infer that \[|\mathscr{I}_{2}| \lesssim\|b\|_{\infty}\int_{0}^{t}\|\nabla u^{t}(r)\|_{\infty} \left(\left[1\,\wedge(n^{-1}r^{-1})\right]+\|b\|_{\infty}n^{-\frac{\alpha-1}{ \alpha}}\right)\mathrm{d}r\] \[\lesssim n^{-\frac{\alpha-1}{\alpha}}\|b\|_{\infty}\left(\int_{0}^{t}(t -r)^{-\frac{1}{\alpha}}r^{-\frac{\alpha-1}{\alpha}}\mathrm{d}r+\|b\|_{\infty}\right)\] \[\lesssim n^{-\frac{\alpha-1}{\alpha}}(\|b\|_{\infty}+\|b\|_{\infty}^{2}),\] where we used the fact \(|\phi_{n}(r)-r|\leqslant 1/n\), and the definitions of Beta functions (1.7). Finally, combining the above calculations and taking \(\delta=\alpha-1-\beta\) in (5.5), we establish the desired estimates. ### Proof of Theorem 1.3 Let \(X_{t}^{m}\) be the solution to the following classical SDE: \[X_{t}^{m}=x+\int_{0}^{t}b_{m}(X_{s}^{m})\mathrm{d}s+L_{t}^{(\alpha)}.\] Denote \(\mathbf{P}_{m}(t):=\mathbb{P}\circ(X_{t}^{m})^{-1}\). According to the stability estimates Theorem 4.6, for any \(\delta\in(0,\frac{\alpha-1-2-\beta}{2})\), we have \[\|\mathbf{P}_{m}(t)-\mathbf{P}(t)\|_{\mathrm{var}}\lesssim\|b-b_{m}\|_{ \mathbf{B}_{\infty,\infty}^{-\frac{\alpha-1}{2}+\delta}}\lesssim m^{-\frac{ \alpha-1-2\beta}{2}+\delta}\|b\|_{\mathbf{B}_{\infty,\infty}^{-\beta}}.\] Noting \(\|b_{m}\|_{\infty}\lesssim m^{\beta}\|b\|_{\mathbf{B}_{\infty,\infty}^{- \beta}}\) and \(m=n^{\gamma}\), by Theorem 5.1, we have \[\|\mathbf{P}_{m,n}(t)-\mathbf{P}(t)\|_{\mathrm{var}} \lesssim\|\mathbf{P}_{m,n}(t)-\mathbf{P}_{m}(t)\|_{\mathrm{var}}+ \|\mathbf{P}_{m}(t)-\mathbf{P}(t)\|_{\mathrm{var}}\] \[\lesssim\|b_{m}\|_{\infty}^{\alpha-\beta}n^{-(\alpha-1-\beta)}+\| b_{m}\|_{\infty}n^{-\frac{\alpha-\beta-1}{\alpha}}+(1+\|b_{m}\|_{\infty}^{2})n^{- \frac{\alpha-1}{\alpha}}+m^{-\frac{\alpha-1-2\beta}{2}+\delta}\] \[\lesssim n^{-(\alpha-1-\beta)+\beta\gamma(\alpha-\beta)}+n^{- \frac{\alpha-\beta-1}{\alpha}+\beta\gamma}+n^{-\frac{\alpha-1}{\alpha}+2\beta \gamma}+n^{\gamma(-\frac{\alpha-1-2\beta}{2}+\delta)}\] which converges to \(0\) as \(n\to\infty\) when \[0<\gamma<\frac{1}{\beta}\left(\frac{\alpha-1-\beta}{\alpha-\beta}\wedge\frac {\alpha-\beta-1}{\alpha}\wedge\frac{\alpha-1}{2\alpha}\right)=\frac{\alpha-1} {2\alpha\beta}.\] This completes the proof. ## Appendix A **Lemma A.1**.: _Let \(T>0\). For any \(0<\beta,\gamma<1<\alpha\), there is a constant \(c>0\) such that for all \(\lambda>0\) and \(t\in(0,T]\),_ \[\int_{0}^{t}\left[(\lambda s^{-\alpha})\wedge s^{-\beta}\right](t-s)^{-\gamma }\mathrm{d}s\lesssim_{c}\lambda^{\frac{1-\beta}{\alpha-\beta}}t^{-\gamma}.\] Proof.: First of all, by a change of variable, we have \[\int_{0}^{t}\left[(\lambda s^{-\alpha})\wedge s^{-\beta}\right](t-s)^{-\gamma }\mathrm{d}s=t^{1-\gamma-\beta}\int_{0}^{1}\left[(\lambda t^{-(\alpha-\beta)} s^{-\alpha})\wedge s^{-\beta}\right](1-s)^{-\gamma}\mathrm{d}s.\] Therefore, it is sufficient to show \[\mathcal{I}:=\int_{0}^{1}\left[(\lambda s^{-\alpha})\wedge s^{-\beta}\right]( 1-s)^{-\gamma}\mathrm{d}s\lesssim\lambda^{\frac{1-\beta}{\alpha-\beta}}.\] Indeed, for any \(0<\beta,\gamma<1<\alpha\), \[\mathcal{I} \lesssim\int_{0}^{\frac{1}{2}}\left[(\lambda s^{-\alpha})\wedge s ^{-\beta}\right]\mathrm{d}s+\left[(\lambda(\tfrac{1}{2})^{-\alpha})\wedge( \tfrac{1}{2})^{-\beta}\right]\int_{\frac{1}{2}}^{1}(1-s)^{-\gamma}\mathrm{d}s\] \[\lesssim\lambda^{\frac{1-\beta}{\alpha-\beta}}\int_{0}^{\frac{1} {2\lambda^{1/(\alpha-\beta)}}}r^{-\beta}\left[r^{-\alpha+\beta}\wedge 1\right] \mathrm{d}r+\lambda\wedge 1\] \[\lesssim\lambda^{\frac{1-\beta}{\alpha-\beta}}\int_{0}^{\infty}r^ {-\beta}\left[r^{-\alpha+\beta}\wedge 1\right]\mathrm{d}r+\lambda\wedge 1\lesssim \lambda^{\frac{1-\beta}{\alpha-\beta}},\] where we used the change of variable \(s=\lambda^{1/(\alpha-\beta)}r\) in the second inequality. This completes the proof. ### Acknowledgments We are deeply grateful to Prof. Fuke Wu for their valuable suggestions and for correcting some errors.
2305.00776
Characterizing Exceptional Points Using Neural Networks
One of the key features of non-Hermitian systems is the occurrence of exceptional points (EPs), spectral degeneracies where the eigenvalues and eigenvectors merge. In this work, we propose applying neural networks to characterize EPs by introducing a new feature -- summed phase rigidity (SPR). We consider different models with varying degrees of complexity to illustrate our approach, and show how to predict EPs for two-site and four-site gain and loss models. Further, we demonstrate an accurate EP prediction in the paradigmatic Hatano-Nelson model for a variable number of sites. Remarkably, we show how SPR enables a prediction of EPs of orders completely unseen by the training data. Our method can be useful to characterize EPs in an automated manner using machine learning approaches.
Md. Afsar Reja, Awadhesh Narayan
2023-05-01T11:39:03Z
http://arxiv.org/abs/2305.00776v3
# Characterizing Exceptional Points Using Neural Networks ###### Abstract One of the key features of non-Hermitian systems is the occurrence of exceptional points (EPs), spectral degeneracies where the eigenvalues and eigenvectors merge. In this work, we propose applying neural networks to characterize EPs by introducing a new feature - _summed phase rigidity_ (SPR). We consider different models with varying degrees of complexity to illustrate our approach, and show how to predict EPs for two-site and four-site gain and loss models. Further, we demonstrate an accurate EP prediction in the paradigmatic Hatano-Nelson model for a variable number of sites. Remarkably, we show how SPR enables a prediction of EPs of orders completely unseen by the training data. Our method can be useful to characterize EPs in an automated manner using machine learning approaches. _Introduction-_ In recent years, the exploration of non-Hermitian systems has been gaining wide interest [1; 2; 3; 4; 5; 6]. This is due to a large number of inherent rich physical phenomena, such as exceptional points (EPs), non-Hermitian skin effect (NHSE), non-Bloch band theory, exotic topological phases, and extended symmetry classes, which have no counterpart in the contemporary Hermitian realm. One of the most intriguing characteristics of non-Hermitian systems is the existence of EPs - spectral degeneracy points at which not only the eigenvalues merge but also the eigenstates coalesce simultaneously [7; 8]. At EPs, the Hamiltonian matrix becomes defective. EPs play an essential role in NHSE and topological phases of non-Hermitian systems [9]. In addition to harbouring numerous fundamentally interesting features, EPs have already given rise to several applications such as enhanced sensing at higher-order EPs [10], in optical microcavities [11], and directional lasing [12]. EPs have been realized in various experimental setups such as optics [13; 14], photonics [15], electric circuits [16; 17] and acoustics [18; 19]. Due to its diverse applications and unique learning capabilities, machine learning (ML) is rapidly being adopted by researchers as a novel tool. In particular, ML has the potential to uncover new physics without prior human assistance. The growing trend of the application of ML techniques is not only restricted to different areas of condensed matter physics [20], but are also being explored in other areas such as particle physics, cosmology, and quantum computing [21]. In past years, ML techniques are being applied with outstanding accuracy to study topological phases [22; 23; 24; 25] and topological invariants [26; 27], classification of topological phases [24; 28], and phase transitions [29] in the Hermitian realm. Very recently, the first applications of ML techniques to non-Hermitian systems have been undertaken. For instance, Narayan _et al._[30], Cheng _et al._[31] and Zhang _et al._[32] have undertaken the study of non-Hermitian topological phases using convolutional neural networks (NNs). Yu _et al._ have used an unsupervised method, namely diffusion maps, to explore such phases [33]. Non-Hermitian topological phases in photonics have been studied using manifold clustering [34]. Furthermore, Araki and co-authors have analysed NHSE in an ML framework [35]. Moreover, ML approaches have been recently explored to investigate various non-Hermitian experimental platforms. Examples include, physics-graph-informed ML to study second-order NHSE in topoelectrical circuits [36], principal component analysis and NNs to explore non-Hermitian photonics [37], and diffusion maps to analyse non-Hermitian knotted phases in solid state spin systems [38]. In this work, we propose and demonstrate the characterization of EPs using NNs. We introduce a new feature, which we term _summed phase rigidity_ (SPR), which allows an unambiguous characterization of higher-order EPs. As an illustration of our approach, starting with simple models, we have categorised the EPs in various systems with increasing complexity with the help of NNs. In particular, we show how EPs can be distinguished in two- and four-site models by means of NNs. Furthermore, our NN construction and SPR enables an accurate prediction of the order of the EPs. Finally, using the celebrated Hatano-Nelson model for a variable number of sites, we demonstrate an accurate prediction of EPs and show how SPR allows a prediction of EPs of orders completely unseen by the training data. Our approach is useful for the characterization of EPs in an automated manner using ML techniques. _Two-site model-_ We first consider the simple two-site non-Hermitian model with on-site gain and loss to illustrate our approach [see Fig. 1(a)]. The system is described by the following Hamiltonian [39] \[H_{2}=\begin{pmatrix}\omega_{0}+i\gamma&J\\ J&\omega_{0}-i\gamma\end{pmatrix}, \tag{1}\] where \(\omega_{0}\) is the onsite potential, \(J\) is the coupling between the sites as shown in Fig. 1(a), and the non-Hermiticity is introduced by the gain and loss term \(\pm i\gamma\). The eigenvalues are given by \(\lambda_{\pm}=\omega_{0}\pm\sqrt{J^{2}-\gamma^{2}}\). For simplicity, we choose \(\omega_{0}=0\), and the EP occurs at \(\gamma=\pm J\). Note that this model can host EPs of order two only. We briefly describe the concept of phase rigidity, based on which we will classify the EPs. The phase rigidity, \(r\), at any point in the parameter space of a given Hamiltonian is defined as [40; 41] \[r=\frac{\langle\Psi_{L}|\Psi_{R}\rangle}{\langle\Psi_{R}|\Psi_{R}\rangle}, \tag{2}\] where \(\Psi_{L}\) and \(\Psi_{R}\) are left and right eigenstates of the non-Hermitian Hamiltonian. Due to bi-orthogonality, \(r\) takes a value of zero at EPs and a value of unity far from the EP. Instead of \(r\), which ranges from zero to one, we propose to take the negative log of \(r\), i.e., \(-\ln|r|\). This change of scale essentially enhances the separation between EPs and non-EPs in the parameter space - this leads to a much-improved characterization of EPs as we will show. So at an EP, this quantity takes a large positive value, and far from an EP, it drops to zero. We choose 40,000 randomly generated points in the \(\gamma-J\) plane for our two-site model and compute \(-ln|r|\). Then, in a similar manner, we produced 10,000 points such that \(\gamma=J\) and combined them with the points from above. Among this generated data, we kept 10% as test data, and the rest are used as training data. We constructed a network consisting of two hidden layers with 16 and 4 neurons per layer, respectively. The total number of trainable parameters is 121. We used the rectified linear unit (ReLU) as the activation function for the hidden layers. The loss (mean squared error loss) curve for 200 epochs with a batch size of 32 is shown in Fig. 1(b). We note that convergence is quite fast (within nearly 50 epochs) and the loss for the test set does not fluctuate too much or deviate from the training set loss curve, indicating the sign of a good fit. After training is performed by optimizing various hyperparameters (see Table 1), we predict the order of EPs for the test set. On the other hand, for this simple illustrative case, we already know the true order of the EPs. We compare the true and the predicted results for the same data points in the \(\gamma-J\) plane, as shown in Fig. 1(c) and Fig. 1(d). We observe that our NN predictions are almost identical to the true values with an accuracy of 99%, based on which we can easily classify whether a given point is an EP or not. We also calculated the performance of the network by means of the \(R^{2}\) score. This is presented in Table 1 and is close to 0.968. _Four-site model_- Next, we consider a slightly more involved scenario in a four-site model, which may host both second and fourth-order EPs. The Hamiltonian reads [42] \[H_{4}=\begin{pmatrix}i\delta&p&0&0\\ p&i\gamma&q&0\\ 0&q&-i\gamma&p\\ 0&0&p&-i\delta\end{pmatrix}. \tag{3}\] Here \(p\) denotes the coupling between sites one and two, and between sites three and four, \(q\) is the coupling between sites two and three, \(\pm i\delta\) and \(\pm i\gamma\) are gain and loss terms, as shown schematically in Fig. 2 (a). The above Hamiltonian can host both second- and fourth-order EPs, depending on values of \(p\), \(q\), \(\delta\), and \(\gamma\) in the parameter space. For the rest of the discussion, we set \(\gamma=1\). The EP-2 lies on a surface in the parameter space satisfying the following condition \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Model} & No. of & Total & \multirow{2}{*}{ \begin{tabular}{c} Performance \\ (\(R^{2}\) score) \\ \end{tabular} } \\ & hidden layers & trained & \\ & (neurons) & parameters & \\ \hline \hline Two-site & 2 (16, 4) & 121 & 0.968 \\ Four-site & 2 (32, 16) & 673 & 0.974 \\ \(N\) sites & 2 (12, 4) & 105 & 0.998 \\ (Hatano-Nelson) & 2 (12, 4) & 105 & 0.998 \\ \hline \hline \end{tabular} \end{table} Table 1: **Details of NN structures.** The details of the constructed networks are presented for the different models. We note that for a fixed number (two) of layers to get a good \(R^{2}\) score (\(\approx 0.95\%\)), one needs to increase the number of total trainable parameters by increasing the number of neurons per layer. This is because of the increasing complexity of models. Figure 1: **Training of NN for the two-site model.** (a) Illustration of the two-site gain and loss non-Hermitian model. Here \(J\) is the coupling between the two sites and \(\pm i\gamma\) denotes the gain and loss terms. (b) A NN with 2 hidden layers was constructed. The hidden layers consist of 16 and 4 neurons, respectively. The total number of trainable parameters is 121. The loss curve with epoch is plotted during training. The blue represents the training dataset and the yellow for validation. It is clear from the loss curve that our model is not over-fitted. (c) \(-\ln(|r|)\) is plotted for the test dataset in \(J-\gamma\) plane on a color scale. A color bar values tending to unity represent an EP and low values denote non-EP. (d) The corresponding predicted value from the NN is shown. We see a good agreement between the actual and predicted values in (c) and (d). \[p^{4}+\delta^{2}+2\delta p^{2}-\delta^{2}q^{2}=0. \tag{4}\] On the other hand, we obtain an EP-4 when both Equation 4 and the following Equation 5 are satisfied simultaneously, \[1+\delta^{2}-2p^{2}-q^{2}=0. \tag{5}\] As compared to the previous case, i.e., a classification between an EP and a non-EP, here we need to distinguish three types of points, EP-2, EP-4, and non-EPs. To do this, we have designed a new feature, which we term the _summed phase rigidity_, SPR. This is defined as \[\text{SPR}=\sum_{k}-\ln|r_{k}|/\text{max}(-\ln|r_{k}|), \tag{6}\] where \(k\) runs over all eigenstates of the given Hamiltonian. For an \(N\)-th order EP, \(\text{SPR}\approx N\) at the EP. The steps for constructing SPR are illustrated in Table 2. We trained the network to classify the points in the parameter space according to the SPR value. As such, we set the following cutoffs for the SPR - an SPR value of nearly zero (\(0<\text{SPR}<1\)) corresponds to the ordinary point (non-EP), SPR between 1 and 2 reflects a second order EP (EP-2), and SPR above three corresponds to a fourth order EP (EP-4). Note that in practical computations, SPR takes a value lower than \(N\) for an \(N\)-th order EP. This is because our generated data points are chosen randomly and phase rigidity depends on the detuning parameters, \(\delta z\), which take the system away from an EP. In general, \(r\varpropto\delta z^{1/N}\). Nevertheless, we show that SPR can classify the corresponding points with an outstanding accuracy, and can thus be a very useful training feature. We generated a data set with 50,000 points, such that it is a mixture of 5000 EP-2 and 5000 EP-4 points, with the rest being non-EPs. As before, we keep aside 10% of data points as test data, and the rest is used as the training data. The data points are generated by randomly picking a point from the \((p,q,\delta)\) parameter space (with \(\gamma=1\)). The conditions for EPs are established using Eq. 4 and Eq. 5. Next, we calculate SPR at these points as described above. So finally the features for each data point become \(p\), \(q\), \(\delta\) and SPR. Our trained NN consists of two hidden layers with 32 and 16 neurons per layer respectively. The total number of trainable parameters is 673. In our training with batch size 32 and 200 epochs, the loss curve in Fig. 2 (b) shows that the NN is not overfitted and converges rapidly. After training, we predict the SPR value for the test data set and plot it alongside the actual value in Fig. 2(c) and (d). We note the excellent match between the actual and predicted data. In particular, there is a clear distinction between EPs of different orders. Overall, in estimating the actual order of EPs in the test data set, we achieved an accuracy of 97.5%. _Hatano-Nelson model with variable sites-_ Next, we present the generalized approach to characterize the EPs based on the concept of SPR. We consider the paradigmatic Hatano-Nelson model [43], which has served as the inception ground for many of the central ideas of non-Hermitian physics. The Hamiltonian for \(L\) sites under open boundary conditions is given by the following matrix \begin{table} \begin{tabular}{c c c c c c c c} Point type & \(J\) & \(\gamma\) & \(-\ln|r_{1}|\) & \(-\ln|r_{2}|\) & \(\frac{\text{rescaled}}{-\ln|r_{1}|\) & \(-\ln|r_{2}|}\) & SPR \\ \hline \hline Non-EP & 0.3 & 0.7 & 0.10 & 0.10 & 0.0036 & 0.0036 & 0.007 \\ EP-2 & 0.5 & 0.5 & 27.34 & 27.34 & 0.99 & 0.99 & 1.98 \\ EP-2 & 0.3 & 0.3 & 27.52 & 27.52 & 1 & 1 & 2 \\ Non-EP & 0.5 & 0.8 & 0.25 & 0.25 & 0.009 &.009 & 0.018 \\ \end{tabular} \end{table} Table 2: **Constructing SPR for two site model.** To construct the SPR, we first calculate \(-\ln|r|\) at the chosen points and then rescale by dividing them by the maximum value. After the rescaling, the sum of all rescaled \(-\ln|r|\) gives the SPR. Note that SPR value is nearly the order of the EP at an EP and nearly zero far from the EPs. Figure 2: **Training of NN for the four-site model.** (a) Schematic of the four-site non-Hermitian model. Here \(p\) and \(q\) are the coupling constants between the sites, and \(\pm i\gamma\) and \(\pm i\delta\) denote the gain and loss terms. (b) An NN with 2 hidden layers was constructed. The hidden layers consist of 32 and 16 neurons, respectively. The total number of trainable parameters is 673. The loss curve with epoch during training is shown as the blue points for the training data set and as the yellow points for validation set. It is clear from the loss curve that our NN is not overfitted. (c) Histogram of test data points from the parameter space \((p,q,\delta,\gamma)\) according to the SPR value. Points near SPR=0 are non-EP, i.e., ordinary points. SPR between one and two represents the second-order EPs and SPR between three and four represents the fourth-order EPs. (d) Same as (c) but now instead the predicted value of SPR from the NN is plotted. We see good agreement between actual and predicted values from (c) and (d). \[H_{L}=\begin{pmatrix}0&t-\gamma/2&0&...&0\\ t+\gamma/2&0&t-\gamma/2&...&0\\ 0&t+\gamma/2&0&...&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\.&.&.&.\end{pmatrix}. \tag{7}\] Here \(t\pm\gamma/2\) denote non-reciprocal hopping terms, i.e., the unequal left and right hopping strengths. In this model, higher order EPs occur at \(t=\pm\gamma/2\), and the order depends on the number of sites, i.e., \(L\). Therefore, if a point is an EP in the \(t-\gamma\) parameter space, then the SPR value will be nearly \(L\), for an \(L\) site model. For the sake of simplicity, we change the variables as \(t_{1}=t-\gamma/2\) and \(t_{2}=t+\gamma/2\). In this \(t_{1}-t_{2}\) parameter space, EPs occur when one of \(t_{1}\) or \(t_{2}\) is nearly zero and other one is nearly one. Here we trained the Hatano-Nelson model with different sites. We have trained the network with \(L=3\) and \(L=7\). We generated the data by choosing random points in the \(t_{1}-t_{2}\) parameter space and calculating the SPR. For \(L=3\), we generated 30,000 points, which are a mixture of 5,000 points such that \(t_{1}\approx 1\) and \(t_{2}\approx 0\), 5,000 points such that \(t_{2}\approx 1\) and \(t_{1}\approx 0\) and the rest are such that \(t_{1}\approx t_{2}\). Similarly, for \(L=7\), 30,000 data points were generated with a similar distribution. Therefore, finally, we have 60,000 data points with different SPR values, i.e., nearly 3, 7 or 0 based on the values of \(t_{1}\), \(t_{2}\) and \(L\). Among the generated data, 10% was kept as test data and the rest is used to train the NN. We construct an NN with two hidden layers with 12 and 4 neurons, respectively. The loss curve in Fig. 3 (b) shows that NN is not over-fitted. After training, we predict the distribution of the test data set with respect to SPR value as shown in Fig. 3 (d), where \(6<\text{SPR}<7\) represents EPs of order seven, \(2<\text{SPR}<3\) denotes EPs of order three, and \(\text{SPR}<1\) are non-EP points. By comparing to the actual SPR distribution of the test data set in Fig. 3 (c), we find good agreement between actual and predicted SPR value distribution with an accuracy close to 99.9%. We, therefore, conclude that EPs of different orders can be successfully classified by using SPR. Whether our NN has learned the generalised property of SPR is a crucial question. In order to understand this, we ask the already trained NN to predict the EPs and their orders in models that were not included during training. Fig. 4 (a) shows the distribution of true SPR data from the \((t_{1},t_{2},L=4)\) parameter space for \(L=4\) site Hatano-Nelson model. Here, \(3<\text{SPR}<4\) represents EPs of order 4, and \(0<\text{SPR}<1\) are non-EPs. Now we feed test this data set of \(L=4\) sites to the already trained NN and the distribution of data with the predicted SPR is shown in Fig. 4 (b). Remarkably, we find that the NN, which was trained for \(L=3\) and \(L=7\) sites is able to identify the EPs as well as their order with an accuracy greater than 99.9%. Figure 3: **Training of NN for variable site Hatano-Nelson model.** (a) Schematic diagram of the Hatano-Nelson model with the left-right asymmetric hopping (\(t\pm\gamma/2\)). (b) A neural network with 2 hidden layers was constructed. The hidden layers consist of 12 and 4 neurons, respectively. The total number of trainable parameters is 105. The loss curve shown as a function of the epoch indicates that NN is not over-fitted or under-fitted. (c) Distribution of test data set with actual SPR values for \(L=3\) and \(L=7\) sites. (d) Distribution of the same data sets as (c) but with predicted SPR values from the network. We note the agreement between actual and predicted SPR values. Figure 4: **SPR prediction of higher-order EPs of un-trained Hatano-Nelson models.** (a) Actual data distribution for \(L=4\) site Hatano-Nelson model. Here, \(3<\)SPR\(<4\) represents fourth-order EPs and SPR below one denotes ordinary points. (b) The corresponding predictions from the NN, which was trained for \(L=3\) and \(L=7\) sites only. (c) Actual data set for a mixture of \(L=4\) and \(L=6\) sites HN models, where \(3<\text{SPR}<4\) denotes an EP of order 4 and \(5<\text{SPR}\)\(<6\) represents an EP of order 6. (d) Corresponding predicted values from the NN. We discover that the NN is able to detect the order of the EP even when there is a mixture of data for EPs of different orders. Moreover, the NN is capable of predicting the EPs and their orders for those cases which were not included in either the training or the test data sets. To further examine the robustness of our NN, we make the scenario more complicated by adding random points from \((t_{1},t_{2},L=6)\) parameter space to the existing \(L=4\) data set. The resulting distribution of the SPR values is shown in Fig. 4 (c), where \(0<\text{SPR}<1\) are non-EPs, \(3<\text{SPR}<4\) are EPs of order 4 and \(5<\text{SPR}<6\) represent the EPs of order 6. The corresponding distribution for the predicted SPR value is shown in Fig. 4 d. We note that our trained NN for \(L=3\) and \(L=7\) sites is not only capable of predicting the EPs and their orders for an untrained model, but it also has the ability to do so for a mixture of data from different non-Hermitian models with EPs of varying orders with an accuracy close to 99.9%. _Discussion and summary-_ Before summarizing, we note here a few points regarding our NN models. First, we have followed a top-up approach to train the models. We started with complicated models, i.e., a large number of hyperparameters (number of layers, number of neurons in each layer, number of epochs, batch sizes and other hyperparameters) and reduced the complexity until the performance decreases significantly. So, the NN models presented here should offer an optimal balance between computational cost and performance. Second, we selected Adam as the adaptive learning rate optimization and ReLU as the activation function. Third, we trained each model for 200 epochs with a batch size of 32. Finally, we emphasize that our ML models are actually regression models which predict the SPR value, and the specific range of SPR is then used to classify different orders of EPs. In summary, we have successfully demonstrated how ML techniques can be used to predict EPs and their orders in various models with outstanding accuracy. For the two-site model, we trained an NN to classify EPs and non-EPs. Next, we proposed a new feature - termed SPR - and used it to distinguish between EPs of different orders. We then generalized this procedure to the celebrated Hatano-Nelson model with variable sites. Remarkably, we found that our NN models were able to predict the true order of EPs with accuracy greater than 99% even in cases with EPs of orders completely unseen by the training data. Looking ahead, our work may open up interesting avenues for future explorations. Our techniques can be useful for studying the parametric dependence of EPs in higher dimensions, which can become quite intricate, especially in cases such as anisotropic EPs [44]. The behavior of EPs can differ based on the symmetries present in the system [45]. We envisage that generalizations of our framework may assist in characterizing nontrivial behavior in such scenarios. Additionally, EPs have intriguing connections to topological phases of non-Hermitian systems, making our method potentially useful for studying their topological properties. We hope our work motivates these promising developments. _Acknowledgments-_ We have used Python [46] and TensorFlow [47] for our computations, and we are grateful to the developers. M. A. R. would like to thank Sourav Mal for useful discussions, and would also like to thank Smoky, his cat, for being a calming presence when he was writing this paper. M. A. R. is supported by a graduate fellowship of the Indian Institute of Science. A. N. acknowledges a start-up grant from the Indian Institute of Science.
2305.16114
Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning
Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals. Different from current reconstruction-guided generative models and transformation-based contrastive models, we devise novel data-driven supervision for tabular data by introducing a characteristic -- scale -- as data labels. By representing varied sub-vectors of data instances, we define scale as the relationship between the dimensionality of original sub-vectors and that of representations. Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training. This paper further proposes a scale learning-based anomaly detection method. Supervised by the learning objective of scale distribution alignment, our approach learns the ranking of representations converted from varied subspaces of each data instance. Through this proxy task, our approach models inherent regularities and patterns within data, which well describes data "normality". Abnormal degrees of testing instances are obtained by measuring whether they fit these learned patterns. Extensive experiments show that our approach leads to significant improvement over state-of-the-art generative/contrastive anomaly detection methods.
Hongzuo Xu, Yijie Wang, Juhui Wei, Songlei Jian, Yizhou Li, Ning Liu
2023-05-25T14:48:00Z
http://arxiv.org/abs/2305.16114v1
# Fascinating Supervisory Signals and Where to Find Them: ###### Abstract Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals. Different from current reconstruction-guided generative models and transformation-based contrastive models, we devise novel data-driven supervision for tabular data by introducing a characteristic - _scale_ - as data labels. By representing varied sub-vectors of data instances, we define _scale_ as the relationship between the dimensionality of original sub-vectors and that of representations. Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training. This paper further proposes a scale learning-based anomaly detection method. Supervised by the learning objective of scale distribution alignment, our approach learns the ranking of representations converted from varied subspaces of each data instance. Through this proxy task, our approach models inherent regularities and patterns within data, which well describes data "normality". Abnormal degrees of testing instances are obtained by measuring whether they fit these learned patterns. Extensive experiments show that our approach leads to significant improvement over state-of-the-art generative/contrastive anomaly detection methods. Machine Learning, ICML ## 1 Introduction Anomaly detection, the task of discovering exceptional data that deviate significantly from the majority (Aggarwal, 2017), has been successfully applied in many real-world domains when there is a need to identify both negative and positive rare events (e.g., diseases, cyberspace intrusions, financial frauds, industrial faults, and marketing opportunities). Deep learning has shown strong modeling capability, which enables deep anomaly detection methods to yield drastic performance improvement over traditional methods (Pang et al., 2021). Due to the unsupervised nature of anomaly detection, _designing deep anomaly detection models is a journey of finding reasonable supervisory signals_. Many deep anomaly detection methods (Xia et al., 2015; Chen et al., 2017; Zhou and Paffenroth, 2017; Liu et al., 2019; 2021) construct various kinds of autoencoders, generative adversarial networks, or prediction models. Their learning objectives are adapted to anomaly detection by treating reconstruction errors of incoming data as abnormal degrees. These generative methods are intuitive and show favorable performance on several popular benchmarks. However, as has been discussed in (Larsen et al., 2016; Ruff et al., 2018; Wang et al., 2022), one imperfection is that their learning target is primarily designed to reconstruct/generate the whole data. That is, they are forced to focus on reducing errors in each fine-grained point, but overemphasizing low-level details may make the model hard to converge when the normal class is complicated. Besides, some underlying hard anomalies can be only identified by investigating high-level pattern information in inlier data. To address this limitation, recent efforts (Golan and El-Yaniv, 2018; Li et al., 2021; Wang et al., 2022; Ristea et al., 2022) have been made to liberate deep anomaly detection from the above reconstruction-based pipeline. They define various transformations and design pretext tasks, learning to classify, compare, or map these transformations. They either use the learned representations for independent abnormality estimators or utilize loss values as anomaly scores. These models can embed high-level semantic information into the learned representations, thus leading to stronger anomaly detectors with promising detection accuracy. However, most popular transformation operations (e.g., rotations, cropping, and flip) can be only applied to image data. As for non-perceptual tabular data, it is still a non-trivial task to define suitable supervisory signals to actuate deep learning models. To this end, we introduce a new data characteristic - _scale_ - to devise a novel kind of data-driven supervision. Generally, scale indicates the ratio between the real size of something and its size on a map, model, or diagram_. This naturally inspires us to define the scale concept in tabular data as: **Definition 1.1** (_Scale in Tabular Data_).: Sub-vectors of tabular data instances are transformed to representations. Scale is defined as the mathematical relationship between the dimensionality of sub-vectors and that of the representations, which indicates the level of detail in representations. Figure 1 delineates a toy example. Scales serve as labels attached to these transformed data, thereby supplying labeled data for driving neural networks on tabular data. However, it is still challenging to harness these labeled data. Due to the feature diversity, some sub-vectors with lower dimensionality are with more details than higher-dimensional sub-vectors. Also, some randomly sampled subspaces might be irrelevant or even noisy. These notorious problems suggest that it is unfeasible to define canonical proxy tasks like classification or simple prediction. Instead, we define scale learning as follows. **Definition 1.2** (_Scale Learning_).: Each individual data sample in scale learning is defined as a group of representations transformed from varied sub-vectors of a data instance. The predictions and corresponding labels are converted to two distributions, and scale learning is defined as a distribution alignment task. Through optimizing distributions, the learning model is forced to focus on the relative ranking of scales rather than raw absolute values, and the increased sampling times also dilute ineffective irrelevant/noisy subspaces. Our model essentially learns the listwise ranking of representations transformed from varied subspaces of each original data instance, during which our model can capture inherent regularities and patterns related to the data structure. This is a novel kind of data-driven supervision that is different from current point-wise generative methods and discriminative models based on classification, comparison, or mapping. Based on this supervision, this paper introduces a Scale Learning-based deep Anomaly Detection method (termed SLAD). Concretely, SLAD first specifies a transformation function to represent sub-vectors and a labeling function to calculate data scales. After creating scale-based labeled data, SLAD performs scale learning, optimized by a distributional divergence-based loss function. Through this proxy task, SLAD embeds high-level information (i.e., underlying regularities and patterns related to the data structure) into the learned scale-based ranking mechanism. It is similar to self-supervised learning models in vision and natural language domains that embed data semantics into representations. Such high-level information helps our model to tame data complexity and reveal hard anomalies. We extend the inlier-priority property proposed in (Wang et al., 2022) to our model. That is, due to the imbalanced nature of anomaly detection, the learning process can prioritize inliers, and the learned regularities reflect "normality" shared in inliers. Anomalies, by definition, show deviated behaviors, and they cannot comply with these learned models. Hence, for identifying anomalies, errors computed by the loss function are directly exploited to indicate abnormal degrees. Our contributions are summarized as follows: * Conceptually, we introduce the scale concept in tabular data. By defining scale learning as a distribution alignment task, we appropriately harness scale-based labeled data to actuate neural networks for tabular data. Essentially, we devise a novel kind of data-driven supervision, and neural networks can model intrinsic regularities pertaining to the data structure. * Methodologically, we propose SLAD, a scale learning-based deep anomaly detection method. The loss values are directly exploited to indicate abnormal degrees, thus allowing SLAD to produce anomaly scores in an end-to-end fashion. Our method also contributes a novel self-supervised strategy to other tasks like representation learning in tabular data. * Theoretically, we analyze the shape of the created data sample in scale learning to ensure its effectiveness in revealing anomalies. To back up the application of scale learning in anomaly detection, we examine gradient magnitude to illustrate the inlier-priority property. * Empirically, extensive experiments validate both the contributions in detection accuracy and the superiority in handling complicated training data. For example, SLAD raised the state-of-the-art AUC-PR from 0.82 to 0.92 (+10 points) on a popular _Thyroid_ benchmark. Figure 1: A toy example of scales in tabular data. For a tabular data instance described by four features, sub-vectors with varied feature subspaces are randomly sampled from the original space and then transformed into \(h\)-dimensional representations. Scale is computed as the mathematical relationship \(G\) between sub-vector length and the representation dimensionality \(h\). ## 2 Related work Deep learning-empowered anomaly detection has garnered much interest recently (Pang et al., 2021; Ruff et al., 2021). Due to the unsupervised nature of anomaly detection, without readily accessible labeled training data, finding supervisory signals becomes one crucial step to fuel deep learning models for anomaly detection. This section reviews how existing studies define their learning tasks. One typical pipeline is based on generative methods. They take reconstruction as the learning objective to construct various kinds of autoencoders, generative adversarial networks, or prediction models and treat reconstruction errors as anomaly scores (Chen et al., 2017; Zhou and Paffenroth, 2017; Liu et al., 2019, 2021; Wang et al., 2021). Albeit intuitive, these methods overemphasize fine-grained reconstruction errors at the point-wise level, and they may fail to access high-level semantic information. An alternative manner is to use one-class classification to obtain a model (e.g., hypersphere or hyperplane) that can accurately describe the "normality". Many anomaly detectors (Ruff et al., 2018; Goyal et al., 2020; Zhang and Deng, 2021; Liznerski et al., 2021) train neural networks to learn a new representation space by posing one-class constraints. However, the underlying one-class assumption might be vulnerable since there is often more than one prototype in inliers. Besides, some methods resort to additional label information. Self-training models exploit iteratively predicted results of training data as supervisory signals while updating the model parameters (Pang et al., 2020; Qiu et al., 2022), whereas this process might be disturbed by mislabeled data. It is also noteworthy that a recent method named outlier exposure introduces labeled data from other datasets, thus forming synthetically labeled data (Hendrycks et al., 2019). The success of contrastive self-supervised learning in vision and natural language domains sheds light on the potential of discriminative models for embedding rich semantic information into representations. By using various transformation operations in image data (e.g., rotations, cropping, flip, cutout, and interpolation), many insightful approaches (Golan and El-Yaniv, 2018; Tack et al., 2020; Sehwag et al., 2021; Li et al., 2021; Wang et al., 2022; Ristea et al., 2022) create different views of initial data and employ classification, comparison, or mapping as pretext tasks. To identify anomalies, these methods perform independent abnormality measurements upon the learned representations or directly leverage the loss function. However, it is still non-trivial to define transformation operations for non-image data. There are very limited attempts that generalize the above contrastive strategy to tabular data. GOAD (Bergman and Hoshen, 2020) is the pioneer transformation-based method that can handle non-image data, which generalizes the spatial transformation to random affine transformation. NeuroL (Qiu et al., 2021) employs learnable neural transformations and proposes a noise-free, deterministic contrastive loss. The literature (Shenkar and Wolf, 2022) learns mappings that maximize the mutual information between each sampled sub-vector and the part that is masked out. We finally review a related field, i.e., self-supervised pretraining for tabular data. These models also perform contrastive learning upon different views created by corrupting random feature sub-spaces based on respective empirical marginal distribution (Bahri et al., 2022) or feature correlations (Yao et al., 2021). In addition to the corruption, the study (Yoon et al., 2020) proposes "mask estimator" and "feature estimator" heads on top of the encoder state. ## 3 Scale Learning for Anomaly Detection **Problem Formulation.** Let \(\mathcal{X}\) be the input tabular data described by a \(D\)-dimensional feature space. Each data instance \(\mathbf{x}\in\mathcal{X}\) is a vector \(\mathbf{x}\in\mathbb{R}^{D}\). By training on \(\mathcal{X}\), a deep anomaly detection model builds a scoring function \(\tau:\mathbb{R}^{D}\mapsto\mathbb{R}\) to quantitatively measure abnormal degrees of incoming data instances. **Overview.** Figure 2 depicts the overall framework of SLAD. We take one data instance \(\mathbf{x}\) as an example to illustrate the procedure of SLAD. There are two main components in SLAD. _The creation of scale-based supervisory signals_ consists of a transformation function \(T\) and a labeling function \(G\), which respectively define how to transform sub-vectors of an original data instance \(\mathbf{x}\) to representations \(\mathbf{U}\) and how to compute scales as labels \(\mathbf{y}\). SLAD treats each \(\mathbf{U}\) matrix that contains \(c\) transformed vectors as an individual training sample, and labeled data \(\mathcal{O}\times\mathcal{Y}=\{(\mathbf{U}_{j},\mathbf{y}_{j})\}_{j=1}^{r}\) offer supervisory signals for neural network training. In terms of _scale learning_, we construct a neural network \(\Phi\), and network parameters are optimized via a distribution alignment loss function \(L\). ### The Creation of Scale-based Supervisory Signals **Transformation Function \(T\).**\(T\) yields representations of sub-vectors. A unified \(h\)-dimensional representation frame is set since these transformed data serve as training samples for downstream scale learning. \(T\) can be also understood as a data preprocessing step. Some popular padding methods or dimensionality reduction techniques may change information contained in original sub-vectors. Instead, SLAD employs neural transformation to define \(T\). Complicated neural transformations with non-linear activation may also modify the intrinsic data structure of the input. Random linear projection is a simple yet effective feature mapping technique, which can achieve dimensionality modification. Thus, \(T\) is defined as simple feed-forward layers that are randomly initialized, and each feature subspace corresponds to a transformation layer. For a \(\nu\)-dimensional sub-vector \(\mathbf{x}_{(\mathcal{S}_{i})}\!\in\!\mathbb{R}^{\nu}\), its representation is obtained via a weight matrix \(\mathbf{W}_{\nu}\in\mathbb{R}^{h\times\nu}\) and bias \(b\in\mathbb{R}^{h}\), i.e., \[T(\mathbf{x}_{(\mathcal{S}_{i})})=\mathbf{W}_{\nu}\mathbf{x}_{(\mathcal{S}_{i} )}+b. \tag{1}\] For \(c\) randomly sampled sub-vectors of a data instance \(\mathbf{x}\), their transformations are denoted in a matrix \(\mathbf{U}\!\in\!\mathbb{R}^{c\times h}\). \(\mathbf{U}\) is treated as an individual data sample for scale learning. Labeling Function \(G\).SLAD further computes scales. Each dimension derived via neural transformation \(T\) is with equal status, so the representation dimensionality can be directly exploited. However, as original tabular features are varied, we intend to weigh each feature to capture this kind of difference. For ease of learning, we also increase the spacing of each scale value via a magnification factor. Therefore, given the representation dimension \(h\) and the feature subspace \(\mathcal{S}_{i}\) of a sub-vector, \(G\) is defined as \[G(\mathcal{S}_{i},h)=\gamma\frac{\sum_{k\in\mathcal{S}_{i}}\omega_{k}}{h}, \tag{2}\] where \(\omega_{k}\) is the weight of the \(k\)th feature and \(\gamma\) is a magnification factor. The \(\gamma\) factor is a hyper-parameter. In terms of the feature weight, a feature is more informative if it has strong interactions with other features. Thus, Pearson product-moment correlation coefficient is employed. Let \(\mathbf{u}_{k}=\{x_{(i,k)}\}_{i=1}^{|\mathcal{X}|}\) denote the values of the \(k\)th feature. The weight of \(k\)th feature is computed as \(\omega_{k}\!=\!\frac{1}{|\mathcal{X}|}\sum_{k^{\prime}=1}^{|\mathcal{F}|}\! \left|\frac{\mathrm{cov}(\mathbf{u}_{k},\mathbf{u}_{k^{\prime}})}{\mathrm{ dev}(\mathbf{u}_{k})\mathrm{dev}(\mathbf{u}_{k^{\prime}})}\right|\), where \(\mathrm{cov}(\cdot,\cdot)\) and \(\mathrm{dev}(\cdot)\) denote the covariance and the standard deviation. \(\omega\) ranges from 0 to 1. High-dimensional feature space is often contaminated by noisy/irrelevant features, and this calculation function might be biased. This function also induces considerably heavy computational overhead when handling high-dimensional data. Therefore, SLAD omits this step and sets \(\omega\!=\!1\) for all features when the dimensionality of the original feature space is high. For the \(\mathbf{U}\) matrix deriving from a group of sub-vectors with feature subspaces \(\{\mathcal{S}_{1},\cdots,\mathcal{S}_{c}\}\), its label is defined as a list of scales, i.e., \(\mathbf{y}=\{G(\mathcal{S}_{1},h),\cdots,G(\mathcal{S}_{c},h)\}\). ### Scale Learning The above process is repeated \(r\) times for an original data instance, creating labeled data \(\mathcal{O}\!\times\!\mathcal{Y}\!\!=\!\{(\mathbf{U}_{j},\mathbf{y}_{j})\}_{ j=1}^{r}\). SLAD constructs a neural network \(\Phi:\mathbb{R}^{c\times h}\mapsto\mathbb{R}^{c}\) that maps each newly created data sample to a scale list, i.e., \(\mathbf{p}=\Phi(\mathbf{U})\). Scale learning is defined as a distribution alignment task to handle the feature diversity and irrelevant/noisy sampled subspaces. Specifically, the predictions are processed by a softmax layer \(\sigma\), i.e., \(\sigma(\mathbf{p})\!=\!\{\frac{\mathrm{exp}(p_{i})}{\sum_{j}\mathrm{exp}(p_{j} )}\}_{i=1}^{c}\), which generates a probability distribution. \(\mathbf{y}\) is also processed by a softmax function \(\sigma\) to produce target distribution. Listwise prediction of \(c\) transformed vectors in \(\mathbf{U}\) can be optimized uniformly. This way allows the optimization to be supervised by the relative values. The distribution alignment task substantially teaches the network to rank representations transformed from different feature subspaces of the original data instance via the predicted scale values. After obtaining the prediction \(\tilde{\mathbf{p}}=\sigma(\Phi(\mathbf{U}))\) and the target \(\tilde{\mathbf{y}}=\sigma(\mathbf{y})\), loss value \(\ell\) is defined by a distributional divergence measure. We employ Jensen-Shannon divergence in our implementation, i.e., \[\ell(\tilde{\mathbf{p}}\|\tilde{\mathbf{y}})\!=\!\frac{1}{2}\sum_{i=1}^{c} \tilde{p}_{i}\log(\frac{\tilde{p}_{i}}{\frac{1}{2}(\tilde{p}_{i}\!+\!\tilde{y} _{i})})\!+\!\frac{1}{2}\sum_{i=1}^{c}\tilde{y}_{i}\log(\frac{\tilde{y}_{i}}{ \frac{1}{2}(\tilde{p}_{i}\!+\!\tilde{y}_{i})}). \tag{3}\] The overall loss function of SLAD can be further defined as \[L=\mathbb{E}_{\mathbf{x}\sim\mathcal{X}}\mathbb{E}_{(\mathbf{U},\mathbf{y}) \sim\mathcal{O}_{\mathbf{x}}\times\mathcal{Y}_{\mathbf{x}}}\Big{[}\ell\Big{(} \sigma(\Phi(\mathbf{U}))\big{\|}\sigma(\mathbf{y})\Big{)}\Big{]}, \tag{4}\] where \(\mathcal{O}_{\mathbf{x}}\) and \(\mathcal{Y}_{\mathbf{x}}\) denote the supervisory signals created by an original data instance \(\mathbf{x}\). Figure 2: Overall framework of SLAD. For an original data instance \(\mathbf{x}\), SLAD first generates a group of \(c\) sub-vectors \(\{\mathbf{x}_{(\mathcal{S}_{i})}\}_{i=1}^{c}\) via random sampling, where \(\mathbf{x}_{(\mathcal{S}_{i})}\) is the sub-vector of \(\mathbf{x}\) on the sub-space \(\mathcal{S}_{i}\subseteq\{1,\cdots,D\}\). These sub-vectors are then transformed to a unified \(h\)-dimensional representation frame by a _Transformation function_\(T\), yielding a matrix \(\mathbf{U}\!\in\!\mathbb{R}^{c\times h}\). _Labeling function_\(G\) measures scales as data labels \(\mathbf{y}\in\!\mathbb{R}^{c}\) of transformed data in \(\mathbf{U}\). Each \(\mathbf{U}\) and corresponding \(\mathbf{y}\) are treated as one training sample, and the above process is repeated \(r\) times, which produces \(\mathcal{O}\in\mathbb{R}^{r\times c\times h}\) attached with labels \(\mathcal{Y}\in\mathbb{R}^{r\times c}\). A neural network \(\Phi:\mathbb{R}^{r\times c\times h}\mapsto\mathbb{R}^{r\times c}\) is trained via the loss function \(L\) to predict scale-based distributions of transformed data. ### Anomaly Detection Scale learning is not directly related to anomaly detection, and this is essentially a surrogate learning task to drive neural network training. SLAD embeds feature interactions, patterns, and inherent regularities related to the data structure into the learned scale-based ranking mechanism. What SLAD leverages for anomaly detection are these high-level data abstractions. We further present an _inlier-priority property_. It suggests that the update of the neural network is inclined to prioritize inliers due to the imbalanced nature of anomaly detection, i.e., what SLAD derives are normal, common regularities in inliers. Anomalies, by definition, are rare events and behave differently, thereby showing deviation from these learned regularities. This property is first proposed by (Wang et al., 2022) in which a classification-based pretext task is defined. We extend this property from the cross-entropy loss to our distributional divergence loss (theoretical analysis and empirical study in Section 3.4 and 4.2), which further supports the application of scale learning in anomaly detection. Therefore, errors computed through the loss function can indicate abnormal degrees of incoming data. For a testing data instance \(\mathbf{x}\), SLAD also creates transformed data samples \(\mathcal{O}\) and corresponding supervisory labels \(\mathcal{Y}\), and its anomaly score is obtained via \[\tau(\mathbf{x})=\sum_{(\mathbf{U},\mathbf{y})\in\mathcal{O}\times\mathcal{Y }}\ell\big{(}\sigma(\Phi(\mathbf{U}))\big{\|}\sigma(\mathbf{y})\big{)}. \tag{5}\] ### Theoretical Analysis We explore two questions: (**Q1**) How to ensure the effectiveness of the created data sample of scale learning in revealing anomalies? (**Q2**) Does scale learning model normal regularities of inliers, thus exposing anomalies? The training sample in scale learning (the \(\mathbf{U}\) matrix) is generated from randomly sampled feature subspaces. Thus, to solve Q1, we consider how to determine _the shape of each transformed data sample_ (the sampling times \(c\)) such that real anomalies can still stand out in the transformed form. As for Q2, we examine the inlier-priority property by analyzing the gradients that determine the neural network optimization. The Shape of the Data Sample of Scale Learning and its Effectiveness in Revealing Anomalies.Let \(\mathbf{x}_{a}\) be an anomaly, and \(\mathbf{U}\) indicates its transformed matrix. We below derive the relationship between the size of \(\mathbf{U}\) and the probability that \(\mathbf{U}\) is useful to reveal \(\mathbf{x}_{a}\) as an anomaly. We assume the abnormality of \(\mathbf{x}_{a}\) is reflected in a subspace \(\mathcal{G}\) of the whole feature space \(\mathcal{F}\), i.e., \(\mathcal{G}\subseteq\mathcal{F}\), and \(|\mathcal{G}|=\beta|\mathcal{F}|\), \(\beta\in(0,1]\). The elements in \(\mathcal{G}\) are effective features. Generally, a subset of \(\mathcal{G}\) is sufficient to discover the anomaly, and we denote this minimum size as \(\alpha|\mathcal{G}|\), \(\alpha\in(0,1]\). Let \(\mathcal{S}\) be one of the sampled subspaces when creating \(\mathbf{U}\). The dimensionality of \(\mathcal{S}\) is uniformly sampled from \(1\) to \(|\mathcal{F}|\). We first give the following Lemma (proof in Appendix A) to show the probability of \(\mathcal{S}\) being effective. **Lemma 3.1**.: _The probability of \(\mathcal{S}\) containing at least \(\alpha|\mathcal{G}|\) effective features of \(\mathcal{G}\) (i.e., \(\mathcal{S}\) is effective) is:_ \[\begin{split} Pr(\mathcal{S}\text{ is useful})=\frac{1}{| \mathcal{F}|}\sum_{j=\alpha|\mathcal{G}|}^{|\mathcal{F}|}\sum_{k=\alpha| \mathcal{G}|}^{j}\binom{j}{k}\big{(}\frac{|\mathcal{G}|}{|\mathcal{F}|}\big{)} ^{k}\big{(}1-\frac{|\mathcal{G}|}{|\mathcal{F}|}\big{)}^{j\!-\!k}.\end{split} \tag{6}\] Based on the above Lemma, we further present an intriguing fact in the following Theorem (proof in Appendix B), which bounds the above probability. **Theorem 3.2**.: _Given the effective feature space \(\mathcal{G}\) and the minimum size \(\alpha|\mathcal{G}|\) to reveal the anomaly, the lower bound of the probability of the randomly sampled subspace \(\mathcal{S}\) being useful is \(\text{int}\big{(}Pr(\mathcal{S}\text{ is useful})\big{)}=1-\alpha\)._ Let the success probability of an individual sampling be the lower bound, i.e., \(1-\alpha\). We assume \(\mathbf{U}\) is useful to disclose the anomaly if it has at least one element that is transformed from the effective sub-vectors. Consequently, similar to Lemma 3.1, the probability of \(\mathbf{U}\) being useful is: \[\begin{split} Pr(\mathbf{U}\text{ is useful})=\sum_{k=1}^{c} \binom{c}{k}(1-\alpha)^{k}(\alpha)^{c-k}.\end{split} \tag{7}\] \(Pr(\mathbf{U}\text{ is useful})\) and \(c\) are positively related, whereas a large number of useless elements in \(\mathbf{U}\) may also disrupt the identification of \(\mathbf{x}_{a}\). Thus, we use a size that is as small as possible while ensuring \(Pr(\mathbf{U}\text{ is useful})\) is large enough. In our default setting, we use \(c=10\), which makes the probability exceed \(0.999\) when \(\alpha\!=\!0.5\) and \(0.99\) when \(\alpha\!=\!0.6\). Inlier-priority Property in Scale Learning.The inlier-priority property indicates that the network optimization is inclined to prioritize inliers. Since the theoretical analysis of DNNs is still intractable, we consider the same analyzable case that has been used in (Wang et al., 2022), i.e., a feed-forward structure with sigmoid activation. The penultimate layer outputs \(u\) units, and the final softmax layer contains \(c\) nodes. Network weights are initialized by a uniform distribution \([-1,1]\). Considering the \(k\)th element (\(1\!\leq\!k\!\leq\!c\)) of the prediction, the gradients w.r.t. the weights (denoted as \(\mathbf{w}_{k}=\{w_{(s,k)}\}_{s=1}^{u}\)) are directly responsible for this output. Let \(L_{k}\) be the \(k\)th position of the loss function of \(N\) training data objects, and we can derive the expectation of gradient magnitude of updating \(\mathbf{w}_{k}\) as follows: \[\begin{split}\mathbb{E}\Big{[}\big{\|}\nabla_{\mathbf{w}_{k}}L_{k} \big{\|}_{2}^{2}\big{]}=&\sum_{s=1}^{u}\mathbb{E}\Big{[}\big{(} \sum_{i=1}^{N}\nabla_{w_{(s,k)}}\ell_{k}^{(i)}\big{)}^{2}\Big{]}\\ =&\sum_{s=1}^{u}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{E} \Big{[}\nabla_{w_{(s,k)}}\ell_{k}^{(i)}\nabla_{w_{(s,k)}}\ell_{k}^{(j)}\Big{]}. \end{split} \tag{8}\] \(\mathbb{E}\big{[}\|\nabla_{\mathbf{w}_{k}}L_{k}\|_{2}^{2}\big{]}\) essentially quantifies the influence of training data on network optimization. We respectively denote the gradients induced by inliers and anomalies as \(\nabla_{\mathbf{w}_{k}}^{\text{inlier}}L_{k}\) and \(\nabla_{\mathbf{w}_{k}}^{\text{nom}}L_{k}\). Based on Taylor series expansion and gradients computation, we derive the following approximation (detailed derivation in Appendix D ): \[\frac{\mathbb{E}\big{[}\|\nabla_{\mathbf{w}_{k}}^{\text{inlier}}L_{k}\|_{2}^{ 2}\big{]}}{\mathbb{E}\big{[}\|\nabla_{\mathbf{w}_{k}}^{\text{nom}}L_{k}\|_{2} ^{2}\big{]}}\approx\frac{N_{\text{inlier}}^{2}}{N_{\text{anom}}^{2}}. \tag{9}\] Due to the imbalanced nature (i.e., \(N_{\text{inlier}}\!\gg\!N_{\text{anom}}\)), inliers govern the optimization process by inducing a significantly larger gradient magnitude. Therefore, the neural network can learn _normal_ regularities and patterns in inliers, thereby exposing anomalies in the inference stage. ## 4 Experiments **Datasets.** Ten datasets are employed in our experiments. _Thyroid_ and _Arrhythmia_ are two medical datasets out of four popular benchmarks used in existing studies of this research line (Bergman and Hoshen, 2020; Qiu et al., 2021). The other two datasets in this suite are from KDD99, while KDD99 is broadly abandoned as virtually all anomalies can be detected via one-dimensional marginal distributions. Instead, a modern intrusion detection dataset, UNSW-NB15, is exploited. Besides, our experiments also base on several datasets from different domains including _Waveform_ (physics), _Bank_ (marketing), _Thrombin_ (biology), and _PageBlocks_ (web). These datasets are commonly used in anomaly detection literature (Pang et al., 2021; Campos et al., 2016). We employ tabular version of three vision/NLP datasets including _MYec (tab)_, _Amazon (tab)_, and _Yelp (tab)_, which are provided by a recent anomaly detection benchmark study (Han et al., 2022). **Evaluation Protocol.** We follow the mainstream experimental setting of this research line (Bergman and Hoshen, 2020; Qiu et al., 2021; Shenkar and Wolf, 2022) by using 50% of normal samples for training, while the testing set contains the other half of normal samples as well as all the anomalies Following (Campos et al., 2016; Pang et al., 2019; Wang et al., 2022; Han et al., 2022; Xu et al., 2019), two evaluation metrics, the area under the Receiver-Operating-Characteristic curve (AUC-ROC) and the area under the Precision-Recall curve (AUC-PR), are employed. These two metrics can impartially evaluate the detection performance, while not posing any assumption on the anomaly threshold. Unless otherwise stated, the reported metrics are averaged results over five independent runs. **Baseline Methods.** Seven state-of-the-art baselines are utilized. ICL (Shenkar and Wolf, 2022), NeuTraL (Qiu et al., 2021), and GOAD (Bergman and Hoshen, 2020) are contrastive self-supervised methods. RCA (Liu et al., 2021) and GAAL (Liu et al., 2019) are reconstruction-based generative methods. We also utilize DSVDD (Ruff et al., 2018), which is a deep anomaly detection method based on one-class classification. iForest (Liu et al., 2008) is a popular traditional (non-deep) anomaly detection baseline. ### Anomaly Detection Performance **Effectiveness in Real-world Datasets.** Table 1 illustrates the detection performance in terms of AUC-ROC and AUC-PR, of our model SLAD and the competing methods. SLAD outperforms its state-of-the-art competing methods on eight out of ten datasets according to both two evaluation metrics. SLAD averagely obtains 7%-21% AUC-ROC improvement and 15%-61% AUC-PR gain over its seven contenders. Par \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & **DATA** & **SLAD** & **ICL** & **NeuTraL** & **GOAD** & **RCA** & **GAAL** & **DSVDD** & **iForest** \\ \hline \multirow{8}{*}{**Datasets**} & Thyroid & **0.995\(\pm\) 0.001** & 0.974 \(\pm\) 0.015 & 0.985 \(\pm\) 0.002 & 0.952 \(\pm\) 0.005 & 0.934 \(\pm\) 0.005 & 0.768 \(\pm\) 0.096 & 0.930 \(\pm\) 0.032 & 0.988 \(\pm\) 0.002 \\ & Arrhythmia & **0.825\(\pm\) 0.007** & 0.784 \(\pm\) 0.048 & 0.805 \(\pm\) 0.025 & 0.806 \(\pm\) 0.008 & 0.767 \(\pm\) 0.009 & 0.704 \(\pm\) 0.082 & 0.807 \(\pm\) 0.008 & 0.814 \(\pm\) 0.007 \\ & Waveform & **0.812\(\pm\) 0.047** & 0.649 \(\pm\) 0.048 & 0.621 \(\pm\) 0.023 & 0.664 \(\pm\) 0.022 & 0.626 \(\pm\) 0.019 & 0.732 \(\pm\) 0.074 & 0.516 \(\pm\) 0.012 & 0.718 \(\pm\) 0.019 \\ & UNSW-NB15 & **0.941\(\pm\) 0.004** & 0.918 \(\pm\) 0.010 & 0.916 \(\pm\) 0.017 & 0.903 \(\pm\) 0.003 & 0.935 \(\pm\) 0.001 & 0.796 \(\pm\) 0.060 & 0.902 \(\pm\) 0.028 & 0.758 \(\pm\) 0.016 \\ & Bank & **0.730\(\pm\) 0.004** & 0.724 \(\pm\) 0.014 & 0.720 \(\pm\) 0.018 & 0.587 \(\pm\) 0.006 & 0.699 \(\pm\) 0.003 & 0.655 \(\pm\) 0.032 & 0.608 \(\pm\) 0.057 & 0.723 \(\pm\) 0.008 \\ & Thrombin & **0.939\(\pm\) 0.007** & DOM & 0.460 \(\pm\) 0.033 & 0.839 \(\pm\) 0.011 & 0.916 \(\pm\) 0.000 & DOM & 0.520 \(\pm\) 0.046 & 0.898 \(\pm\) 0.008 \\ & PageBlocks & **0.972\(\pm\) 0.004** & 0.909 \(\pm\) 0.025 & 0.961 \(\pm\) 0.002 & 0.670 \(\pm\) 0.006 & 0.864 \(\pm\) 0.002 & 0.765 \(\pm\) 0.032 & 0.904 \(\pm\) 0.009 & 0.927 \(\pm\) 0.005 \\ & Amazon (tab) & **0.605\(\pm\) 0.007** & 0.592 \(\pm\) 0.005 & 0.570 \(\pm\) 0.036 & 0.500 \(\pm\) 0.000 & 0.538 \(\pm\) 0.008 & 0.495 \(\pm\) 0.032 & 0.539 \(\pm\) 0.013 & 0.565 \(\pm\) 0.008 \\ & Yelp (tab) & 0.658\(\pm\) 0.014 & **0.664\(\pm\) 0.009** & 0.627 \(\pm\) 0.027 & 0.501 \(\pm\) 0.000 & 0.585 \(\pm\) 0.008 & 0.584 \(\pm\) 0.039 & 0.593 \(\pm\) 0.032 & 0.609 \(\pm\) 0.007 \\ & MVTec (tab) & **0.812\(\pm\) 0.009** & 0.778 \(\pm\) 0.010 & 0.788 \(\pm\) 0.009 & 0.666 \(\pm\) 0.030 & 0.663 \(\pm\) 0.022 & 0.675 \(\pm\) 0.026 & 0.806 \(\pm\) 0.014 & 0.757 \(\pm\) 0.011 \\ \hline \multirow{8}{*}{**Datasets**} & Thyroid & **0.921\(\pm\) 0.012** & 0.726 \(\pm\) 0.070 & 0.824 \(\pm\) 0.018 & 0.778 \(\pm\) 0.008 & 0.654 \(\pm\) 0.012 & 0.429 \(\pm\) 0.133 & 0.470 \(\pm\) 0.030 & 0.783 \(\pm\) 0.037 \\ & Arrhythmia & 0.604 \(\pm\) 0.006 & 0.572 \(\pm\) 0.038 & 0.589 \(\pm\) 0.022 & 0.631 \(\pm\) 0.005 & 0.562 \(\pm\) 0.009 & 0.505 \(\pm\) 0.071 & 0.646 \(\pm\) 0.008 & **0.633 \(\pm\) 0.021** \\ & Waveform & **0.432\(\pm\) 0.132** & 0.123 \(\pm\) 0.040 & 0.095 \(\pm\) 0.014 & 0.079 \(\pm\) 0.004 & 0.088 \(\pm\) 0.008 & 0.148 \(\pm\) 0.060 & 0.059 \( ticularly, on the popular benchmark _Thyroid_, SLAD raises the state-of-the-art AUC-PR by 10 points from 0.82 to 0.92. We also achieve over 190% AUC-PR leap (from 0.15 to 0.43) on _Waveform_. Contrastive self-supervised counterparts also show more competitive performance than reconstruction- or one-class-based methods. The superiority of SLAD validates the effectiveness of our scale learning task in accurately modeling normal regularities of the inherent data structure. Note that SLAD performs less effectively on _Arrhythmia_ that has limited data instances (less than 500) since the success of neural networks generally relies on sufficient training data. Nevertheless, SLAD still obtains the best AUC-ROC performance on _Arrhythmia_. Capability to Handle Complicated Normal Data.This experiment investigates whether discriminative models can better handle complicated normal data than reconstruction- or one-class-based baselines. Following (Qiu et al., 2021), this question is empirically studied by increasing the variability of inliers. We use F-MNIST, a popular multi-class dataset, by treating each pixel as one feature. A suite of datasets is created by sampling data from one class as anomalies and increasing the number of classes considered to be inliers. For each case, we use nine different combinations of selected normal classes and five random seeds per class combination, producing 405 (\(9\times 9\times 5\)) datasets in total. Figure 3 illustrates the AUC-ROC results and the decline proportion w.r.t. the increasing of class numbers in normal data. As each case corresponds to a group of datasets, we also report the 95% confidence interval in addition to the average performance in the left panel. Each detector has a comparably good performance when only one original class appears in inliers. The increased variety in the normal class makes the task more challenging. SLAD downgrades by about 10% and still achieves over 0.8 AUC-ROC when the normal class contains nine prototypes, while over 30% decline is shown in generative models. Reducing errors in point-wise details is hard to converge when the normal class is complicated. The one-class assumption also does not hold when there are more than two original classes considered to be inliers. By contrast, SLAD, ICL, NeuTraL, and GOAD better handle the increased complexity of the normal class. The superiority over contrastive self-supervised counterparts validates the technical advantages of learning to rank subspace-based transformed data compared to only learning the discrimination between transformations. ### A Closer Look at Scale Learning This experiment further investigates why scale learning can be used for anomaly detection by specifically validating the inlier-priority property and examining whether the learned regularities and patterns are class-dependent. Validating the Inlier-priority Property.To look into the optimization process of scale learning, we illustrate loss values of testing inliers and anomalies per training epoch, which empirically examines the inlier-priority property. Training data are contaminated by 2% anomalies. Loss values \(\ell\) of testing inliers and anomalies are respectively calculated, and Figure 4 shows box plots of loss values. Neural network inclines to model the inlier class, and testing inliers generally yield lower errors. Compared to inliers, anomalies present significantly higher distributional divergence between derived predictions and targets, and thus two classes can be gradually separated. On datasets _Amazon (tab)_ and _Yelp (tab)_, anomalies also yield clearly reduced loss values. It might be because we employ the adapted tabular version of these two datasets, and their tabular representations are not embedded with informative features to distinguish anomalies. Other state-of-the-art competitors also fail to produce good detection results on these two datasets as shown in Table 1. Validating the Class-dependency.The anomaly detection performance on the used ten datasets validates that the learned regularities cannot apply to anomalies. We further delve into this question by employing the multi-class F Figure 4: Loss values of testing inliers/anomalies during training. Figure 3: (_Left_) AUC-ROC with 95% confidence intervals on datasets with different numbers of classes considered to be inliers. (_Right_) The proportion of decline compared to only one class appeared in normal data. MNIST dataset. After training on one class, if data instances from new classes do not comply with the learned regularities and patterns embedded in the scale-ranking mechanism, the learned network cannot make expected predictions for data instances from new classes. Therefore, this experiment tests whether the learned network generalizes to other classes by calculating loss values of data instances in new classes compared to the trained class. Two cases are set by employing different original classes, i.e., class 0 (T-shirts) and class 1 (trousers), for training. As shown in Figure 5, loss values \(\ell\) per class are denoted in box plots. The data instances from the trained class can well fit the learned network, deriving obviously lower errors. By contrast, the loss values of other classes are higher. The lower quartile of new classes is much higher than or comparable to the upper quartile of the trained class. These results further validate that our scale learning is class-dependent, thus further supporting its application in anomaly detection. Please note that, in the left panel of Figure 5, loss values in class 3 are much lower than that in other new classes since T-shirts in class 0 are semantically similar to pullovers in class 3. It is more challenging to distinguish this class. Nonetheless, data instances from this new class still show observable higher divergence than the trained class. ### Ablation Study This experiment answers two questions: (**Q1**) Can several designs in the transformation function \(T\) and the labeling function \(G\) be replaced with alternatives? (**Q2**) Is it necessary to define scale learning as a distribution alignment task? We first validate the choice of our random affine transformation function \(T\) by replacing it with a zero padding function in **w/**\(T_{\text{Zero}}\) and deeper feed-forward network structure in **w/**\(T_{\text{MLP}}\), and the feature weight of the labeling function is removed in **w/o**\(G_{\omega}\). We design another three ablated versions (**w/**\(L_{\text{ce}}\), **w/**\(L_{\text{mse}}\), and **w/**\(L_{\text{dcl}}\)), which define scale learning as classification, regression, and contrastive learning using the cross-entropy loss, the mean-squared error, and the deterministic contrastive loss (Qiu et al., 2021), respectively. SLAD is compared with the above six ablated versions. Table 2 reports the AUC-ROC results. SLAD outperforms **w/**\(T_{\text{Zero}}\) and **w/**\(T_{\text{MLP}}\) on most of the used datasets, which validates the choice of random affine transformation when creating representations. Feature weights bring an approximate 5% improvement on the popular _Thyroid_ benchmark. Besides, our distribution alignment-based scale learning illustrates better results than other canonical proxy tasks, which illustrates its superiority. It is interesting to note that **w/**\(L_{\text{mse}}\) obtains superior average performance than **w/**\(L_{\text{ce}}\), which implies that using clear quantitative labels may better teach the network than qualitative learning in classification. **w/**\(L_{\text{dcl}}\) achieves relatively better performance since it also treats a group of transferred data as one training sample and uses the contrastive loss, i.e., it also optimizes predictions in a relative manner. ## 5 Conclusions This paper introduces SLAD, a deep anomaly detection method for tabular data. The core novelty of our work includes the scale concept in tabular data and a new kind of data-driven supervisory signals based on scales. This supervision essentially learns the ranking of representations transformed from varied feature subspaces. It is different from current point-wise generative models and classification-, comparison-, and mapping-based discriminative models, presenting a new manner of self-supervised learning. By harnessing this supervision, our model learns inherent regularities and patterns related to the data structure, which of \begin{table} \begin{tabular}{l c c c c} \hline \hline **Data** & **SLAD** & **w/**\(L_{\text{mse}}\) & **w/**\(T_{\text{MLP}}\) & **w/**\(L_{\text{dcl}}\)** \\ \hline Thyroid & 0.995 & 0.992 (**0.3\%**) & 0.995 (0.0\%) & 0.950 (**4.7\%**) \\ Arrthymia & 0.825 & 0.814 (**1.4\%**) & 0.821 (**0.5\%**) & - \\ Waveform & 0.812 & 0.759 (**0.7\%**) & 0.767 (**5.9\%**) & 0.800 (**1.5\%**) \\ UNSW-NB15 & 0.937 & 0.933 (**0.4\%**) & 0.907 (**3.3\%**) & - \\ Bank & 0.730 & 0.724 (**0.8\%**) & 0.717 (**1.8\%**) & - \\ Thrombin & 0.941 & & 0.698 (**34.8\%**) & - \\ PageBlocks & 0.972 & 0.971 (**0.1\%**) & 0.967 (**0.5\%**) & 0.966 (**0.6\%**) \\ Amazon (tab) & 0.608 & 0.552 (**10.1\%**) & 0.599 (**1.5\%**) & - \\ Yelp (tab) & 0.661 & 0.612 (**8.0\%**) & 0.654 (1.1\%**) & - \\ MVTec (tab) & 0.812 & 0.775 (**4.8\%**) & 0.787 (**3.2\%**) & - \\ \hline \hline \end{tabular} \begin{tabular}{l c c c} \hline \hline **Data** & **SLAD** & **w/**\(L_{\text{mse}}\) & **w/**\(L_{\text{mse}}\) & **w/**\(L_{\text{idcl}}\)** \\ \hline Thyroid & 0.995 & 0.674 (**47.6\%**) & 0.983 (**1.2\%**) & 0.978 (**1.7\%**) \\ Arrthymia & 0.825 & 0.728 (**13.3\%**) & 0.813 (**1.5\%**) & 0.805 (**25.4\%**) \\ Waveform & 0.812 & 0.527 (**54.1\%**) & 0.473 (**71.7\%**) & 0.770 (**5.5\%**) \\ UNSW-NB15 & 0.937 & 0.914 (**2.5\%**) & 0.900 (**4.1\%**) & 0.922 (**1.6\%**) \\ Bank & 0.730 & 0.517 (**41.2\%**) & 0.732 (0.3\%) & 0.714 (**2.2\%**) \\ Thrombin & 0.941 & 0.704 (**33.7\%**) & 0.493 (**0.9\%**) & 0.626 (**50.3\%**) \\ PageBlocks & 0.972 & 0.742 (**31.0\%**) & 0.979 (-0.7\%) & 0.976 (-0.4\%) \\ Amazon (tab) & 0.608 & 0.536 (**13.4\%**) & 0.602 (**1.0\%**) & 0.610 (-0.3\%) \\ Yelp (tab) & 0.661 & 0.556 (**18.9\%**) & 0.666 (**0.3\%**) & 0.676 (2.2\%) \\ MVTec (tab) & 0.812 & 0.646 (**25.7\%**) & 0.764 (**6.3\%**) & 0.776 (**46.4\%**) \\ \hline \hline \end{tabular} \end{table} Table 2: AUC-ROC performance with improvement rates of SLAD over its ablation variants per dataset. Positive rates are boldfaced. **w/**\(T_{\text{Zero}}\) cannot handle the ultra-high-dimensional data _Thrombin_. As SLAD only calculates feature weights on low-dimensional data, **w/o**\(G_{\omega}\) is performed on three datasets. Figure 5: Loss values of data from the trained class and other new classes that only appear in the testing stage. * indicates the trained class. The red dashed line marks the upper quartile loss values of data instances from the trained class. fers valuable high-level information for identifying anomalies. Theoretically, we analyze how to ensure the effectiveness of the created data sample in revealing anomalies by determining its shape, and we also examine the inlier-priority property to support the application of scale learning in anomaly detection. Extensive experiments manifest that SLAD significantly outperforms various kinds of state-of-the-art anomaly detectors (including generative, contrastive, and one-class methods) and shows clear superiority when handling complicated data with highly varied inliers. ## Acknowledgements This work was supported by the National Key R&D Program of China (No.2022ZD0115302), the National Natural Science Foundation of China (No.62002371, No.61379052), the Science Foundation of Ministry of Education of China (No.2018A02002), the Postgraduate Scientific Research Innovation Project of Hunan Province (CX20210049, CX20210028), the Natural Science Foundation for Distinguished Young Scholars of Hunan Province (No.14JJ1026), and the Foundation of National University of Defense Technology (No. ZK21-17).
2304.01741
An Intuitive Visualisation Method for Arbitrary Qutrit (Three Level) States
Visual methods are of great utility in understanding and interpreting quantum mechanics at all levels of understanding. The Bloch sphere, for example, is an invaluable and widely used tool for visualising quantum dynamics of a two level qubit system. In this work we present an `octant' visualisation method for qutrits bearing similarity to the Bloch sphere, that encompasses all eight degrees of freedom necessary to fully describe a three level state whilst remaining intuitive to interpret. Using this framework, a set of typical three level processes are modelled, described and displayed.
Max Z. Festenstein
2023-04-04T12:17:47Z
http://arxiv.org/abs/2304.01741v2
# An Intuitive Visualisation Method for Arbitrary Qutrit (Three Level) States ###### Abstract Visual methods are of great utility in understanding and interpreting quantum mechanics at all levels of understanding. The Bloch sphere, for example, is an invaluable and widely used tool for visualising quantum dynamics of a two level qubit system. In this work we present an 'octant' visualisation method for qutrits bearing similarity to the Bloch sphere, that encompasses all eight degrees of freedom necessary to fully describe a three level state whilst remaining intuitive to interpret. Using this framework, a set of typical three level processes are modelled, described and displayed. ## I Introduction In equivalence to classical computing, Quantum Information Processing focuses primarily on the dynamics of two level systems. Unlike classical computation, however, the quantum bit (qubit) exists not in a discrete space of \(\{0,1\}\subset\mathbb{Z}\) but continuously as a two level wavefunction \(|\psi\rangle=\alpha\,|0\rangle+\beta\mathrm{e}^{-i\phi_{1}}\,|1\rangle\) in \(\mathbb{C}^{2}\) i.e. existing as a complex, continuously varying object rather than a binary real one. This change in properties allows for significant computational speedup for certain tasks such as the seminal Shor's and Grover's algorithms for prime number factorisation [1] and unstructured searches [2], as well as more recent quantum machine learning [3]. Despite the prevalence of two state dynamics, three states are a key feature in many quantum systems. Examples include, Raman transition [4; 5; 6], STImulated Raman Adiabatic Passage (STIRAP) [7; 8; 9; 10], Electromagnetically Induced Transparency (EIT) [11; 12; 13; 14] and non-linear processes such as frequency doubling [15; 16; 17] and Four Wave Mixing (FWM) [18; 19; 20; 21; 22; 23; 24]. Visualisations of quantum systems are highly useful tools to ground abstract dynamics, both for educational purposes when seeing a problem for the first time, or as a framework to illustrate novel results to others. Techniques, such as the Bloch sphere, to visualise two-state processes are well established [25] but extending beyond this, there is growing interest in the visualisation of quantum circuits. Tools such as ZX-calculus allow for clear diagrammatic illustrations of complex processes [26; 27; 28; 29]. Furthermore, with a multi-disciplinary convergence around the development of quantum technologies, visualisation methods can greatly aid in the development of 'quantum literacy' for those without a background in the field [30]. When considering visualising three levels, there is no widely adopted equivalent for three levels although various schemes have been tried [31; 32]. In addition, in the field of quantum computing there is growing interest in three level systems or quantum trits (qutrits) [33; 34; 35; 36; 37]. As the dynamics in a qutrit have the scope to be much richer than their two level counterparts, a framework in which to visually represent a qutrit could provide a useful aid to help intuit the behaviour of a system. Despite the potential use of such a framework, constructing one is a non-trivial undertaking. In this work we build on the representation presented in [38] to allow for a description of an arbitrary qutrit state on an 'octant' plot for use both as an educational tool and one for researchers to illustrate novel results. We start by describing pure states and consider a phase-sensitive interference process in section II. Then in section III we extend to fully express an arbitrary mixed qutrit state. Using this framework, we then model and display the common three level protocols FWM and EIT, illustrating and describing their dynamics. ## II Pure state description In the 2 level case, an arbitrary density matrix \(\rho\) can be expressed as a weighted linear sum of the Pauli matrices, i.e. \(\rho=\sum_{j}\alpha_{j}\sigma_{j}\) for coefficients \(\alpha_{j}\in\mathbb{R}\). These 3 Pauli matrices are the generator matrices for the Special Unitary group in two dimensions (SU(2)) and, for 3 levels, the corresponding set of generator matrices for the SU(3) group are the eight Gell-Mann (GM) matrices [39]. The density matrix for any single particle qutrit state can then be expressed as \(\rho=\sum_{j}a_{j}\lambda_{j}\) with coefficients \(\mathrm{a}_{j}\in\mathbb{R}\). Considering 2 levels, the SU(2) group has an isomorphic double-cover mapping onto the SO(3) group, letting each of the generator matrices be expressed along 3 orthogonal axes and the qubit state a vector in this space; i.e. a vector on or within the Bloch sphere. Thus for SU(2) it is straightforward to interpret the information being displayed due to the near-direct correspondence of state in \(\mathbb{C}^{2}\) to a position in \(\mathbb{R}^{3}\). Futhermore, this representation offers the ability to distinguish similarity between states; i.e. two similar (dissimilar) states \(|\psi\rangle\) and \(|\phi\rangle\), where \(\langle\phi|\psi\rangle\approx 1(0)\), will lie close (on opposite poles) to each other on the Bloch sphere. In the 3 level case, however, displaying the full parameter space in a format that is straightforward to interpret becomes a non trivial task due to the 8 independent parameters of the object being described. As such, to reduce the complexity of the initial task, pure states are first considered due to the more straightforward dynamics and reduced dimensionality. ### Theory To begin with, we consider a pure state of the form \[|\psi\rangle=\begin{pmatrix}\alpha\\ \beta\mathrm{e}^{-i\phi_{1}}\\ \gamma\mathrm{e}^{-i\phi_{2}}\end{pmatrix} \tag{1}\] where \(\alpha,\beta,\gamma\in\mathbb{R}\) and \(\alpha^{2}+\beta^{2}+\gamma^{2}=1\). In this case, the description illustrated in [38] is sufficient to fully describe the system. The model used there relied on two plots side by side to show phase and state population separately. The first adjustment made here is to condense the information to a single plot by projecting the phase information as rotated lines centred at the end of the state vector, akin to hands on a clock. An example of a state represented this way is shown in figure 1. There are two exceptions to the general requirement of needing 2 hands to represent a pure state. The first is the trivial case of being entirely in a single eigenstate where there would be no hands present at all as there is superposition where phase can accumulate; the second is when a superposition is present between only two states, resulting in only one non-vanishing coherence term and correspondingly one clock hand for a vector along any of 3 quadrants between eigenstates. To properly describe a superposition between \(|1\rangle\) and \(|2\rangle\), the phase difference \(\phi_{12}=\phi_{2}-\phi_{1}\) acts as this single phase term. To provide a more dynamic example of the evolution of a pure state, a test case using the Hamiltonian \[\hat{\mathrm{H}}=\begin{pmatrix}0&\frac{\Omega_{1}(t)}{2}(t)&0\\ \frac{\Omega_{1}(t)}{2}&0&\frac{\Omega_{2}(t)}{2}\\ 0&\frac{\Omega_{2}(t)}{2}&0\end{pmatrix}, \tag{2}\] with time dependent Rabi frequencies \(\Omega_{1}\) and \(\Omega_{2}\) where \(0\leq t<75:\Omega_{1}=\frac{0.02}{2\pi},\Omega_{2}=0\) and \(75\leq t\leq 150:\Omega_{1}=0,\Omega_{2}=\frac{0.02}{2\pi}\), is shown in figure 2. In this case, oscillation of population between states appears as harmonic motion of the green state vector between the two addressed states akin to a pendulum swing, unlike the 2D case where the state precesses around the surface of the Bloch sphere. Initially, when the transition between \(|0\rangle\rightarrow|1\rangle\) is addressed (A-E) this induces Rabi oscillations between these states, which is visually represented by the state vector oscillating in the \(x-y\) plane. When \(t>75\) this Rabi oscillation occurs between \(|1\rangle\rightarrow|2\rangle\) (F-I), which corresponds to oscillation in the \(y-z\) plane. As the population transfer to \(|1\rangle\) is incomplete when this oscillation occurs, the state vector retains an \(x\) axis (\(|0\rangle\)) component and doesn't experience a full transfer into \(|2\rangle\) at the peak of oscillation. Figure 2: **Upper**: Octant plots displaying Rabi oscillations in a 3 level system. The green line and point marker display the eigenstate populations and the dashed orange line shows the path traced out by this vector. No clock hands are included here due to the lack of any oscillation or decay in the resonant case free of decay modes. **Lower**: A more traditional state population vs time evolution of the qutrit as a reference. The times where octant plots are drawn are marked with lettered black dashed lines. ### Two Pulse Sequence In order to show the effect of phase on a sequence, and how the octant plot can be illustrative in understanding it, a simple two pulse sequence was modelled using the Hamiltonian in (2) where the values of \(\Omega_{1}\) and \(\Omega_{2}\) are pulsed according to the sequence in figure 3. This straightforward sequence consists of two pulses: a constant pulse addressing the \(|0\rangle\rightarrow|1\rangle\) transition that acts as a \(\pi\) pulse over the entire duration of the sequence, and a stronger pulse addressing the \(|1\rangle\rightarrow|2\rangle\) transition with a Gaussian profile. Using these pulses, two cases were considered. The first was for the case where no \(\Omega_{2}\) pulse is applied, and the qutrit is allowed to evolve stimulated only by the \(\Omega_{1}\) pulse. The second is when this second \(\Omega_{2}\) pulse is applied. The results for the cases of \(\Omega_{2}=0\) and \(\Omega_{2}\neq 0\) are shown in the left and right hand columns of figure 4 respectively. The sequence in the left column proceeds with a straightforward transfer of population from \(|0\rangle\) to \(|1\rangle\). As soon as the qutrit leaves the eigenstate \(|0\rangle\) and the phasor \(\phi_{1}\) becomes well defined it is immediately set to \(\phi_{1}=\frac{\pi}{2}\). This is occurs because of the factor of \(-i\) that is imprinted on the qutrit by the Schrodinger equation which dictates the time evolution of the system. As the Hamiltonian being applied is entirely real and with no diagonal terms, no further phase evolution occurs and \(\phi_{1}\) remains unchanged for the duration of the sequence. This process is illustrative of the two cases where one would expect a reduced number of hands. For \(t<0.5\), the state remains in a superposition between two states, leaving only one non-vanishing phasor. At \(t=0.5\) all population is in a single state, resulting in no non-vanishing phase terms. For the \(\Omega_{2}\neq 0\) case in the right hand column, the stimulation of the \(|1\rangle\rightarrow|2\rangle\) transition causes all population to transfer out of \(|1\rangle\) and into a superposition of \(|0\rangle\) and \(|2\rangle\). As \(\phi_{1}\) is no longer defined at the point where this occurs, when population reenters \(|1\rangle\) the phase is derived from \(\phi_{2}\) with an additional \(\frac{\pi}{2}\) radians again imprinted by the Schrodinger equation. Thus, the overall phase shift in \(\phi_{1}\) is \(\pi\) radians from the start of the sequence. This phase shift results in the qutrit evolving in antiphase with the qutrit in the \(\Omega_{2}=0\) case. Thus, when it interacts with the driving field, it does so in opposition to the \(\Omega_{2}=0\) qutrit and finishes the sequence with no population in \(|1\rangle\). Figure 3: Pulse sequence for a simple simultaneous excitation scheme. A \(\pi\) pulse is applied with Rabi frequency \(\Omega_{1}\) to address the \(|0\rangle\rightarrow|1\rangle\) transition (blue), and is applied weakly such that the pulse take the entire duration of the sequence to execute. Simultaneously to this, a stronger \(2\pi\) pulse with Rabi frequency \(\Omega_{2}\) addresses the \(|1\rangle\rightarrow|2\rangle\) transition (gold) with a Gaussian intensity profile. The grey dashed lines mark the times with corresponding octant plots in figure 4. Figure 4: Octant plot for the sequence shown in figure 3 for \(\Omega_{2}=0\) (left column) and \(\Omega_{2}\neq 0\) (right column). The green line denoting the state population has been removed in favour of a point marker to reduce visual clutter in the diagram. **Left**: The sequence proceeds with a consistent transfer of population from \(|0\rangle\rightarrow|1\rangle\), ending with total transfer into \(|1\rangle\). **Right**: The sequence proceeds with a transfer of population into \(|2\rangle\), before partially de-exciting down to \(|1\rangle\) and returning to no population in the intermediate \(|1\rangle\) state by the end of the sequence. In visualising this sequence, the octant allows us to see the pivotal factor (the change in phase of \(\phi_{1}\)) that distinguishes the two cases, leaving one with total population transfer into \(|1\rangle\) whilst leaving the other with no population in \(|1\rangle\), despite both seeing the same \(\Omega_{1}\) driving Rabi frequency throughout. ## III Mixed state description In an effectively noiseless, decay-free setting, the description in the previous section is adequate to fully describe any pure 3 level state. This, however, cannot generally be assumed. It is possible to engineer a system to be effectively pure by, for example, utilising either dressed [8] or long-lived atomic states [19], but to create a visual description that fully encapsulates an arbitrary state we must look to extend the description in section II to mixed states. ### Theory To extend to mixed states, we naturally have to adapt the description from pure state vectors to mixed state density matrices. The simplest of these is that of the pure state from (1) \[\rho=|\psi\rangle\langle\psi|=\begin{pmatrix}|\alpha|^{2}&\alpha\beta\mathrm{e }^{i\phi_{1}}&\alpha\gamma\mathrm{e}^{i\phi_{2}}\\ \alpha\beta\mathrm{e}^{-i\phi_{1}}&|\beta|^{2}&\beta\gamma\mathrm{e}^{i(\phi_ {2}-\phi_{1})}\\ \alpha\gamma\mathrm{e}^{-i\phi_{2}}&\beta\gamma\mathrm{e}^{-i(\phi_{2}-\phi_ {1})}&|\gamma|^{2}\end{pmatrix}. \tag{3}\] To see where we might gain additional understanding over the pure state description, we compare this to a more arbitrary density matrix, \[\rho=\begin{pmatrix}|\alpha|^{2}&\mathrm{A}\mathrm{e}^{i\phi_{1}}&\mathrm{B} \mathrm{e}^{i\phi_{2}}\\ \mathrm{A}\mathrm{e}^{-i\phi_{1}}&|\beta|^{2}&\mathrm{C}\mathrm{e}^{i\phi_{1 }}\\ \mathrm{B}\mathrm{e}^{i\phi_{2}}&\mathrm{C}\mathrm{e}^{-i\phi_{12}}&|\gamma|^{ 2}\end{pmatrix}. \tag{4}\] Considering these matrices, we see that the phase term of the pure state coherence \(\rho_{21}\) is only indicative of relative phase between \(|1\rangle\) and \(|2\rangle\) such that \(\phi_{12}=\phi_{2}-\phi_{1}\). In the cases of standard atomic radiative decay mechanisms and optical driving fields, which are the focus of the examples shown here, this relation holds in mixed states. Outside of such systems where extraneous decay mechanisms or 3 driving fields may be possible, this relation is not generally the case. Though such conditions are outside the scope of this work, displaying \(\phi_{12}\) can reveal non-trivial information and warrants adding a third clock hand to the description. Additionally, assuming the presence of decay modes between states, the magnitude of the off-diagonal coherence terms (A, B and C) are also no longer trivial. To ensure that no information about the system is lost, the magnitudes of the coherence terms are encoded into the description as the length of the clock hands. In order to remain clear to the reader regardless of the absolute size of populations in the coherent states, these magnitudes (R\({}_{jk}\)) are normalised as \[\mathrm{R}_{jk}=\frac{|\rho_{jk}|}{\sqrt{|\rho_{jj}||\rho_{kk}|}} \tag{5}\] giving \(0\leq\mathrm{R}_{jk}\leq 1\). In the pure state case \(|\rho_{01}|=\sqrt{\rho_{00}\rho_{11}}=\alpha\beta\) giving \(\mathrm{R}_{01}\) and in a perfect statistical mixture \(\rho_{\mathrm{mix}}=\mathrm{D}_{0}\ket{0}\bra{0}+\mathrm{D}_{1}\ket{1}\bra{1} +\mathrm{D}_{2}\ket{2}\bra{2}\) this coherence term becomes \(|\rho_{01}|=0\ \forall\ \mathrm{D}_{j}\in\mathbb{R}\). The same can also be applied to the other coherence terms \(\rho_{j\neq k}\). This means that when a density matrix represents a complete statistical mixture as in \(\rho_{\mathrm{mix}}\), this loss of coherence is represented not as a change in length of the octant plot's vector, but as a vanishing of the clock hands from the diagram. With the addition of the description of the state coherences, the diagram possesses the necessary 8 degrees of freedom (accounting for the trace condition \(\rho_{00}+\rho_{11}+\rho_{22}=1\) removing a degree of freedom) required to fully express the SU(3) generator matrices that constitute any arbitrary qutrit state. A final adjustment to this description for cases where population decays out of the three levels (such as when coupled to a heat bath) would be to shorten the length of the state vector to account for the reduction of overall population in the system being considered. Examples of this effect will not be shown here and the discussion will be kept to decay between levels \(\ket{0}\), \(\ket{1}\) and \(\ket{2}\). Thus this description allows a reader to easily interpret: the populations in each eigenstate, the relative phases between them and the degree of state purity via the size of the magnitudes \(\mathrm{R}_{jk}\). An example for the mixed state \[\rho=\frac{1}{3}\begin{pmatrix}1&\frac{3}{4}\mathrm{e}^{i\frac{\pi}{2}}&\frac {1}{2}\mathrm{e}^{i\frac{2\pi}{4}}\\ \frac{3}{4}\mathrm{e}^{-i\frac{\pi}{2}}&1&\mathrm{e}^{i\frac{\pi}{4}}\\ \frac{1}{2}\mathrm{e}^{-i\frac{3\pi}{4}}&\mathrm{e}^{-i\frac{\pi}{2}}&1\end{pmatrix}. \tag{6}\] is shown in figure 5. Though this visualisation method shows an individual qutrit state clearly, one shortcoming of the method is the lack of obvious distinction between orthogonal or distant states as measured by trace distance. Indeed, it is possible for the representations of two orthogonal states to share the same state vector on an octant plot; as is the case for \(\ket{\psi}_{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\) and \(\ket{\psi}_{-}=\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})\). Another shortcoming of this visualisation method is that by displaying the relative sizes of the coherent state magnitudes, we lack a measure of the absolute sizes of these off-diagonal terms. Thus the absolute sizes require deduction based on the position of the state vector of the octant and the sizes of the hands \(\mathrm{R}_{jk}\). For the remainder of this work, the three level system under consideration is that of a three level atom, and is shown in figure 6. In this system, state mixing is introduced via radiative decay between neighbouring states, causing a pure state density matrix to evolve into a statistical mixture over time. Mathematically, this is implemented by shifting from a time evolution of a state vector governed by the Schrodinger equation to a Lindbladian master equation of the form \[\frac{\mathrm{d}\rho}{\mathrm{d}t}=-i[\hat{\mathrm{H}},\rho]+\sum_{j=1}^{2}\Gamma _{j,j-1}\left(\mathrm{C}_{j}\rho\mathrm{C}_{j}^{\dagger}-\frac{1}{2}\{\rho, \mathrm{C}_{j}\mathrm{C}_{j}^{\dagger}\}\right) \tag{7}\] with collapse operators \(\mathrm{C}_{j}=\ket{j-1}\bra{j}\) and \(\hbar\equiv 1\). As in figure 6, the mixed state processes are performed with a strong decay mode between \(\ket{1}\rightarrow\ket{0}\) with strength \(\Gamma_{10}\), corresponding to decay modes present in atomic systems between neighbouring states. As this decay mode is the key defining feature of the system, values of other frequency parameters are quoted in terms of \(\Gamma_{10}\), with times in \(\tau_{10}=\frac{1}{\Gamma_{10}}\). the decay term \(\Gamma_{21}\) is modelled to be a small fraction (\(\chi\)) of \(\Gamma_{10}\) to simulate a long lived upper state such as a Rydberg state. In line with atomic physics, the Rabi frequencies \(\Omega_{1}\) and \(\Omega_{2}\) in figure 6 correspond to probe and pump laser radiation fields addressing an atom respectively. By introducing a detuning of a laser field to the simulation, detuning terms (\(\Delta_{1}\) and \(\Delta_{2}\)) can be introduced to the diagonals of the system Hamiltonian \[\hat{\mathrm{H}}=\begin{pmatrix}0&\frac{\Omega_{1}}{2}&0\\ \frac{\Omega_{1}}{2}&-\Delta_{1}&\frac{\Omega_{2}}{2}\\ 0&\frac{\Omega_{2}}{2}&-\Delta_{12}\end{pmatrix} \tag{8}\] where \(\Delta_{12}=\Delta_{1}+\Delta_{2}\). The direct connection to atomic physics is made here such that the following examples of EIT and FWM can be accurately described in better context. ### Electromagnetically Induced Transparency The EIT process is characterised by a sharp increase in the transmission of a weak probe beam through a medium under certain resonance conditions. This is present in the response of the medium as a reduced (or zero) excitation via the transition the probe stimulates. The Hamiltonian for this system is given in (8). In the simplest case where \(\Delta_{1}=\Delta_{2}=0\), the system forms an eigenstate, the so-called 'dark state' only in terms of \(\ket{0}\) and \(\ket{2}\), and retains only a coherence in \(\phi_{2}\). For driving fields \(\Omega_{1}\) and \(\Omega_{2}\) as shown in figure 6 and (8), the dark state \(\ket{\psi}_{\mathrm{D}}\) is given by \[\ket{\psi}_{\mathrm{D}}=\frac{1}{\sqrt{(\frac{\Omega_{2}}{\Omega_{1}})^{2}+1} }\begin{pmatrix}\frac{\Omega_{2}}{\Omega_{1}}\\ 0\\ -1\end{pmatrix}. \tag{9}\] This is in line with the values of \(\rho_{01}\) and \(\rho_{12}\) given in [40] for the case of zero detuning. When \(\Delta_{1,2}\neq 0\) we see that the eigenstates, though still analytically solvable, become much more complex and all contain a non-zero population in \(\ket{1}\). The time evolution for a resonant (left column) and detuned (right column) EIT sequence are shown in figure 7 for \(\Omega_{1}=\Omega_{2}=2\)\(\Gamma_{10}\). In these cases, a strong probe is modelled such that the dark state later described is visually distant from \(\ket{0}\) for the sake of clear visualisation, but this is not typical in physical implementations. Similarly to the case in section II.2, the coherence states are initially populated with a \(\frac{\pi}{2}\) phase shift imprinted by the Schrodinger equation. This results in the \(\rho_{02}\) term immediately acquiring the necessary phase for the dark state rather than tending towards it over time. As the Figure 6: Atomic level scheme showing the variables, states and laser fields being considered in modelling. These are: the probe beam (\(\Omega_{1}\)) addressing the \(\ket{0}\rightarrow\ket{1}\) transition, the coupling beam (\(\Omega_{2}\)) addressing the \(\ket{1}\rightarrow\ket{2}\) transition, a pair of detuning terms (\(\Delta_{1}\) and \(\Delta_{2}\)) and the decay rates \(\Gamma_{10}\) and \(\Gamma_{21}\). The coefficient \(\chi\) is varied between simulations to adjust the extent of the loss of coherence in \(\rho_{12}+\mathrm{c.c.}\). sequence progresses for the resonant case, an initial transient population transfer occurs into \(\left|1\right\rangle\), with all three coherences present. As the sequence continues, this is transferred into \(\left|2\right\rangle\) with any residual population in \(\left|1\right\rangle\) decaying back to \(\left|0\right\rangle\). The fact that this transfer back to \(\left|0\right\rangle\) is a decay rather than coherent population transfer is shown by the loss of the \(\phi_{1}\) clock hand before all population is fully transferred out of \(\left|1\right\rangle\). Alongside this decay of state population out of \(\left|1\right\rangle\) and loss of \(\rho_{10}\), the coherence term \(\rho_{12}\) also decays to zero (shown by a vanishing \(\phi_{2}\) hand) due to the loss of coherent population in \(\left|1\right\rangle\), leaving only the \(\rho_{02}\) coherence. This coherence remains at the value \(\phi_{02}=\pi\), leaving the overall qutrit state in the EIT dark state. For the case of \(\Omega_{1}=\Omega_{2}=2\)\(\Gamma_{10}\), the dark state (9) takes the specific form \[\left|\psi\right\rangle_{\text{D}}=\frac{1}{\sqrt{2}}(\left|0\right\rangle- \left|2\right\rangle). \tag{10}\] This state is shown as the black diamond and brown clock hand in figure 7. For the off resonant case, the phases on the coherences \(\phi_{1}\) and \(\phi_{2}\) both tend towards \(\phi_{1}=\phi_{2}=\frac{\pi}{4}\), leaving the third phase difference term \(\phi_{12}\) tending toward \(0\) throughout the simulation. As this becomes increasingly phase matched with the driving field, the response of the coherent state to the driving field decreases, preventing population from being driven into \(\left|2\right\rangle\). Furthermore, in a similar vein to the resonant case, the coherences decrease over time albeit more gradually due to the comparatively small population being excited from \(\left|0\right\rangle\). Using the octant plot, the dynamics of the EIT process can be clearly visualised. In particular, the coherences can be seen to decay without needing to interpret the small magnitudes of the coherence terms in a density matrix which, if not displayed analytically but instead with floating point variables (as is common for outputs in numerical simulations), may be hard to effectively and easily quantify at a glance. Not only is this magnitude easier to interpret than a numerically displayed complex number in any given off-diagonal, but the phase information is as well; the way that the \(\phi_{2}\) phasor immediately becomes set to \(\phi_{2}=\pi\) once it is well defined, and that the phasor \(\phi_{12}\to 0\) as \(t\rightarrow\infty\) exemplify this in particular. ### Four Wave Mixing The final sequence of Four Wave Mixing (FWM) presented here again takes advantage of the decoherence effect of the decay modes in the system in figure 6 to clearly display the rich dynamics at play. FWM is a process of great experimental relevance, with applications such as heralded single photon generation [18] and coherent read-out of stored photons [19; 20]. FWM can in principle be performed with four states as in a diamond configuration as in [21], but 3 level ladder systems remain of significant interest [22; 23; 24] and are thus worth discussion here. The pulse scheme considered here is to simulate the storage and retrieval of a photon in an atom and is split into three parts each of equal time t = \(\tau_{10}\). In the initial storage (write) time of \(0\leq t\leq\tau_{10}\), both driving fields are present, with the aim being to transfer population into the long-lived \(\left|2\right\rangle\) state. In the second stage (hold) in the range \(\tau_{10}<t<2\tau_{10}\), no fields are present and the atom is free to decay via the decay modes \(\Gamma_{10}\) and \(\Gamma_{21}\). In the final stage (read) at times \(2\tau_{10}<t<3\tau_{10}\), driving by the Rabi frequency \(\Omega_{2}\) is resumed, allowing the state to de-populate \(\left|2\right\rangle\) and decay back to \(\left|0\right\rangle\) via the strong \(\Gamma_{10}\) decay mode. The values of \(\Omega_{1}\) and \(\Omega_{2}\) during this trio of write, hold and read steps are shown in figure 8. Throughout this sequence, the \(\left|2\right\rangle\rightarrow\left|1\right\rangle\) decay mode is set to \(\Gamma_{21}=1\times 10^{-3}\)\(\Gamma_{10}\) such that a slow decay occurs Figure 7: Time evolution of a qutrit state for resonant (left) and off-resonant (right) EIT. Here, \(\Omega_{1}=\Omega_{2}=2\)\(\Gamma_{10}\) and \(\Gamma_{21}=1\times 10^{-5}\)\(\Gamma_{10}\) which was chosen to make the \(\left|2\right\rangle\rightarrow\left|1\right\rangle\) decay negligible on the timescale of the simulation. The solid black and pink lines shows the state vector and \(\phi_{2}\) for the EIT dark state \(\left|\psi\right\rangle_{\text{D}}\) in (10). from \(|2\rangle\rightarrow|1\rangle\) on the timescale of the simulation. The dashed grey lines in 8 during the write stage where both transitions are driven show the times for which octant plots are displayed in figure 9. For the second \(\tau_{10}\), as neither transition is stimulated and the system is allowed to freely evolve, no plots are shown due to the trivial dynamics of simple \(|1\rangle\rightarrow|0\rangle\) decay being the only dynamics of note. In the final \(\tau_{10}\), where the \(\Omega_{2}\) drives the upper \(|1\rangle\rightarrow|2\rangle\) transition, the dashed grey lines show the times for which octant plots are rendered in figure 10. As the coloured elements of the plot correspond to those in figure 7 (excluding the dark state components), no additional legend is included in either of these plots. In the first two octants in figure 9, an arc is swept out by the state vector due to the simultaneous driving of both transitions. An interesting feature present in these two, as well as the third plot, is the loss in coherence in the \(\rho_{20}\) state as indicated by the shrinking blue hand despite the lack of a \(\Gamma_{20}\). This feature can be accounted for by considering the \(\rho_{20}\) term in (7) for no detunings and \(\Omega_{1}=\Omega_{2}=10\Gamma_{10}\) \[\frac{\mathrm{d}\rho_{20}}{\mathrm{d}t}=-i\cdot 10\Gamma_{10}(\rho_{10}-\rho_{ 21}). \tag{11}\] Like the cases considered in both sections II.2 and III.2, we have that \(\Re(\rho_{10})=\Re(\rho_{21})=0\). This results in a time evolution of the form \[\frac{\mathrm{d}\rho_{20}}{\mathrm{d}t}=-10\Gamma_{10}(|\rho_{10}|-|\rho_{21}|) \tag{12}\] thus causing a decay in coherence dependent on the difference between the other coherence terms. Throughout this write sequence the coherence terms with decay modes also oscillate, albeit \(\frac{\pi}{2}\) out of phase with the \(\rho_{20}\) coherence and decaying in size throughout. In the \(\tau_{10}\) time between figures 9 and 10 the only dynamic of note is the decay of population in the \(|1\rangle\) state back to \(|0\rangle\), resulting in the red \(\phi_{1}\) hand vanishing entirely. The octant plots in figure 10 begin at the end (\(t=2\tau_{10}\)) of this hold stage. During the final decay in figure 10, population oscillates between the upper two states while decaying back to ground via state \(|1\rangle\). Note that the \(\phi_{1}\) hand returns when population is coherently recovered from the \(|2\rangle\) between the first and second plots. The sudden changes in phase Figure 8: Rabi frequencies of the probe and coupling beams in a resonant FWM process. The grey dashed lines show the times during the write (read) stage that are shown in figure 9 (10). Figure 10: End of the FWM process described. Here, the qutrit de-excites from \(|2\rangle\) state to the ground state via a strongly decaying intermediate state. The starting state shown here is different from the end state in figure 9 as an additional \(\tau_{10}\) has passed between the two figures, causing the residual population in \(|1\rangle\) to decay. The dotted orange line shows the path traced out by the state vector throughout the read sequence. Figure 9: Initial dynamics of a FWM process with Rabi frequencies as indicated in the ’Write’ stage of figure 8. The system begins with an excitation of the qutrit to a long-lived upper state \(|2\rangle\). The dotted orange line shows the path traced out by the state vector throughout the write sequence. as the state oscillates towards \(|0\rangle\) are again due to the depopulation of coherence terms resulting in only one phasor being well defined and the other phase terms thus acquiring a \(\frac{\pi}{2}\) phase shift from this remaining coherence. The plots in the write sequence elucidate the dynamics of the \(\rho_{20}\) term which may not initially be obvious when considering the level scheme shown in figure 6, and show that even though the other terms decay, the populations oscillate throughout the sequence. Again, these dynamics are not easy to intuit by considering the density matrix alone, and are only made clear by the octant plots for the process. For the latter read stage, the exact path that the qutrit takes back to ground state is shown alongside the phase jumps as the qutrit decays. Though these phase changes are not initially obvious, they have been discussed in the previous examples. ## IV Summary and conclusions In this work, we have presented an intuitive formalism with which to visualise any arbitrary pure or mixed qutrit state for the purposes of education or for one to visually relay quantum dynamics in novel work. Test cases were examined to explore and illustrate their internal mechanics with non-trivial phase changes and coherence decays clearly illustrated by the octant plots. Though it is relatively straightforward to interpret the constituent elements of a 3 level density matrices, the aforementioned limitations of this visualisation method are not well explored here and could be presented in future work. ## V Acknowledgements Firstly, the author would like to thank Stuart Adams enormously for ideas of illustrative sequences and insights into the unfolding dynamics, many of which are shown here. Secondly, the advice from Nicholas Chancellor on key points worth highlighting and general guidance on narrative were enormously helpful in shaping this paper and also warrant a great deal of thanks. They gratefully thank Oliver Hughes, Karen Wadenpfuhl, Lucy Downes and Kevin Weatherill for countless conversations and pieces of useful feedback in preparation of this work. The author is also immensely grateful to Drs. Rodney and Frances Stubbs for the funding that made this work possible.
2305.09034
Blizzard: Adding True Persistence to Main Memory Data Structures
Persistent memory (PMEM) devices present an opportunity to retain the flexibility of main memory data structures and algorithms, but augment them with reliability and persistence. The challenge in doing this is to combine replication (for reliability) and failure atomicity (for persistence) with concurrency (for fully utilizing persistent memory bandwidth). These requirements are at odds due to the sequential nature of replicating a log of updates versus concurrent updates that are necessary for fully leveraging the path from CPU to memory. We present Blizzard -- a fault-tolerant, PMEM-optimized persistent programming runtime. Blizzard addresses the fundamental tradeoff by combining (1) a coupled operations log that permits tight integration of a PMEM-specialized user-level replication stack with a PMEM-based persistence stack, and (2) explicit control over the commutativity among concurrent operations. We demonstrate the generality and potential of Blizzard with three illustrative applications with very different data structure requirements for their persistent state. These use cases demonstrate that with Blizzard, PMEM native data structures can deliver up to 3.6x performance benefit over the alternative purpose-build persistent application runtimes, while being simpler and safer (by providing failure atomicity and replication).
Pradeep Fernando, Daniel Zahka, Ada Gavrilovska, Amitabha Roy, Subramanya R. Dulloor
2023-05-15T21:41:47Z
http://arxiv.org/abs/2305.09034v1
# Blizzard: Adding True Persistence to Main Memory Data Structures ###### Abstract. Persistent memory (PMEM) devices present an opportunity to retain the flexibility of main memory data structures and algorithms, but augment them with reliability and persistence. The challenge in doing this is to combine replication (for reliability) and failure atomicity (for persistence) with concurrency (for fully utilizing persistent memory bandwidth). These requirements are at odds due to the sequential nature of replicating a log of updates versus concurrent updates that are necessary for fully leveraging the path from CPU to memory. We present Blizzard - a fault-tolerant, PMEM-optimized persistent programming runtime. Blizzard addresses the fundamental tradeoff by combining (1) a coupled operations log that permits tight integration of a PMEM-specialized user-level replication stack with a PMEM-based persistence stack, and (2) explicit control over the commutativity among concurrent operations. We demonstrate the generality and potential of Blizzard with three illustrative applications with very different data structure requirements for their persistent state. These use cases demonstrate that with Blizzard, PMEM native data structures can deliver up to 3.6\(\times\) performance benefit over the alternative purpose-build persistent application runtimes, while being simpler and safer (by providing failure atomicity _and_ replication). persistent data structures, log replication, PMEM Data Structures + Footnote †: journal: Journal of LaTeX Templates + Footnote †: journal: Journal of LaTeX Templates + Footnote †: journal: Journal of LaTeX Templates ## 1. Introduction Persistent memory (PMEM) hardware provides a way around the cumbersome block storage abstraction. We can now access persistent memory at byte granularity directly from the CPU, using the same instructions used to access volatile main memory. This in turn means that main memory data structures using pointers, such as hash tables, graphs and priority queues, need no longer be volatile. However, existing enterprise and web applications using such data structures demand **true persistence**, ensuring the availability of their state even when persistence media fails. The advance in memory hardware still leaves the worrying question of fault tolerance and the right software interface to persistent memory that hides the complexity of failure atomicity adequately. Combining the much faster new persistent devices with existing replication protocols (Rajaj et al., 2017) shifts much of the end-to-end bottlenecks into the data transport and copying overheads of the replication stack. It also raises the question how to couple the execution of the persistence and replication engines, while providing for concurrent operations, needed for performance, and maintaining ordering, needed for correctness. This is non-trivial because the sequential nature of replicating a log of updates is at odds with the concurrent updates needed to maximize the performance that can be achieved on the new persistent media. In response to these questions, we present Blizzard - a fault-tolerant, PMEM-optimized persistent programming runtime. Blizzard is a software stack that lets programmers build sophisticated _truly persistent data structures as a service_ with only modest software modification requirements. Truly persistent data structures are exposed to client application through an RPC interface, while Blizzard _ensures the performance and correctness of the data structures' access, durability and fault-tolerance operations_. To achieve this, Blizzard relies (1) on a _coupled operations log_ that permits the tight integration of a PMEM-specialized user-level replication stack with a PMEM-based persistence stack, and (2) on _explicit control over the commutativity_ among concurrent operations. To realize (1), Blizzard leverages the byte-addressability of new persistent memories and the direct memory access capabilities of commodity I/O devices, and provides for end-to-end use of zero-copy and batching. This allows for sufficient concurrency to keep the network interface well utilized. It also allows the persistence and replication engines to operate on fine granularity, editing the need for block storage abstractions, and making it possible to natively support diverse data structures and their access APIs. To ensure ordering and correctness guarantees, Blizzard relies on (2) by exposing APIs to the programmer (or to upper level software stacks) to specify when updates in the replication log are commutative. Blizzard utilizes this information to relax the strict ordering specified in the replication log and to execute update operations in parallel, while maintaining serializability guarantees. We also provide a small (but growing) set of Blizzard implementations of popular data structures, that allow performant implementations of applications to benefit from the simplicity and functionality of main memory data structures, while providing performance and true persistence. We demonstrate the generality and potential of Blizzard with three illustrative applications with very different data structure requirements for their persistent state: * a persistent key value store (such as the PMEM-optimized NoveLSM (NoveLSM, 2017)) that can be intuitively created with under 100 lines of extensions to an existing in-memory unordered map; * a persistent graph database specialized for streaming updates (modeled after GraphOne (Gan et al., 2017)) built with an in-memory, persistent adjacency list, also in under 100 lines of code; and * a modern web-application backend (Lobsters (Becker et al., 2016)) built in 600 lines of code with in-memory persistent data structures that match the application processing requirements (priority queues and hashmaps) vs. a specialized relational DB backend such as Noria (Noria, 2017). These use cases demonstrate that with Blizzard, PMEM native data structures can deliver up to 3.6\(\times\) performance benefit over the alternative purpose-build persistent application runtimes, while being simpler and safer (by providing failure atomicity _and_ replication). In summary, this paper makes the following contributions: * We design a PMEM-specialized replication stack that addresses the challenges of integrating replication, persistence and concurrency when combining PMEM-based persistence with high-speed networking. * We provide the implementation of the Blizzard system for commodity Ethernet networks with DPDK high-speed packet delivery, and Intel Optane DC PMEM, the first-generation of directly-attached persistent memories1. Footnote 1: We will opensource Blizzard prior to publishing this paper. * The evaluation of Blizzard with different applications demonstrates its generality and flexibility for creating, with modest effort, different persistent and fault-tolerant data structures, that can be further combined as needed by sophisticated applications, while delivering to applications performance benefits and stronger availability guarantees. ## 2. Motivation PMEM allows persistence at byte-granularity and provides low latency, high throughput data reads/writes. The systems community has built on new PMEM device capabilities and has developed software primitives for representing durable application state using intuitive persistent memory data-structures. One examples is the Intel-provided Persistent Memory Development Kid (PMDK) (Becker et al., 2016). PMDK is a popular PMEM programming library that supports PMEM programming primitives such as persistent memory allocators and durable transactions. Other recent programming systems, such as Pronto (Pronto, 2017) and Persimmon (Persimmon, 2018), provide support for creating persistent versions from volatile data structures, including via compiler support and dynamic instrumentation to capture the log of updates and automate the insertion of flush and fence operations needed for crash-consistent persistent state. However, _supporting reliability_ and _persistence together for in-memory data-structures remains an open challenge_. Reliability and fault-tolerance of persistent data is often achieved using data replication. As shown in the left graph in Figure 1, with existing persistent technologies such as SSDs, the dominant component of the end-to-end costs associated with replicated persistent state remains in the storage layer. For SSD systems, the network-related replication overheads are easily made negligible with direct use of current commodity high-speed interconnects, such as with support for DPDK (the Data Plane Development Kit which provides libraries for fast packet processing) (Becker et al., 2016) or RDMA (Becker et al., 2016; NoveLSM, 2017). This has allowed _much of the past research to focus their efforts on improving the performance of existing application storage backends, without considering opportunities to co-optimize it with network replication_. Simply achieving performance by porting or re-designing storage abstractions for PMEM (NoveLSM, 2017; NoveLSM, 2017) and leveraging existing replication techniques, but without supporting persistent data-structures as a first class citizen. As also showb in recent research on programming persistent or disaggregated memory systems (Becker et al., 2016; Pranran et al., 2017; Pranran et al., 2017; Pranran et al., 2017), _real application use diverse and often multiple data-structures, requiring performance together with native support for properties such as persistence and reliability for all such application state_. For the new persistent memory technologies, use of high-speed communications stacks in the replication solution is critical for end-to-end performance, since the network overheads of traditional kernel-based TCP stacks dominate. However, simply integrating fast networking, specifically userspace networking stacks such as DPDK or RDMA, with standard PMEM data-structures programming libraries (Pranranran et al., 2017; Pranran et al., 2017). Figure 1. Relative costs of the network and storage stacks in a replication setup with different technologies (left). Impact of concurrent accesses on PMEM-achieved throughput for Intel Optane DC (right). 51], will fall short on performance [52]. _Both the network replication_ and _the persistence media storage components remain significant contributors to the end-to-end application performance and seeking reductions in both of these components remains equally important._ This is because data and metadata has to move across the network and durable replication layers, exposing significant overheads due to both the network and the costly memory management overheads in the form of allocate, copy and garbage-collection, as shown in SS8. Use of well-known mechanisms such as zero-copy and batching can reduce and amortize these overheads. Moreover, these techniques promote the concurrent use of the network transport, opening possibilities to maximize the concurrent access to the network attached persistent devices. As shown in the right graph in Figure 1, this is particularly important for PMEM, whose performance is generally maximized with increased concurrency. However, _the concurrency required for high-throughput in-memory data structure design and low-overhead persistence, is at odds with the serial execution needed to ensure ordering and correctness guarantees for how data structure updates propagate through the replication stack._ The solution developed in this paper addresses these gaps by building on mechanisms that integrate best-practice principles for performant persistence _and_ replication, namely zero-copy, batching and concurrency, for platforms combining high-speed networking and byte-addressable persistent memory technologies, to deliver true persistence to native in-memory data structures. We demonstrate this for a range of application with different data structure requirements. ## 3. Overview We propose Blizzard - a fault-tolerant, persistent memory programming runtime with native support for in-memory durable data-structures. Blizzard achieves low-latency, high-throughput replication of in-memory state through use of a PMEM-specialized user-level replication stack. Key to achieving performance is one of a coupled operations log that allows for tight integration of the persistence and replications engines with end-to-end zero-copy, realized by leveraging the byte-addressability of the PMEM hardware and the direct I/O capabilities of the network fabric (SS5). Performance in the replication path is further enabled by maximizing concurrency, while continuing to maintain application-specific ordering and correctness guarantees (SS6). The outcome is that Blizzard provides for reliable and persistent in-memory data-structures, capable of replacing existing enterprise application backends while providing improved performance, reliability and additional functionality (SS4). Blizzard has two key components, libds and liblogrep, shown in Figure 2. libds implements a growing number of persistent memory data structures modeled after familiar C++ STL library counterparts, and liblogrep handles their persistent state replication among node replicas. **libds** is the application developer facing component of Blizzard. libds supports a rich set of commonly used reliable, persistent memory data-structures (e.g. maps, queues, etc). We implement libds data-structures using PMDK [2] - a popular persistent memory programming library. The operations to create and manipulate each of the data-structures are exposed as a network call and made available to client-side applications via associated data-structure libds proxy. libds allows these data-structures to be combined to form complex backend data models required by real enterprise applications. At the heart of libds functionality, is the core Blizzard programming APIs supporting _RPC-style persistent memory programming_, similar to [35; 51], with additional _flexibility to describe operations commutativity and (re-)ordering requirements_. **liblogrep** is a PMEM-aware, fast log-replication runtime that extends the RAFT log replication protocol. liblogrep logic replication is durable and replicates libds's operations across replica nodes. It carefully integrates userspace networking and the byte-addressability of PMEMs to realize end-to-end data zero-copy and batching, and to achieve low latency/high-throughput operation replication. libds and liblogrep are combined via a common operations log in the Blizzard execution layer in a manner that retains the zero-copy benefits while ensuring correct end-to-end execution of incoming data-structure operations, both within a single node (crash-consistency semantics) and across node replicas (distributed data consistency). To increase system throughput, Blizzard maximizes parallelism while preserving ordering constraints and correctness. It does this by _integrating in the execution layer operations scheduling that allows concurrent execution of operations on replica nodes without compromising distributed consistency and violating ordering constraints_ specified by the libds APIs. Figure 2. Blizzard consists of three main components. libds, liblogrep and Execution protocols that couple former two. ## 4. Creating Data Structures with Blizzard The core operational model in Blizzard is a client-server one: application developers write services that receive remote procedure calls from clients, lookup and manipulate persistent state, and then return a response. The listing below shows the Blizzard API available to programmers; we elide some setup details for reasons of space and focus on the core APIs. In order to provide maximum flexibility, when invoking operations on Blizzard data structures from a client, the developer specifies the RPC call as a binary blob. Blizzard takes care of service discovery (primarily, locating the RAFT leader) and sends the server the RPC call. It returns the response as another binary blob and a Status object detailing whether the RPC call could be made successfully. Application level errors, if any, are encoded in the return blob by the application. We distinguish between read and update RPCs as separate calls. This is because reads are not replicated but updates need to be. ``` 1//ClientsideAPI 2StatusMakeUpdateRPC(conststring&request,string*response); 3StatusMakeReadRPC(conststring&request,string*response); 4 5//ServesideAPI 6classLock(){ 7public: 8virtualvoidReleaseLock()=0;//Overridetoreleasyourlock 9} 10voidHandleRPC(conststring&request,vector<Lock>*delayed_locks,string*response); 11boolCommutes(conststring&requestA,conststring&requestB); 12 13 14 15 16 17 18 19 20voidHandleRPC(conststring&request,vector<Lock>*delayed_locks,string*response){ 21} 22 23 24 25//Lock in the Blizzard API and to _not_ release them when executing the RPC callback. Instead, these must be returned in the vector object provided in the API. Blizzard releases all locks _after_ updates to persistent memory have been committed (see discussion in SS6). Finally, we expect the programmer to implement a callback to determine the commutativity of various RPC calls. A programmer declares two RPC calls as commutative if either order of execution leads to the same result for both calls, and therefore, it is safe if Blizzard executes them in different order on different replicas. A simple example of this is incrementing a counter. Commutative RPC calls are executed concurrently, with better performance then when following the sequential execution specified by the replication log. To make the server side API more concrete, the following listing provides an abbreviated benchmark from Lobsters (Losos, 2013) that we use in this paper. It maintains a hash table, mapping news story identifiers to vote counts. The data structure is provided by Blizzard and for this example we assume it is _not_ thread-safe, requiring the programmer to implement their own locking. The RPC call handler adds a vote to a story. ``` 1blizzard::map<string,int>votes; 2pthread_mutex_lockbig_lock;//involatilememory! 3 4classMyLockWrapper:blizzard::Lock{ 5public: 6MyLockWrapper(pthread_mutex_lock*lock) 7:saved_lock_(lock){ 8virtualvoidReleaseLock(){ 9pthread_mutex_unlock(saved_lock_); 10} 11private: 12pthread_mutex_lock*saved_lock_; 13} 14voidHandleRPC(conststring&request,vector<Lock>*delayed_locks,string*response){ 15//Lockeverythingandstashawaythelockforblizzard 16pthread_mutex_lock(&big_lock); 17delayed_locks->push_back(MyLockWrapper(&big_lock)); 18votes[StoryId(request)]++;//Blizzardauto-undogosthehashbucket! 19} 20boolCommutes(conststring&requestA,conststring&requestB){ 21//Allowvoteincrementsforastorytocommute. 22returntrue; 23} ``` Listing 2Bizzard sample code for top-K voted entries. The most interesting aspect of this example is the decision (by the programmer) to declare all vote increments to commute. Viewed from the perspective of a single story, increments and reads of the vote count are serialized. From the perspective of multiple stories however increments are not serialized as the updates to different stories are executed in different orders on different replicas. This reflects the fact that _Blizzard provides strong consistency in terms of state machine replication in the underlying layers, but allows programmers to relax that ordering for better concurrency_. As we show in the following sections, we address the performance of RAFT replication while maintaining the serial order of log updates, thereby providing a strongly consistent and performant substrate for programmers to build their persistent memory applications as they see fit. ## 5. Replication Replication is necessary in Blizzard to ensure in-memory data structures are truly available even when the underlying persistent memory fails or the machine goes down. However, log replication is a synchronous operation. The latency components of accessing durable storage and network hops to replicas add to the latency of operation completion. Although other research has focused on network overheads for replication protocols (Han et al., 2011; Krizard et al., 2012; Krizard et al., 2013; Krizard et al., 2013), _they have examined replication without persisting any state_. This is not an accident - Flash storage comes at a significant latency and throughput cost compared to network performance. In contrast, Blizzard is designed ground up for persistent memory that is at least an order of magnitude faster than Flash. Even recent works which do use log replication in conjunction with persistent data structures, do so in a different context: In Persimmon (Pesimmon, 2014) an operations log of updates is replicated from a primary in-DRAM node to a local secondary in-PMEM shadow copy. In AsymNVM (Pesimmon, 2014) node-local logs of updates to DRAM data structures are flushed via RDMA operations to a remote PMEM location. The actual replication to mirror nodes is outside of the critical path of the persistent memory update, as it can involve arbitrary persistent devices, including SSDs, using Zookeeper. Importantly, this decision exposes applications to potential data loss if the original persistent device fails. These observations motivate us to focus on the network component of a fully functional replication stack that _includes_ persistence. A key building block for Blizzard is userspace network access. Blizzard uses RAFT (Krizard et al., 2013) to replicate a durable log of updates to persistent memory data structures. We use the Data Plane Development Kit (DPDK (DPDK, 2014)) for fast access to the network from userspace. For our specific setup, this leads to a 3\(\times\) reduction in latency for a single hop on the network from 28 us down to 8 us. We then exploit the direct addressability of persistent memory to build a high performance PMEM-based replication stack using two simple principles: _zero copy_ and _batching_. Copying log entries in various parts of the replication state machine - from receiving client requests to sending out copies to replicas - is expensive. This is even more so since persistent memory is still slower than volatile RAM, and its performance, particularly for write operations, further degrades with increase in thread counts. The fact that persistent memory allows one data structure to simply point to another (rather than indirecting through a block address on Flash) makes it possible to avoid copies by reusing the operations log entries _in place_ across all portions of the Blizzard stack. Figure 3 shows how in the RAFT-based implementation of Blizzard, log entries are organized relative to DPDK's memory buffers. A DPDK memory buffer holding an incoming request (an Ethernet frame) is placed in an aligned block of memory together with external metadata pointing to the start and finish of the block, all in persistent memory. We keep the client request in its DPDK memory buffer for its lifetime - spanning replication and execution. To start with, the leader prepends a RAFT control block (with information such as term and index) to the buffer adjusting the external metadata to compensate. The leader is now ready to replicate the request. We exploit the fact that DPDK allows multiple memory buffers to be chained together. To do so it simply creates an Ethernet header for _each replica_ and chains the same log entry packet to each of them. It hands off all the headers to DPDK. The NIC then does the heavy lifting of assembling the Ethernet frames and sending them out. We underline that this is only possible because the logs are not in block storage and persistent memory is accessible from all connected agents, including I/O devices, in the system. In contrast, directly accessing block storage from the NIC involves serious complications (Krizard et al., 2013) and this design illustrates how persistent memory can simplify the design of distributed system primitives that need persistence. Although this design lifts most of the load off the CPU, the consensus protocol still represents an overhead for each log entry. More pertinently, this overhead is sequential, since each log entry needs to be processed before moving on to the next one. A simple way to further improve performance under load is therefore to batch process log entries. DPDK already provides an efficient vector interface to receive multiple packets waiting in the NIC queue. We chain these packet Figure 3. RAFT log entry in Blizzard. buffers together in userspace and treat them as a single RAFT log entry. Note that this form of chaining forms a batch from concurrent operations issued from multiple threads, and need not impact individual operation latencies. This effectively amortizes the CPU cost of running the RAFT protocol state machine over multiple log entries. Blizzard's RPC layer, together with the totally ordered semantics of RAFT replication, means that we provide serializability in terms of the distributed consistency model, if we execute RPC operations in the RAFT log order and read operations as soon as they are received at the leader. Although, we do not replicate reads, the RPC layer always directs reads to the leader replica and thus, we provide read your own writes consistency in addition to serializability. We _do not_ provide linearizability as that would require us to replicate reads to ensure that a leader does not become partitioned without realizing it and responds to reads without taking into account concurrent writes in the majority quorum. We believe serializability with read your own writes consistency is an adequately strong distributed consensus model for programmers to be largely oblivious to replication under the hood. ## 6. Execution Layer The goal of the execution layer is to concurrently execute _committed_ operations in the RAFT's execution log. Blizzard depends on the application programmer to specify commutativity among operations (Section 4). The execution layer couples to the replication layer via a set of queues to receive operations and uses flags in the RAFT log entries to track and update the state of each operation - replicating, replicated (or replication failed), executing and complete. The most complex part of the execution layer is the scheduler that aims to schedule ready operations as soon as possible, while respecting commutativity. The actual execution leverages PMDK's persistent memory transaction library to enforce failure atomicity. We discuss each of these components below. Finally, we also discuss the implications of declaring operations as commutative and how the programmer can control departure from serial execution order for better performance. ### Coupling Figure 4 illustrates the design and interfaces between the replication and execution layers in Blizzard. Every read and write operation received by the replication layer, is added to a queue (Q) of operations, implemented as a persistent circular log. Each entry in the circular log is a pointer to the actual DPDK memory buffer holding the RAFT log entry. Each RAFT log entry includes a set of flags read from and written to by both the execution and replication layers, so as to communicate the state of the operation. These flags are persistent and survive restarts, forming the basis for recovery. All currently executing or ready to execute set (E) of operations (a subset of Q) are maintained in volatile memory. A scheduler picks operations from Q and adds them to E when ready to execute. Executor threads pick operations from E to execute and update the flags in the RAFT log when execution is complete. As part of this post completion operation, the execution thread marks the operation for garbage collection by setting the gc_flag. The replication sub-system uses this information to decide when to garbage collect the RAFT log and move the tail of the persistent circular log forward. ### Scheduling The scheduler runs as part of the continuous event loop in Blizzard, executing the scheduling algorithm shown in Algorithm 1. It considers for execution operations at the head of Q, either immediately - for reads, or only once they are successfully replicated - for updates. The scheduler checks each operation against all currently executing operations in the set E. If it commutes with all operations in E, the operation is added to E for execution. Operations at the head of Q that have failed replication (perhaps due to a RAFT leadership change after the operation was received) are removed from consideration by the scheduler. ``` input:1. queue Q of updates and reads. 2. Set E of operations that are ready-to-execute/executing 3repeat 4ifQ.head().state == FAILED_REPLICATIONthen 5Q.dequeue() 6elseifQ.head().state!= REPLICATION andQ.head() commutes with all ops in Ethen 7op = Q.dequeue() 8E.insert(op) 9untilserver-shutdown; ``` **Algorithm 1**Blizzard operations scheduler algorithm. ### Execution The execution of operations in Blizzard is done by dedicated executor threads. Each thread repeatedly selects an operation from the set E, executes it, and after the execution is completed and any of its effects persisted to memory, it is removed from E. The execution follows the steps detailed in Algorithm 2. ``` input:1. queue Q of updates and reads. 2. Set E of operations that are ready-to-execute/executing 3repeat 4ifQ.head().state == FAILED_REPLICATIONthen 5Q.dequeue() 6elseifQ.head().state!= REPLICATION andQ.head() commutes with all ops in Ethen 7op = Q.dequeue() 8E.insert(op) 9untilserver-shutdown; ``` **Algorithm 2**Blizzard operations scheduler algorithm. We note that although the scheduler ensures only commutative operations execute simultaneously, that does not mean those operations will not conflict. For example, increments to a counter are commutative but one still needs to synchronize on access to the counter to avoid two updates reading the same initial value of the counter. In addition, total order on simultaneously executing concurrent updates is needed to make recovery possible. Therefore we need to ensure commutative operations serialize with each other as a whole when accessing the same memory location. We enforce this via delayed release of locks - a fairly standard technique borrowed from databases. The execution algorithm ensures that _any locks acquired during execution (by user code) are released only after execution and persistence are complete._ We draw particular attention to recovery. The persistent circular log and state flags in the RAFT log entry form the foundation for recovery. We process all undo logs and then start the scheduler. The delayed lock release ensures that any operation that had completed does not see any change to its input data. If an operation begins execution but fails before executing, its persistent state flag remains set at REPLICTED when the system restarts. Any persistent changes made by the previous execution are automatically undone by PMDK. It then proceeds as usual through the current attempt till completion. On the other hand, if an operation finishes execution and manages to move its persistent state flag to COMPLETED, we do not execute it again by checking for this condition, thereby ensuring that operations are executed exactly once with respect to changes to persistent memory. ### Commutativity We now consider how commutativity impacts the distributed consistency model. The precise definition of commutativity that we provide to Blizzard programmers is: _Two operations commute if the result returned by each of them, when executing one after the other, does not depend on the order of execution._ This relation is transitive. When no two operations are defined as commutative, execution occurs in RAFT log order and therefore we provide serializability with read-after-write consistency with respect to the data structure operations. If two operations are declared to commute, Blizzard might execute them in different orders on different replicas but, by definition, this cannot change their results and therefore does not cause a non-serial schedule to become visible. To illustrate this, consider the case of two locations, A and B both initialized to 0 and both set to 1 by concurrent writes. The writes are commutative and can be reordered. If two different clients try to read A and B, one can see the state A=1 and B==0, while the other sees the state A ==0 and B==1. This is consistent with the following serialized schedule: Read1(A)=0; A:=1; Read2(A)=1; Read2(B)=0; B:=1; Read1(B)=1. We reemphasize here that we provide serializability and not linearizability. In addition, commutativity constraints prevent reads from bypassing writes to the same location. This means on a leader failover, committed writes in the log must be executed at the new leader before reads to the same location are allowed to execute at it. Commutativity can be a powerful tool to extract parallelism from the sequential order specified in the log. As an example of allowing some commutativity, consider a single container in persistent memory with a dictionary (implemented as a persistent hash table) interface. Most such APIs (e.g., in C++ STL containers) disallow operations to multiple keys. Therefore a natural setting for commutativity is to allow operations (reads or writes) to different keys to commute, since reads by clients cannot reveal out of order application of operations to different keys at different replicas. In such a situation, the programmer can set Commutes to return true if and only if the operations are made to different keys. The result is serializability with read your own writes consistency when the data structure API is _restricted_ to a single key. Figure 4. Coupling replication to execution in Blizzard. As an example of a more complex commutativity specification consider an example of a graph stored in persistent memory, as an adjacency list: a map of vertices to a list of neighboring vertices. Adding or deleting edges can be tricky due to the need to update both source and destination vertices. We need to ensure that reads see a consistent state of the graph: reading attributes of an edge specified as \((u,v)\) should succeed and return the same result regardless of whether we lookup vertex \(u\) or \(v\) to retrieve edge information. An intuitive setup here is to allow edge changes to commute if they do not touch the same vertex: \(Commutes((u,v),(x,y))\) should return true if and only if \(\{u,v\}\cap\{x,y\}=\emptyset\). We show in the evaluation that judicious settings of commutativity allows more concurrency to be extracted from the single serial order in the RAFT log and therefore better performance. We note that the assertion that APIs that allow more calls to commute lead to more concurrency is in fact a _general_ notion (Kal run the same benchmark with a regular network transport by replacing our userspace networking stack with TCP/IP, and also plot the theoretical peak performance if we were to batch-write 32 log entries at a time to Flash storage. Figure 5 demonstrates that the presence of PMEM removes storage bottlenecks and allows replication to be unhindered by the cost of persistence. This necessitates an optimized implementation that reduces network overheads, so as to prevent the network stack from becoming a bottleneck. Blizzard is able to achieve this and provides a raw replication rate of \(\sim\)365\(K\) log entries a second (3 ways with full persistence). In Figure 6, we show the gains of the mechanisms integrated in Blizzard, using the same replication microbenchmark in a system 1) without batching optimization (no-batching), where we process one operation at a time during log-appends and replication and 2) without zero-copy optimization (copy), where we make copies of incoming RPC payload, both during RAFT log-appends and data preparation during network multicast, as our baselines. Then we introduce zero-copy (z-copy) and batching optimizations into the system separately as well as both together. Zero-copy reduces the Blizzard latency at the clients by \(\sim\)36% at peak system throughput of 216K ops/sec. Zero-copying eliminates extra memory management - allocation, data-copy and free, during Blizzard's log persist and replication steps. The optimization contributes to modest performance gains under current DPDK based network stack, but is likely to have even more significant impact on faster network stacks such as RDMA, with an order of magnitude lower network latencies (\(\sim\)1us latency for EDR InfiniBand RDMA vs. 10+us on our 10Gbps Ethernet testbed). The effects of zero-copy become more significant at higher loads, because they eliminate contention for PMEM accesses: PMEM throughput, particularly write throughput, is known to degrade drastically with concurrent operations (Bizzard, 2018). Use of batching in Blizzard leads to 40% higher throughput, compared to the baseline. We attribute the performance gains in batching to the increased memory parallelism during RAFT log appends, since a batched RAFT log entry append (for 32 ops) requires only a single store fence instruction after subsequent CPU cache line flushes, whereas the no-batching version requires 32 of them, and to the reduced network multicast cost, as the RAFT metadata appends and control path operations are amortized over the number of batched operations. Zero-copy and batching combined together enable Blizzard to handle peak throughput of 328Kops and per replicated operation latency \(\sim\)36\(\,\upmu\)s. Zero-copy and batching improve replication throughput, but also shift the system bottleneck from the replication layer, back into the persistent data-structures. The commute scheduler in Blizzard enables concurrent operation execution on data-structures without compromising correctness. We use the microbenchmark to measure the overheads of the scheduler invocation (+scheduler), without actually changing the behavior of the executed operations. The result is close to one with no scheduler, demonstrating that the dominant cost in checking commutativity is on the side of the callback provided by the user, rather than the scheduler implementation. Finally, we show how Blizzard handles replica failures to provide applications with crucial availability guarantees beyond raw performance improvements. In Figure 7 we show the failover timeline of a Blizzard cluster with 3 nodes under echo/no-op workload, where we kill the leader replica midway. After a timeout-based failed leader detection, the Blizzard client probes the other replicas for a new leader. For a failure detection timeout of 12ms, Blizzard fails over to a new leader within 24 ms in the worst case, for a 3 node replica cluster. ### Key Value store Persistent key-value (KV) stores such as RocksDB (Bizzard, 2018), as an example of general-purpose application often used as a persistence layer, use complex data structures such as LSM trees (K which is the in-memory data-structure realization for the KV abstraction. We ask the following question: Is it feasible implement a truly persistent, replicated hashmap using the Blizzard programming model? We implement a hashmap-based key-value store that supports point queries. With 96 lines of C++ code, we port the concurrent hashmap implementation from PMDK as a Blizzard data-structure by extending it with Blizzard's crash-consistent update protocol and a commute handler that honors a serialized read-your-own writes consistency model, as outlined in SS6.4. The implementation supports arbitrary strings of characters as both keys and values. We select as a baseline NoveLSM (NoveLSM, 2017) - a KV store optimized for PMEM. NoveLSM introduces to traditional LSM-tree based KV store designs PMEM-optimized mutable mematables, in-place updates, memory optimized storage transactions, and parallel reads. We place the mutable memtables and lower-level SSTs of NoveLSM on PMEM and run it on Blizzard with replication and commutative scheduling turned off, thus Blizzard only serves as an RPC transport for NoveLSM. We use 8 byte key/value strings for implementations, and a Facebook-like (Krishnan et al., 2017; NoveLSM, 2017) workload with 50% writes and a uniform distribution of keys. We use a write-intensive workload in the evaluation since this is when it is critical to provide for scalable replication and persistence. The results in Figure 8 show that the replicated and crash-consistent concurrent hashmap-based key-value store with Blizzard (blz hashmap serial) outperforms NoveLSM's (single replica) peak throughput by as much as 7\(\times\), merely due to not being constrained to use block interfaces and LSM trees. Importantly, the throughput of Blizzard further improves by a factor of two, up to a throughput of 270Kops, once we mark operations to different keys as commutable (blz hashmap commute) thereby removing the constraint of serial execution imposed by the RAFT log. We did not see a significant increase in operation latency of our hashmap, where it remained <44 us across the throughput range. This result shows that Blizzard allows programmers to exploit this without being burdened by implementing their own crash consistency or replication. It also underlines the importance to leverage the byte-addressability of PMEM and the network stack in the design of the replication layer, and of the support to extract increased parallelism during replication via the use of the commute API. ### Graphs Next, we evaluate Blizzard in the context of graph databases - an important class of applications that presents unique durable data management challenges. The most natural representation of graphs for search and traversal uses pointers - a source of great difficulty with block interfaces, that has spawned a whole genre of research into batch processing of graphs from secondary storage (Shi et al., 2017; Soh et al., 2018; Soh et al., 2018). PMEM, natively allows pointers and persistence to co-exist and presents a unique opportunity. The key challenge is to do this without letting persistence or replication add programming complexity or unduly affecting performance. We examine whether Blizzard is up to this task. An adjacency-list is the natural data-structure for representing graphs on memory. We first implement a persistent adjacency list data-structure by putting together already available PMDK building blocks. Persistent list structures contain neighbor lists and a hashmap structure maps a vertex's node_id to corresponding list_entry of the row. We extend the implemented graph-structure with Blizzard's crash-consistency semantics. Finally, we implement the handler for parallelizing commutative operations. The implementation only took 110 lines of C++ code, as the bulk of the building blocks were already available as open source libraries. We compare our implementation with the GraphOne (Krishnan et al., 2017), an in-memory graph processing system. GraphOne (Krishnan et al., 2017) models a graph using combination of an in-memory edge-log (memory-buffer), an in-memory versioned adjacency-list and a persistent edge-log. The incoming graph updates are first buffered in the memory-buffer before simultaneously moved/archived into the versioned, in-memory adjacency-list and the file-backed persistent edge-list. Archive happens after configurable number of edge updates and read requests on a graph are served from the in-memory adjacency-list that is up-to-date till the last archival epoch. The configuration is known as static-view with stale-reads in GraphOne terminology and trades off the read freshness for speed. We run GraphOne as a network service, using Blizzard's RPC transport. We place the GraphOne durable edge-log on a PMEM backed file-system and use static-view API with stale-reads. The archival epoch is set to be once per every \(2^{12}\) edge inserts. We use the Twitter data-set (Krishnan et al., 2017) as our streaming graph benchmark. The data-set includes a subset (up to 15M nodes, 46M edges) of Twitter social network in the form of who-follows-who. We use half of the data-set to pre-load our graph database and use the rest of the updates to form a read/write (50/50) streaming workload similar to (Krishnan et al., 2017). We Figure 8. Blizzard’s key-value map performance comparison (replicated and persistent) against similar software stacks read the out-degree for a given node, as our read operation in this workload. Figure 9 shows that the Blizzard graph representation based on a persistent adjacency list (blz adj-list serial) can handle up to 150Kops, at \(<\)38 us, all while providing strong (read your own writes) reads and full fault tolerance semantics. GraphOne only manages a peak throughput of 80Kops and takes as much as 50 us for the same workload. GraphOne integrates extra versioning techniques into its in-memory adjacency list to decouple in-memory operations from persistent write overheads. However, the software overheads introduced by these extra steps dominate when presented with PMEM based fast persistence. We disabled the persistent semantics of the GraphOne engine and ran the same experiment and the results (not shown for clarity) confirmed our reasoning. Parallelizing the adjacency-list representation by specifying commutativity rules further improves the Blizzard graph operation throughput to 291Kops, a \(\sim\)2\(\times\) improvement over a strictly serial execution schedule without significant increase in operation latency. Overall, the Blizzard replicated and fault-tolerant adjacency-list based native graph representation outperforms the GraphOne graph engine by \(\sim\)3.6\(\times\), even without exotic data-structure level optimizations. ### Lobsters Finally, we use Blizzard to implement and evaluate a persistent data storage backend for a popular web-application, Lobsters [(1)]. Lobsters is a community based news aggregation site where users vote for submitted web-links. They display the top-K voted web-links on their home page. The original site manages web-links and their vote counts using a relational DB backend [(19)]. They model durable state using article(article_id, web-link,...) and vote(article_id, vote_count) relations. An article submit inserts a new entry into article relation and an upvote/downvote updates the vote relation. The top-K voted article list is maintained as application logic. We use a persistent priority queue as the memory native data-structure to serve top-K requests. Such a data-structure is both intuitive and removes auxiliary book keeping in the form of application's runtime state as the data-structure itself maintains ordering with updates in logarithmic time. We use a max-heap to maintain the top-K voted stories and a min heap for the remaining stories. An upvote potentially causes the most voted story in the min-heap to move to the max-heap displacing the story with the minimum votes in the current top-k. A downvote can cause the opposite to happen. The hashmap maps article_ids to min/max-heap entries as the incoming vote requests are indexed using article_id. For parallelism, we use a sharding scheme with multiple min-max heap pairs and shard the story keyspace across them. Determining the top-k becomes slightly more expensive due to the need to combine the results of the union of the min-heaps. We combine k entries from each sharded priority queue to form the final result. We re-used a persistent hashmap implementation from PMDK [(2)] and implemented a persistent top-k priority queue using newly written min/max-heap code. Implementing the core persistent data-structure and operations took \(\sim\)600 lines of C++ code. We only allow update operations to different shards to commute with each other. We compare Blizzard's fault tolerant priority-queue data-structure performance against a Noria [(19)] based Lobsters backend that represents that bestcase Noria performance. The authors of Noria obtain their bestcase performance using RocksDB. We upgrade the choice of KV store to Nov-eLSM, as it is optimized for PMEM [(28)]. Both article_id to title and article_id to vote_count mappings are encoded as key-value strings. For a key-value store backend a top-K operation would be very costly as a join routine needs to be performed external to the data store (e.g., in application logic). Therefore, following the original Noria benchmarking setup [(19)] for KV-stores, we convert the top-K requests to simple reads, thus pushing the Noria-like backend to its best case performance. We run the Vote benchmark from [(19)]: a zipfan load generator that is modeled after actual Lobster website traffic. We preload the data store with 1M articles and run the experiment with 19/1, read/update traffic. The update operations consist of up-votes/down-votes of articles. During a read, the Noria based Lobster backend simply returns the article information for a given article_id, whereas the priority-queue Figure 10. Lobster vote benchmark performance numbers for Noria and Blizzard persistent priority queue based storage backends. Figure 9. Twitter benchmark against GraphOne and Blizzard’s adjacency-list based graph engines -based Blizzard backend returns the top-K articles at a given instance. Therefore a top-K read request with the Blizzard implementation on average moves K\(\times\) more data than the Noria counterpart. We use K=8 over a 4-way sharded priority queue and issue one write operation for every 20 request. Figure 10 shows that the Noria-based Lobsters backend serves up to 150Kops at \(<\)30 us. The Blizzard priority-queue based backend (blz prio. queue serial) handles peak throughput of \(\sim\)200Kops but incurs \(<\)55 us per operation due to 3-way replication. It is important to note that the Noria best case performance numbers benefit from the relatively simple read workloads (no top-K) and lack of fault tolerance (no-replication) over Blizzard. The sharded and parallelized version of the Blizzard persistent priority queue (bliz prio. queue commute) manages a maximum throughput of \(\sim\)257Kops while keeping operation latency under \(<\)45 us. Data structure sharding along with proper parallelization of incoming operations using commutativity helps Blizzard data structure to handle 70% more traffic compared to Noria, while maintaining the same intuitive memory APIs. ## 9. Related Work Over the years the systems community has worked on several key software components for PMEM. PM-aware file systems (Kal to leverage log replication and integrate correctness protocols for failure atomicity and commutativity (i.e., ordering). Using three real world application workloads with Blizzard-powered backends, we show Blizzard can realize persistence and fault tolerance with only modest software changes and with significant performance gains.
2306.10035
Generalized FDTD Scheme for Moving Electromagnetic Structures with Arbitrary Space-Time Configurations
We present a generalized FDTD scheme to simulate moving electromagnetic structures with arbitrary space-time configurations. This scheme is a local adaptation and 2+1-dimensional extension of the uniform and 1+1-dimensional scheme recently reported in [1]. The local adaptation, which is allowed by the inherently matched nature of the generalized Yee cell to the conventional Yee cell, extends the range of applicability of the scheme in [1] to moving structures that involve multiple and arbitrary velocity profiles while being fully compatible with conventional absorbing boundary conditions and standard treatments of medium dispersion. We show that a direct application of the conventional FDTD scheme predicts qualitatively correct spectral transitions but quantitatively erroneous scattering amplitudes, we infer from this observation generalized, hybrid-physical and auxiliary (non-physical) - fields that automatically satisfy moving boundary conditions in the laboratory frame, and accordingly establish local update equations based on the related Maxwell's equations and constitutive relations. We subsequently provide a detailed stability analysis with a generalization of the Courant criterion to the dynamic regime. We finally validate and illustrate the proposed method by several representative examples. The proposed scheme fills an important gap in the open literature on computational electromagnetics and offers an unprecedented, direct solution for moving structures in commercial software platforms.
Amir Bahrami, Zoé-Lise Deck-Léger, Zhiyu Li, Christophe Caloz
2023-06-08T10:02:10Z
http://arxiv.org/abs/2306.10035v2
# Generalized FDTD Scheme ###### Abstract We present a generalized FDTD scheme to simulate moving electromagnetic structures with arbitrary space-time configurations. This scheme is a local adaptation and 2+1-dimensional extension of the uniform and 1+1-dimensional scheme recently reported in [1]. The local adaptation, which is allowed by the inherently matched nature of the generalized Yee cell to the conventional Yee cell, extends the range of applicability of the scheme in [1] to moving structures that involve _multiple and arbitrary velocity profiles_ while being fully compatible with conventional absorbing boundary conditions and standard treatments of medium dispersion. We show that a direct application of the conventional FDTD scheme predicts qualitatively correct spectral transitions but quantitatively erroneous scattering amplitudes, we infer from this observation generalized, hybrid - physical and auxiliary (non-physical) - fields that automatically satisfy moving boundary conditions in the laboratory frame, and accordingly establish local update equations based on the related Maxwell's equations and constitutive relations. We finally validate and illustrate the proposed method by three canonical examples - a space-time interface, a space-time wedge and a space-time accelerated interface - whose combination represent arbitrary space-time configurations. The proposed scheme fills an important gap in the open literature on computational electromagnetics and offers an unprecedented, direct solution for moving structures in commercial software platforms. Finite-Difference Time-Domain (FDTD) method, moving electromagnetic structures, moving boundary conditions, generalized Yee cell, hybrid fields, auxiliary fields, space-time discontinuities, Generalized Space-Time Engineered-Modulation (GSTEM) metamaterials. ## I Introduction The introduction of motion into electromagnetic structures represents a fundamental extension of stationary electromagnetics. The motion may involve either moving matter, such as rotating dielectrics and accelerated charges, or moving perturbations, such as fluid and elastic waves. The structures involving the latter type of motion have been recently generalized to Space-Time Engineered-Modulation (GSTEM) metamaterials, or GSTEMs for short, which encompass a virtually unlimited diversity of space-time configurations (e.g., co- or contra-directional, single- or multiple-interface, harmonic, step or gradient, 1+1, 2+1 or 3+1 dimensional (D), uniform or accelerated, classical or quantum, etc.) [2], and hence dramatically extend the physics diversity and application potential of previous moving electromagnetic structures [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 222, 213, 214, 215, 216, 217, 218, 223, 219, 224, 225, 226, 227, 228, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 252, 259, 261, 25, 25, 256, 257, 259, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 282, 285, 286, 288, 287, 289, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 311, 320, 321, 333, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 108, 109, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 148, 149, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 170, 171, 172, 174, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 188, 189, 190, 191, 193, 195, 196, 197, 198, 200, 203, 204, 205, 206, 207, 208, 209, 210, 211, 223, 214, 215, 216, 217, 218, 229, 230, 226, 227, 228, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 281, 289, 290, 291, 292, 294, 295, 296, 297, 298, 299, 300, 312, 333, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 119, 120, 109, 121, 123, 124, 125, 126, 127, 128, 129, 130, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 145, 146, 147, 148, 149, 150, the features of arbitrary space-time configurations (Sec. VI). Finally, we close the paper with a few concluding remarks (Sec. VII). ## II Failure of the Conventional FDTD Scheme A moving structure, as shown in [2], may always be decomposed into a succession or mixture of moving discontinuities. A _moving discontinuity_ - or _space-time discontinuity_ - which may also be seen as a _moving interface between two media_, is therefore the building block of any moving structure. We shall show here that the conventional FDTD scheme fails to model such a moving discontinuity. It may _a priori_ seem, under the perspective of the space-time (or Minkowski) diagram represented in Fig. 1, that the conventional FDTD scheme, with its standard update equations and absorbing boundary conditions, could straightforwardly apply to the problem of a space-time discontinuity. In that perspective, the space-time discontinuity is indeed just a rotated, oblique version of the routine pure-space (or stationary), vertical discontinuity between two media, and it can easily be specified as such in the parametric setup of the simulation. Note that both the space step (\(\Delta z\)) and the time step (\(\Delta t\)) may have to be decreased compared to the pure-space discontinuity case for maintaining the same level of accuracy due to wavelength or/and period compression induced by Doppler or/and index contrast effects [28]. Let us test the strategy outlined in the previous paragraph. Discretizing Maxwell's equations, \[\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t}\] (1a) and \[\nabla\times\mathbf{H}=\frac{\partial\mathbf{D}}{\partial t} \tag{1b}\] with the constitutive relations \[\mathbf{D}=\epsilon\mathbf{E}\] (2a) and \[\mathbf{B}=\mu\mathbf{H} \tag{2b}\] in the usual way [35] yields, in two dimensions (\(y\) and \(z\), assuming \(\partial/\partial x=0\)) and for s-polarization2, Footnote 2: The sequel of the paper is restricted to s-polarization. The case of p-polarization can be treated in an analogous manner. \[B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^ {n-1}-\frac{\Delta t}{\Delta z}\left(E_{x}|_{k+1,i}^{n-\frac{1}{2}}-E_{x}|_{k, i}^{n-\frac{1}{2}}\right), \tag{3a}\] \[H_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=\frac{B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^ {n}}{\mu|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}},\] (3b) \[B_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=B_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^ {n-1}+\frac{\Delta t}{\Delta y}\left(E_{x}|_{k,i+1}^{n-\frac{1}{2}}-E_{x}|_{k, i}^{n-\frac{1}{2}}\right),\] (3c) \[H_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=\frac{B_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^ {n}}{\mu|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}},\] (3d) \[D_{x}|_{k,i}^{n+\frac{1}{2}}=D_{x}|_{k,i}^{n-\frac{1}{2}}+\frac {\Delta t}{\Delta y}\left(H_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}-H_{z}|_{k+ \frac{1}{2},i-\frac{1}{2}}^{n}\right)\] \[-\frac{\Delta t}{\Delta z}\left(H_{y}|_{k+\frac{1}{2},i+\frac{1}{2}} ^{n}-H_{y}|_{k-\frac{1}{2},i+\frac{1}{2}}^{n}\right) \tag{3e}\] and \[E_{x}|_{k,i}^{n+\frac{1}{2}}=\frac{D_{x}|_{k,i}^{n+\frac{1}{2}}}{\epsilon|_{k,i}^{n+\frac{1}{2}}}, \tag{3f}\] where the permittivity may be written, according to Fig. 1, as \[\epsilon|_{k,i}^{n+\frac{1}{2}}=\epsilon(k\Delta z,n\Delta t)=\begin{cases} \epsilon_{1}\ \mathrm{if}\ k\Delta z\leq(k\Delta z)_{0}+vn\Delta t,\\ \epsilon_{2}\ \mathrm{if}\ k\Delta z>(k\Delta z)_{0}+vn\Delta t,\end{cases} \tag{4}\] where \(k\), \(i\), \(n\) and \((k\Delta z)_{0}\) are the spatial index along the \(z\) direction, the spatial index along the \(y\) direction, the temporal index and the initial position of the interface on the spatial grid, respectively. Figure 2 compares results obtained by the just described approach with exact results given by analytical formulas that are provided in Appendix A [Eqs. (16)]. Figure 2(a) shows that the FDTD-computed scattered pulse waveform strongly deviates from the exact result, which points to a basic malfunction of the scheme. To check whether this discrepancy might not just be an artifact of under-sampling, Fig. 2(b) plots the scattering (reflection and transmission) coefficients for the pulse versus mesh density. It shows that the result in Fig. 2(a) had reached proper convergence and quantitatively confirms the error (\(24.98\%\) for transmission and \(50.31\%\) for reflection) that was qualitatively observed in Fig. 2(a), hence revealing a fundamental failure of the scheme to model a space-time discontinuity. Strangely, the coefficients in Fig. 2 converge to the _station Fig. 1: Naive application of the conventional FDTD scheme, with its standard update equations and absorbing boundary conditions, to model a space-time discontinuity (moving at the velocity \(v\)) between two media of different permittivities (\(\epsilon_{1}\) and \(\epsilon_{2}\)): simple rotation of the pure-space, vertical boundary into an oblique boundary in the simulation setup. ary_ instead of the space-time exact values! Why would this be the case? This question might be answered by considering that, as shown in the inset of Fig. 1, the fields common to the two media, i.e., the fields that are exactly positioned at the interface between these media and are hence the fields that are forced to be continuous there, are the tangential \(\mathbf{E}\) and \(\mathbf{H}\) fields. Since the continuity of such fields is the proper boundary condition for a stationary interface (and not to a space-time interface!) [44], the erroneous result is eventually not surprising. The scheme properly delineates the space-time discontinuity in the computational space (Fig. 1), which produces qualitatively correct spectral transformations (red-shift, wider-pulse reflection, due to contra-directional scattering and blue-shift, narrower-pulse, transmission, due to transmission from a rarer to a denser medium) [28], but, enforcing stationary (or instantaneous) instead of moving field continuity conditions, it fails to provide the correct scattering coefficients. An important conclusion to be drawn from the observations done in this section is that currently available _commercial FDTD software platforms_ are not capable of simulating space-time discontinuities, and hence electromagnetic moving media. Such tools can, of course, handle pure-space discontinuities (which correspond to the stationary limit of space-time discontinuities [2]), as they routinely do, since the related required continuity of the tangential \(\mathbf{E}\) and \(\mathbf{H}\) fields [28] is embedded in the spatial evolution of discretized Maxwell's curl equations (3a), (3c) and (3e), as shown in Appendix B. In fact, they can _also_ handle _pure-time discontinuities_ (which correspond the instantaneous limit of space-time discontinuities [2]) (e.g., [45, 46]), because the related required continuity of the tangential \(\mathbf{D}\) and \(\mathbf{B}\) fields [28] is embedded in the _temporal_ evolution of the same equations, as also shown in Appendix B. However, the discretized Maxwell's equations include no provision for satisfying the continuity conditions corresponding to space-time discontinuities beyond these two particular cases. ## III Hybrid-Field Maxwell's Equations A correct scheme for modeling space-time discontinuities must naturally enforce _the continuity of the corresponding fields_ at the interface between the two media that forms the discontinuity, consistently with the well-known moving boundary conditions [47, 48] \[\hat{\mathbf{n}}\times(\mathbf{E}_{2}^{*}-\mathbf{E}_{1}^{*}) =0, \tag{5a}\] \[\hat{\mathbf{n}}\times(\mathbf{H}_{2}^{*}-\mathbf{H}_{1}^{*}) =\mathbf{J}_{\mathrm{s}},\] (5b) \[\hat{\mathbf{n}}\cdot(\mathbf{D}_{2}-\mathbf{D}_{1}) =\rho_{\mathrm{s}},\] (5c) \[\hat{\mathbf{n}}\cdot(\mathbf{B}_{2}-\mathbf{B}_{1}) =0, \tag{5d}\] with \[\mathbf{E}^{*}=\mathbf{E}+\mathbf{v}\times\mathbf{B}\quad\text{and}\quad \mathbf{H}^{*}=\mathbf{H}-\mathbf{v}\times\mathbf{D}, \tag{6}\] where \(1\) and \(2\) label the media at the two sides the interface, \(\mathbf{J}_{\mathrm{s}}\) and \(\rho_{\mathrm{s}}\) are the usual surface current and charge densities, respectively, \(\hat{\mathbf{n}}\) is the unit vector normal the interface and pointing towards medium \(1\), and \(\mathbf{v}\) is the velocity of the interface, which is typically but not necessarily perpendicular to it [1]. Equations (5) and (6) reveal that the fields that are continuous at a (charge/current-less) space-time discontinuity are neither the \(\mathbf{E}\) and \(\mathbf{H}\) fields, nor the \(\mathbf{D}\) and \(\mathbf{B}\) fields, but the starred fields in (6). This consideration inspired us to establish in [1] a _generalized Yee cell_ with the usual, physical fields \(\mathbf{E}\) and \(\mathbf{H}\) being replaced by the _hybrid fields_\(\mathbf{E}^{*}=\mathbf{E}+\mathbf{v}\times\mathbf{B}\) and \(\mathbf{H}^{*}=\mathbf{H}-\mathbf{v}\times\mathbf{D}\), which include the auxiliary, unphysical terms \(\mathbf{v}\times\mathbf{B}\) and \(\mathbf{v}\times\mathbf{D}\) in addition to the usual, physical fields, for an automatic satisfaction of the moving boundary conditions. The generalized Yee cell corresponds then to the _hybrid-field Maxwell's equations_ obtained by inserting Eqs. (6) into Eqs. (1), viz., \[\nabla\times\mathbf{E}^{*}=-\frac{\partial\mathbf{B}}{\partial t}+\nabla \times(\mathbf{v}\times\mathbf{B}) \tag{7a}\] Fig. 2: Failure of the conventional FDTD scheme [Fig. 1 and Eqs. (3) with (4)] to model a space-time discontinuity, here for the parameters \(\epsilon_{1}=1\), \(\epsilon_{2}=4\), \(v=0.2c\) and \(\Delta t=\Delta z/(2c)\), and for the electric field Gaussian pulse excitation \(E=E_{0}e^{-(t-T_{0})^{2}/r^{2}}\) with \(T_{0}=200\Delta t\) and \(\tau=40\Delta t\). (a) Pulse evolution in space (\(z\)) and time (\(t\)). (b) Reflection (\(\Gamma\)) and transmission (\(T\)) coefficients, measured from the peaks of the scattered pulses, versus increasing mesh density, \(1/\Delta z\). and \[\nabla\times\mathbf{H}^{*}=\frac{\partial\mathbf{D}}{\partial t}-\nabla\times( \mathbf{v}\times\mathbf{D}), \tag{7b}\] and to the _hybrid-field constitutive relations_ obtained by inserting Eqs. (6) into (2), \[\mathbf{E}^{*}=\epsilon^{-1}\cdot\mathbf{D}+\mathbf{v}\times\mathbf{B}\] (8a) and \[\mathbf{H}^{*}=\mu^{-1}\cdot\mathbf{B}-\mathbf{v}\times\mathbf{D}, \tag{8b}\] which may be straightforwardly extended to bianisotropic relations3. Footnote 3: Such an extension is necessary for _moving-matter_ – as opposed to _moving-perturbation_ – structures, because matter, even when isotropic at rest, takes a particular form of anisotropy when moving, due to related magneto-electric coupling [47, 48, 49, 50]. In this paper, we restrict ourselves, for the sake of simplicity, to moving-perturbation structures. However, the proposed scheme straightforwardly applies to the case of moving-matter structures, which is just unessentially complicated by the tensorial nature of the bianisotropic parameters. ## IV Local Treatment of Moving Boundaries The FDTD scheme presented in [1] consists in first specifying the space-time constitutive parameters in the parametric setup of the simulation (as in the diagram of Fig. 1), then running the FDTD leapfrog algorithm with the aforementioned generalized Yee cell, which involves the generalized, hybrid (globally unphysical) fields \(\mathbf{E}^{*}\) and \(\mathbf{H}^{*}\), and finally computing the physical fields \(\mathbf{E}\) and \(\mathbf{H}\) by inverting Eq. (8) as \(\mathbf{E}=\epsilon^{-1}\cdot\mathbf{D}=\mathbf{E}^{*}-\mathbf{v}\times\mathbf{B}\) and \(\mathbf{H}=\mu^{-1}\cdot\mathbf{B}=\mathbf{H}^{*}+\mathbf{v}\times\mathbf{D}\). It allows to simulate _single-velocity_ space-time interface, slab, crystal and gradient structures. However, that scheme uses a generalized Yee cell that is, as the conventional Yee cell, _uniform_, i.e., that is the same across the entire computational domain, and that uniformity restricts it to moving structures that involve a unique velocity, namely the velocity \(\mathbf{v}\) in Eqs. (7) and (8). This represents a major limitation, which prevents, for instance, the simulation of moving structures involving multiple discontinuities of different velocities, such as space-time wedges [51, 52, 53] (example in Sec. VI-B with velocities \(\mathbf{v}_{\text{I}}\) and \(\mathbf{v}_{\text{II}}\)) and time-varying discontinuities, such as accelerated interfaces (example in Sec. VI-C with time-varying velocity, \(\mathbf{v}(t)\)) [33]. Moreover, the non-physicality of the hybrid fields across the entire computational domain implies a general incompatibility with standard absorbing conditions, such as Perfectly Matched Layers [43], and the impossibility to properly account for (physical) medium dispersion using standard related techniques [35]. We introduce here an alternative scheme that overcomes all of these limitations by performing a _local treatment_ of the space-time discontinuities involved in moving structures. The idea is simply to break the uniformity of the generalized Yee-cell scheme in [1] by applying it _only around the positions of the space-time discontinuities_ while using the conventional Yee scheme (its \(\mathbf{v}=0\) particular) everywhere else. This is globally illustrated in Fig. 3 for the 2+1D problem of oblique light incidence on a contra-directionally moving interface, while Fig. 4 provides related details using \(zt\)-plane (1+1D) projection of the 2+1D problem for a more general moving structure involving two interfaces having arbitrary and hence generally different velocities, with Fig. 4(a) showing the global two-interface three-media problem and Fig. 4(b) depicting the interconnections between the conventional field regions and the generalized, hybrid-field regions, corresponding to the staircase bands around the physical discontinuities, in terms of electromagnetic field samples around the interfaces. Obviously, such a treatment provides a straightforward approach to model moving structures with arbitrary multiple or/and varying velocities, as for instance the double-interface structure in Fig. 4 and as will be further illustrated in the examples of Sec. VI. The applicability of the "localization" represented in Figs. 3 and 4 is a priori not trivial. How can one be sure that spurious numerical scattering will not occur at the numerical, stair-case interfaces between the two types of Yee cells [Fig. 4(b)]? It turns out that, as shown in Appendix C [Eq. (30)], such a scattering issue does fortunately not occur, because the generalized Yee cell is inherently matched to its conventional counterpart. The only limitation of the proposed scheme is that it does not allow to model scattering at space-time point singularities between more than two media, such as the tip of the triangular region at the bottom of Fig. 4(a). At such locations, the different generalized Yee cells would indeed overlap and clash. However, that is not a major problem insofar as the space-time resolution of the simulation can always be increased to get as close as desired to the tip of the structure. The proposed localized generalized Yee-cell scheme does not only extend the uniform scheme in [1] to arbitrary space-time profiles. It also automatically offers, in contrast to the Fig. 3: Local treatment of the generalized Yee-cell scheme, overcoming the limitations of the uniform generalized Yee-cell scheme in [1], for oblique light incidence (2+1D problem) on a contra-directionally moving interface, with obliquity being represented by the oblique space-time trajectory of a light pulse (with its incident, reflected and transmitted parts). uniform scheme, immediate applicability of standard absorbing conditions, since the edges of the computational domain, assumed to not include space-time discontinuities, are then associated with the conventional Yee cell, as well as immediate applicability of standard medium dispersion modeling, since the fields in the bulk regions of the different regions are now the real, physical electromagnetic fields4. Footnote 4: The forthcoming update equations (Sec. V) do _not_ include dispersion because that would make the paper excessively long without bringing substantial benefits: standard FDTD methods to model dispersion, such as the Piecewise-Linear Recursive Convolution (PLRC) and Auxiliary Differential Equation (ADE) methods, may be straightforwardly applied in the proposed scheme. ## V Update Equations The final task is to establish the update equations corresponding to the local FDTD scheme elaborated in the previous section, specifically the generalized, hybrid-field update equations to be used in the discontinuity transition regions (the two staircase bands in the case Fig. 4). This implies the 2+1D discretization of the generalized, hybrid-field Maxwell's equations (7) and constitutive relations (8), which gives, under the assumption of s-polarization, \[\begin{split} B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=& B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n-1}-\frac{\Delta t}{\Delta z} \left(E_{x}^{*\,n-\frac{1}{2}}_{k+1,i}-E_{x}^{*\,n-\frac{1}{2}}\right)\\ &-v\Delta t\ \frac{\partial B_{y}}{\partial z}\bigg{|}_{k+\frac{1}{2},i+ \frac{1}{2}}^{n-\frac{1}{2}},\end{split} \tag{9a}\] \[\begin{split} H_{y}^{*}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}= &\frac{B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}}{\mu|_{k+\frac{1}{2 },i+\frac{1}{2}}^{n}}-vD_{x}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n},\end{split}\] (9b) \[\begin{split} B_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=& \begin{split} B_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n-1}+\frac{ \Delta t}{\Delta y}\left(E_{x}^{*\,n-\frac{1}{2}}_{k,i+1}-E_{x}^{*\,n-\frac{1} {2}}\right)\\ &+v\Delta t\ \frac{\partial B_{y}}{\partial y}\bigg{|}_{k+\frac{1}{2},i+ \frac{1}{2}}^{n-\frac{1}{2}},\end{split}\] (9c) \[\begin{split} H_{z}^{*}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}= &\begin{split} B_{z}|_{k+\frac{1}{2},i+\frac{1}{2}}^{ n}+\frac{\Delta t}{\Delta y}\left(H_{z}^{*}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}-H_{z}^{* \,n}|_{k+\frac{1}{2},i-\frac{1}{2}}^{n}\right)\\ &-\frac{\Delta t}{\Delta z}\left(H_{y}^{*}|_{k+\frac{1}{2},i+ \frac{1}{2}}^{n}-H_{y}^{*}|_{k-\frac{1}{2},i+\frac{1}{2}}^{n}\right)-v\Delta t \ \frac{\partial D_{x}}{\partial z}\bigg{|}_{k,i}^{n}\end{split} \tag{9e}\] and \[E_{x}^{*\,n+\frac{1}{2}}=\frac{D_{x}|_{k,i}^{n+\frac{1}{2}}}{\epsilon|_{k,i}^{ n+\frac{1}{2}}}-vB_{y}|_{k,i}^{n+\frac{1}{2}}, \tag{9f}\] where the hybrid (starred) fields (6) are discretized as \[E_{x}^{*}|_{k,i}^{n+\frac{1}{2}}=E_{x}|_{k,i}^{n+\frac{1}{2}}-v\frac{\left(B_{ y}|_{k-\frac{1}{2},i+\frac{1}{2}}^{n-1}+B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{ n-1}\right)}{2}, \tag{10a}\] \[\begin{split} H_{y}^{*}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=H_{y}|_{k+ \frac{1}{2},i+\frac{1}{2}}^{n}-v\frac{\left(D_{x}|_{k,i}^{n+\frac{1}{2}}+D_{x} |_{k+1,i}^{n+\frac{1}{2}}\right)}{2}\end{split} \tag{10b}\] and \[H_{z}^{*}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}=H_{z}|_{k+\frac{1}{2},i+\frac{1}{ 2}}^{n}. \tag{10c}\] In Eqs. (9), we have explicitly discretized only the velocity-independent (\(v=0\)) parts of Eqs. (7) and (8)5. The discretization of the velocity-dependent (\(v\neq 0\)) parts must be done by establishing stencils that provide numerical stability. We have found empirically, following [1], that appropriate generalized difference and average update equations are Footnote 5: The corresponding (discretized) parts of Eqs. (9) naturally reduce to the conventional update equations (3) upon explicitly setting \(v\) to zero in Eqs. (10). \[\begin{split}\frac{\partial B_{y}}{\partial z}\bigg{|}_{k+\frac{1 }{2},i+\frac{1}{2}}^{n-\frac{1}{2}}&=\frac{1+\mathrm{sgn}(v)}{2} \frac{B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n-1}-B_{y}|_{k-\frac{1}{2},i+\frac{1 }{2}}^{n-1}}{\Delta z}\\ &+\frac{1-\mathrm{sgn}(v)}{2}\frac{B_{y}|_{k+\frac{1}{2},i+\frac{1 }{2}}^{n-1}-B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n-1}}{\Delta z},\end{split} \tag{11a}\] Fig. 4: \(zt\)-plane (1+1D) projection of Fig. 3 with an additional interface for greater space-time structural generality. (a) Global view of the structure, with its two interfaces, of velocities \(v_{\mathrm{I}}\) and \(v_{\mathrm{II}}\), and three media, of permittivities \(\epsilon_{1}\), \(\epsilon_{2}\) and \(\epsilon_{3}\). (b) Interconnections between the conventional field regions and the generalized, hybrid-field regions around the physical discontinuity. \[D_{x}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n} =\frac{1+\text{sgn}(v)}{2}\frac{D_{x}|_{k+1,i}^{n-\frac{1}{2}}+D_{x} |_{k,i}^{n-\frac{1}{2}}}{2} \tag{11b}\] \[+\frac{1-\text{sgn}(v)}{2}\frac{D_{x}|_{k,i}^{n-\frac{1}{2}}+D_{x} |_{k-1,i}^{n-\frac{1}{2}}}{2},\] \[\frac{\partial B_{y}}{\partial y}\Big{|}_{k+\frac{1}{2},i+\frac{1 }{2}}^{n-\frac{1}{2}} =\frac{1+\text{sgn}(v)}{2}\frac{B_{y}|_{k+\frac{1}{2},i+\frac{1}{2 }}^{n}-B_{y}|_{k+\frac{1}{2},i-\frac{1}{2}}^{n}}{\Delta y}\] \[+\frac{1-\text{sgn}(v)}{2}\frac{B_{y}|_{k+\frac{1}{2},i+\frac{1}{ 2}}^{n-\frac{1}{2}}-B_{y}|_{k+\frac{1}{2},i+\frac{1}{2}}^{n}}{\Delta y},\] (11c) \[\frac{\partial D_{x}}{\partial z}\Big{|}_{k,i}^{n} =\frac{1+\text{sgn}(v)}{2}\frac{D_{x}|_{k,i}^{n-\frac{1}{2}}-D_{x }|_{k-1,i}^{n-\frac{1}{2}}}{\Delta z}\] (11d) \[+\frac{1-\text{sgn}(v)}{2}\frac{D_{x}|_{k+1,i}^{n-\frac{1}{2}}-D_{ x}|_{k,i}^{n-\frac{1}{2}}}{\Delta z}\] and \[\begin{split} B_{y}|_{k,i}^{n+\frac{1}{2}}&=\frac{1 +\text{sgn}(v)}{2}\frac{B_{y}|_{k-\frac{1}{2},i-\frac{1}{2}}^{n}+B_{y}|_{k- \frac{1}{2},i-\frac{1}{2}}^{n}}{2}\\ &+\frac{1-\text{sgn}(v)}{2}\frac{B_{y}|_{k+\frac{1}{2},i-\frac{1} {2}}^{n}+B_{y}|_{k+\frac{3}{2},i-\frac{1}{2}}^{n}}{2},\end{split} \tag{11e}\] where the signum function conveniently toggles between the first-row (\(v>0\)) and second-row (\(v<0\)) expressions for each equation6. According to Eqs. (11), the width of the generalized Yee cell region(s) varies between four cells or five points and five cells or six points, as illustrated in Fig. 4. That width is determined by the largest stencil in Eqs. (11), namely the stencil of Eq. (11e), which is of five points (the points \(k-3/2\), \(k-1/2\), \(k\), \(k+1/2\), \(k+3/2\)), or four cells, or six points or five cells at the half-integer time steps corresponding to a temporal variation (of one cell) of the discretized interface (see Fig. 4). Those width numbers correspond to the global form of Eqs. (11), before the assignation of the velocity; once the velocity has been assigned, the effective width is reduced to numbers corresponding to only one side of the numerical interface, i.e., only one side of the transition regions in Fig. 4, since the signum toggles in Eqs. (11) keep only one of the two rows in the right-hand side terms the equations7. Footnote 6: Also note that although they have been checked empirically – No general synthesis technique for determining stable stencils in the discretization of given partial differential equations seems to exist! – Eqs. (11) are based on stencils involving differences that are shifted to the direction of motion (sign of \(v\)), which makes intuitive sense considering “numerical advection”. Footnote 7: Thus, in the case Fig. 4, the transition region for the negative-velocity interface (zone A) may be reduced to the region at the right of the numerical interface, while the transition region for the positive-velocity interface (zone A) may be reduced to the region at the left of the numerical interface. The following further notes are in order about numerical stability: * The stencils in Eqs. (11) provide conditional stability, with the condition being that \(\Delta t/\Delta z\) ratios be taken small enough for actual stability [1]. * In the proposed local scheme (Sec. IV), the moving structure (e.g., two-interface structure in Fig. 4) is composed of two types of regions, bulk regions, described by the conventional stencil, and interface regions, described by the generalized stencil in Eqs. (11), with both types having specific stability features. * The bulk regions are naturally subject to the Courant-Friedrichs-Lewy (CFL) criterion [54]. In contrast, the interface regions might _a priori_ follow a _different_ stability criterion, since the related partial differential equations [Eqs. (7) and (8)] are different from the conventional partial differential equations [Eqs. (1) and (2)] when \(\mathbf{v}\neq 0\). * In fact, it turns out that the CFL criterion is _more restrictive_ (i.e., requires smaller \(\Delta t/\Delta z\) ratios) than the criterion associated with the generalized stencil. This is because the numerical light cone (stencil width) of the conventional Yee cell is narrower than that of the generalized Yee cell, and the level of stability is proportional to the width of the cone (maximal numerical speed and hence minimal \(\Delta t/\Delta z\)) [55]. Therefore, the stability-wise most constraining regions are the bulk regions and the corresponding CFL criterion is thus the criterion that is to be used globally in the proposed local scheme. ## VI Validation and Illustrative Examples We validate and illustrate here the proposed scheme via three canonical examples of moving-perturbation structures: a space-time interface under oblique incidence, a space-time wedge and a space-time accelerated interface. We chose these structures for the following two reasons. First, they are fundamental space-time structures, from which other structures with arbitrary space-time configurations (space-time uniform and nonuniform slabs, stacks, crystals, gradients, edges, prisms, lenses, complex media with arbitrary geometries) can be formed. Second, they admit exact (closed-form) solutions, given in the appendices, that provide ideal bench-marking for validating the proposed scheme. ### _Space-Time Interface_ Figure 5 presents the results for the space-time interface, under oblique incidence (scenario represented in Fig. 3). Figures 5(a) and (b) qualitatively show the scattering of a space-time pulse into reflected and transmitted parts, while Figs. 5(c) and (d) provide corresponding quantitative spectral information. All the simulation results perfectly agree with the exact results, given by Eq. (22)8. Note the spatial compression of the scattered pulses in Fig. 5(b) and corresponding (k) spectral expansion, which are due to contra-directional Doppler scattering for the reflected pulse and increasing-permittivity contrast for the transmitted pulse. Footnote 8: The scattering angles (\(\theta_{x,t}\)) may be found by inserting the corresponding \(k\)-values (\(k_{x,t,y,z}\)) taken at the maxima of the spectral pulses in Figs. 5(c) and (d) into Eqs. (18), and isolating the desired angle. ### _Space-Time Wedge_ Figure 6 presents the results for the space-time wedge, which consists of two interfaces of different velocities, meeting at a specific point of spacetime, and which is assumed to be normally illuminated. Figure 6(a) qualitatively shows the multiple scattering of a pulse in space and time9 on the wedge, while Fig. 6(b) provides corresponding quantitative spectral information. Again, all the simulation results perfectly agree with the exact results, given by a successive application of Eq. (22) at each of the two interfaces. Note that the pulse width within the wedge progressively decreases at each reflection event, as expected from the contra-moving nature of the corresponding scattering. Footnote 9: While Figs. 5(a) and (b) use a 2D space representation to emphasize the obliqueness of scattering, this figure, pertaining to normal incidence, uses a 1+1D space-time representation to emphasize multiple scattering in space-time. ### _Space-Time Accelerated Interface_ Figure 7 presents the results for the space-time accelerated interface, again under normal incidence. Figure 7(a) qualitatively shows the scattering of a pulse in the space-time diagram, while Fig. 7(b) provides corresponding quantitative spectral information. As in the previous two examples, all the simulation results perfectly agree with the exact results, given this time by Eqs. (38), with the results in Fig. 7(b) being obtained by numerical Fourier transformation of these equations. Note the space-time chirping effect induced by the nonuniformity (acceleration) of the interface, which is apparent in the variation of the separation between the trajectory crests (or troughs) of the scattered pulse in time and space10. Footnote 10: This chirping effect is much more pronounced in the reflected pulse than in the transmitted pulse. This is because, for the prevailing case contra-directional scattering (\(\beta=-|\beta|\)), the Doppler frequency shift is much greater in reflection [\(\omega_{\rm tr}=\omega(1+n_{1}|\beta|)/(1-n_{1}|\beta|)\)] than transmission \([\omega_{\rm tr}=\omega_{\rm t}(1+n_{1}|\beta|)/(1+n_{2}|\beta|)]\). The opposite would be true for co-directional scattering. ## VII Conclusion We have presented a generalized FDTD scheme to simulate moving electromagnetic structures with arbitrary space-time configurations and demonstrated the validity of this scheme against exact results for a few canonical moving-perturbation space-time structures. The proposed scheme fills a gap in both the open literature, whose related attempts were restricted to impenetrable, single and uniform velocity moving structures, and in commercial software capabilities, which previously included the simulation of pure-space and pure-time but not space-time structures. ## Appendix A Scattering from Moving Perturbation Interface We derive here the scattering coefficients or fields and deflection angles pertaining to plane wave and Gaussian pulse oblique incidence on a (uniformly) moving perturbation interface for bench-marking some results in Secs. II and VI. The basic formulas for the plane wave case are already given in [56], but we re-derive them here for the sake of completeness and notational convenience, along with new formulas for the pulse wave case. Under the paper's assumption of s-polarization and for the coordinate system selected in the paper (e.g., Fig. 5), the incident (i), reflected (r) and transmitted (t) electric and magnetic fields have the forms \[\mathbf{E}_{\mathrm{i}} =\hat{x}A_{\mathrm{i}}e^{i\phi_{\mathrm{i}}}, \tag{12a}\] \[\mathbf{E}_{\mathrm{r}} =\hat{x}A_{\mathrm{r}}e^{i\phi_{\mathrm{r}}},\] (12b) \[\mathbf{E}_{\mathrm{t}} =\hat{x}A_{\mathrm{t}}e^{i\phi_{\mathrm{t}}}, \tag{12c}\] and \[\mathbf{H}_{\mathrm{i}}=\left(\hat{y}\cos\theta_{\mathrm{i}}-\hat{z}\sin \theta_{\mathrm{i}}\right)\frac{A_{\mathrm{i}}}{\eta_{\mathrm{i}}}e^{i\phi_{ \mathrm{i}}}, \tag{13a}\] \[\mathbf{H}_{\mathrm{r}}=\left(-\hat{y}\cos\theta_{\mathrm{r}}-\hat{z}\sin \theta_{\mathrm{r}}\right)\frac{A_{\mathrm{r}}}{\eta_{\mathrm{i}}}e^{i\phi_{ \mathrm{r}}},\] (13b) \[\mathbf{H}_{\mathrm{t}}=\left(\hat{y}\cos\theta_{\mathrm{t}}-\hat{z}\sin \theta_{\mathrm{t}}\right)\frac{A_{\mathrm{t}}}{\eta_{\mathrm{2}}}e^{i\phi_{ \mathrm{t}}}, \tag{13c}\] where \(\eta_{1}\) and \(\eta_{2}\) are the impedances of the incidence/reflection and transmission media, respectively, \(\theta_{\mathrm{i,r,t}}\) are the angles with respect to the \(z\) axis, and where the space-time dependent phases are \[\phi_{\mathrm{i}} =k_{\mathrm{i},y}y+k_{\mathrm{i},z}z-\omega_{\mathrm{i}}t, \tag{14a}\] \[\phi_{\mathrm{r}} =k_{\mathrm{r},y}y-k_{\mathrm{r},z}z-\omega_{\mathrm{r}}t,\] (14b) \[\phi_{\mathrm{t}} =k_{\mathrm{t},y}y+k_{\mathrm{t},z}z-\omega_{\mathrm{t}}t. \tag{14c}\] The problem is greatly simplified by applying the frame hopping technique [57], i.e., by transforming the electric and magnetic fields into corresponding expressions in the comoving frame, where the interface is stationary, applying there the usual (stationary) boundary conditions, and inverse-Lorentz transforming the resulting fields. Lorentz-transforming the fields in Eqs. (12) and (13) yield [56, 57] \[\mathbf{E}_{\mathrm{i}}^{\prime} =\hat{x}\gamma A_{\mathrm{i}}(1-n_{1}\beta\cos\theta_{\mathrm{i} })e^{i\phi_{\mathrm{i}}^{\prime}}, \tag{15a}\] \[\mathbf{E}_{\mathrm{r}}^{\prime} =\hat{x}\gamma A_{\mathrm{r}}(1+n_{1}\beta\cos\theta_{\mathrm{r} })e^{i\phi_{\mathrm{r}}^{\prime}},\] (15b) \[\mathbf{E}_{\mathrm{t}}^{\prime} =\hat{x}\gamma A_{\mathrm{t}}(1-n_{2}\beta\cos\theta_{\mathrm{t} })e^{i\phi_{\mathrm{t}}^{\prime}}, \tag{15c}\] and \[\mathbf{H}_{\mathrm{i}}^{\prime} =\hat{y}\gamma\frac{A_{\mathrm{i}}}{\eta_{\mathrm{i}}}(\cos \theta_{\mathrm{i}}-n_{1}\beta)e^{i\phi_{\mathrm{i}}^{\prime}}-\hat{z}\frac{A_ {\mathrm{i}}}{\eta_{\mathrm{i}}}\sin\theta_{\mathrm{i}}e^{i\phi_{\mathrm{i}}^{ \prime}}, \tag{15d}\] \[\mathbf{H}_{\mathrm{r}}^{\prime} =-\hat{y}\gamma\frac{A_{\mathrm{r}}}{\eta_{\mathrm{i}}}(\cos \theta_{\mathrm{r}}+n_{1}\beta)e^{i\phi_{\mathrm{r}}^{\prime}}-\hat{z}\frac{A_ {\mathrm{r}}}{\eta_{\mathrm{i}}}\sin\theta_{\mathrm{r}}e^{i\phi_{\mathrm{r}}^ {\prime}},\] (15e) \[\mathbf{H}_{\mathrm{t}}^{\prime} =\hat{y}\gamma\frac{A_{\mathrm{t}}}{\eta_{\mathrm{2}}}(\cos \theta_{\mathrm{t}}-n_{2}\beta)e^{i\phi_{\mathrm{t}}^{\prime}}-\hat{z}\frac{A_ {\mathrm{t}}}{\eta_{\mathrm{2}}}\sin\theta_{\mathrm{t}}e^{i\phi_{\mathrm{t}}^ {\prime}}, \tag{15f}\] where \(\beta=v/c\) is the normalized velocity of the interface, \(\gamma=1/\sqrt{1-\beta^{2}}\) is the corresponding Lorentz factor, and the primed phase expressions are identical to those in Eqs. (14) but with primes added everywhere. Applying the stationary boundary conditions at the interface in the comoving frame, namely enforcing there the continuity of the primed tangential components of the electric and magnetic fields, and inverse-Lorentz transforming the results back to the laboratory frame leads to the reflection and transmission (or scattering) coefficients \[\Gamma=\frac{A_{\mathrm{r}}}{A_{\mathrm{i}}}=a_{\mathrm{r}}\frac{Z_{\mathrm{2} }-Z_{\mathrm{1}}}{Z_{\mathrm{2}}+Z_{\mathrm{1}}}\] (16a) and \[T=\frac{A_{\mathrm{r}}}{A_{\mathrm{i}}}=a_{\mathrm{t}}\frac{2Z_{\mathrm{2}}}{Z_ {\mathrm{2}}+Z_{\mathrm{1}}}, \tag{16b}\] where \[a_{\mathrm{r}}=\frac{1-n_{1}\beta\cos\theta_{\mathrm{i}}}{1+n_{1}\beta\cos \theta_{\mathrm{r}}}, \tag{16c}\] \[a_{\mathrm{t}}=\frac{1-n_{1}\beta\cos\theta_{\mathrm{i}}}{1-n_{2}\beta\cos \theta_{\mathrm{t}}},\] (16d) \[Z_{\mathrm{1}}=\frac{1-n_{1}\beta\cos\theta_{\mathrm{i}}}{\cos\theta_{\mathrm{i} }-n_{1}\beta}\eta_{\mathrm{1}}\] (16e) and \[Z_{\mathrm{2}}=\frac{1-n_{2}\beta\cos\theta_{\mathrm{t}}}{\cos\theta_{\mathrm{t} }-n_{2}\beta}\eta_{\mathrm{2}}, \tag{16f}\] and to the scattered temporal and spatial frequencies \[\omega_{\rm r}=\frac{1-n_{1}\beta\cos\theta_{\rm i}}{1+n_{1}\beta\cos\theta_{\rm r} }\omega_{\rm i}, \tag{17a}\] \[\omega_{\rm t}=\frac{1-n_{1}\beta\cos\theta_{\rm i}}{1-n_{2}\beta\cos\theta_{\rm t }}\omega_{\rm i}, \tag{17b}\] and \[k_{{\rm r},z}=\frac{1/n_{1}-\beta\cos\theta_{\rm i}}{1/n_{1}+\beta\cos\theta_{ \rm r}}k_{{\rm i},z},\quad k_{{\rm r},y}=k_{{\rm i},y}, \tag{18a}\] \[k_{{\rm t},z}=\frac{1/n_{1}-\beta\cos\theta_{\rm i}}{1/n_{2}-\beta\cos\theta_{ \rm t}}k_{{\rm i},z},\quad k_{{\rm t},y}=k_{{\rm i},y}, \tag{18b}\] corresponding to reflection and transmission deflection (or scattered) angles \[\cos\theta_{\rm r}=\frac{(n_{1}^{2}\beta^{2}+1)\cos\theta_{\rm i}-2n_{1}\beta }{(n_{1}\beta-\cos\theta_{\rm i})^{2}+\sin^{2}\theta_{\rm i}}\] (19a) and \[\cos\theta_{\rm t}=\frac{n_{1}^{2}\beta\sin^{2}\theta_{\rm i}}{n_ {2}(n_{1}\beta-\cos\theta_{\rm i})^{2}+n_{2}\sin^{2}\theta_{\rm i}}+\] \[\frac{(1+n_{1}\beta\cos\theta_{\rm i})\sqrt{n_{2}^{2}(n_{1}\beta- \cos\theta_{\rm i})^{2}+(n_{2}^{2}-n_{1}^{2})\sin^{2}\theta_{\rm i}}}{n_{2}(n _{1}\beta-\cos\theta_{\rm i})^{2}+n_{2}\sin^{2}\theta_{\rm i}}. \tag{19b}\] We next wish to determine how the interface a Gaussian pulsed version of the plane wave in (12a), namely \[E_{\rm i}(y,z)=\hat{x}A_{\rm i}e^{ik_{{\rm i},y}y+ik_{{\rm i},z}z}e^{-(y/\sigma _{y})^{2}}e^{-(z/\sigma_{z})^{2}}. \tag{20}\] which is illustrated in Fig. 5. Since the oscillatory part of this pulse is just the plane wave in (12a), the corresponding scattered angles are also given by Eqs. (19). In contrast, the corresponding scattering field magnitudes are given by \[E_{{\rm r},{\rm t}}(y,a_{{\rm r},t}z)=\hat{x}\{\bar{\Gamma}(z),\bar{T}(z)\}*E _{\rm i}(y,z), \tag{21}\] where \(\bar{\Gamma}(z)\) and \(\bar{T}(z)\) are impulse responses (the medium is linear time invariant in terms of space at any given time) whose \(z\) dependence are related to the arrival and scattering timing of the pulse on the interface. The functions \(\hat{\Gamma}(z)\) and \(\bar{T}(z)\) are not known a priori. However, Fourier-transforming Eq. (21) yields \[\tilde{E}_{{\rm r},{\rm t}}(k_{y},a_{{\rm r},t}k_{z})=\hat{x}\{\Gamma(k_{z}), T(k_{z})\}\tilde{E}_{\rm i}(k_{y},k_{z})/a_{{\rm r},{\rm t}}, \tag{22}\] where \[\tilde{E}_{\rm i}(k_{y},k_{z})=\hat{x}\frac{\sigma_{y}\sigma_{z}}{16\pi}A_{\rm i }e^{-(\sigma_{y}(k_{y}-k_{{\rm i},y}))^{2}/4}e^{-(\sigma_{z}(k_{z}-k_{{\rm i}, z}))^{2}/4} \tag{23}\] is the Fourier transform of Eq. (20) and where the transfer functions \(\Gamma(k_{z})\) and \(T(k_{z})\) are given by Eqs. (16a) with \(k_{z}=k_{{\rm r},z}=n_{1}k_{0}\cos\theta_{\rm r}\) and (16b) with \(k_{z}=k_{{\rm t},z}=n_{2}k_{0}\cos\theta_{\rm t}\). The scattered field is then obtained by inverse Fourier-transforming this relation. ## Appendix B Natural Enforcement of Pure-Space and Pure-Time Boundary Conditions in Conventional FDTD We show here that the conventional FDTD scheme [35] automatically enforces the continuity of the (tangential) \({\bf E}\) and \({\bf H}\) fields at a pure-space (or stationary) interface, as well-known from routine FDTD simulations, and the continuity of the \({\bf D}\) and \({\bf B}\) fields at a pure-time (or instantaneous) interface [28, 58], which has been much less studied and is hence much less known, as claimed in Sec. II. Figure 8 shows the two interfaces in the conventional, staggered Yee-cell FDTD computational grid for the case of an interface between two different dielectric media (of permittivities \(\epsilon_{1}\) and \(\epsilon_{2}\)), with Figs. 8(a) and 8(b) respectively pertaining to the pure-space and pure-time cases. Notice that in the present case of a dielectric discontinuity, the interfaces are positioned across electric (\({\bf E}\),\({\bf D}\) and \(\epsilon\) space integer and time half-integer) grid points, whereas in the case of a magnetic discontinuity (between media of permeabilities \(\mu_{1}\) and \(\mu_{2}\)), they would be positioned across magnetic (\({\bf H}\), \({\bf B}\) and \(\mu\) space half-integer and time integer) grid points. In the case of the pure-space discontinuity [Fig. 8(a)], we shall restrict our attention, without loss of generality, to the case of normal incidence (\(z\)-direction), whose spatial evolution for a dielectric discontinuity involves the fields \(E_{x}\) and \(B_{y}\) according to Eqs. (1a) and (2a) and corresponds to the update equation (3a). The red and blue squares in the inset of the figure highlight the fields involved in that equation at two successive spatial iterations and show that the \(E_{x}\) field sample at the interface is common to the two iterations, which demonstrates the continuity of (tangential) \({\bf E}\). The case of the pure-time discontinuity [Fig. 8(b)] whose temporal evolution for a dielectric discontinuity involves the fields \({\bf D}\) and \({\bf H}\) according to Eqs. (1b) and again (2a) corresponds to the update equation (3e). The red and blue squares in the inset of the figure highlight the fields involved in that equation at two successive temporal iterations and show that the \({\bf D}\) field sample at the interface is common to the two iterations, which demonstrates the continuity of \({\bf D}\). An analogous argument with Eq. (3a) replacing Eq. (3e) would demonstrate the continuity of \({\bf B}\). ## Appendix C Inherent Matching of the Generalized Yee cell with its Conventional Counterpart We show here that, as announced in Sec. IV, the generalized Yee cell is inherently matched with the conventional Yee cell, which ensures the applicability of the localized scheme. This demonstration is most easily done in the spectral domain. Fourier-transforming the generalized, hybrid Maxwell's equations (7) and corresponding constitutive relations (8) yields, for s-polarization, \[\omega\tilde{B}_{y}=k_{z}\tilde{E}_{x}^{*}+k_{z}v\tilde{B}_{y}, \tag{24a}\] \[\omega\tilde{B}_{z}=-k_{y}\tilde{E}_{x}^{*}-k_{y}v\tilde{B}_{y}, \tag{24b}\] \[\omega\tilde{D}_{x}=k_{z}\tilde{H}_{y}^{*}-k_{y}\tilde{H}_{z}^{*}+k_{z}v\tilde{D} _{x}, \tag{24c}\] and \[\tilde{B}_{y}=\mu\tilde{H}_{y}^{*}+\mu v\tilde{D}_{x}, \tag{25a}\] \[\tilde{D}_{x}=\epsilon\tilde{E}_{y}^{*}+\mu v\tilde{B}_{y}. \tag{25b}\] The related impedance may be found first eliminating \(\tilde{D}_{x}\) upon inserting Eq. (25b) into Eq. (24c), which gives \[(\omega-k_{z}v)(\epsilon\tilde{E}_{y}^{*}+\mu v\tilde{B}_{y})=k_{z}\tilde{H}_{ y}^{*}-k_{y}\tilde{H}_{z}^{*}, \tag{26}\] and further eliminating \(\tilde{B}_{y}\) upon substituting \(\tilde{B}_{y}\) from Eq. (24a) into that relation [Eq. (26)], which leads to \[\omega\epsilon\tilde{E}_{x}^{*}=k_{z}\tilde{H}_{y}^{*}-k_{y}\tilde{H}_{z}^{*}. \tag{27}\] Dividing then both sides of this equation by \(E_{x}^{*}\) yields then the following expression involving the ratios of the electric field to the magnetic fields: \[\omega\epsilon=k_{z}\frac{\tilde{H}_{y}^{*}}{\tilde{E}_{x}^{*}}-k_{y}\frac{ \tilde{H}_{z}^{*}}{\tilde{E}_{x}^{*}}. \tag{28}\] Finally, substituting \(k_{z}=nk_{0}\cos\theta\), \(k_{y}=nk_{0}\sin\theta\), \(\tilde{H}_{y}^{*}=\tilde{H}_{0}^{*}\cos\theta\) and \(\tilde{H}_{z}^{*}=-\tilde{H}_{0}^{*}\sin\theta\), with \(\theta\) being the direction of wave propagation in the \(yz\)-plane, in this equation results into \[\omega\epsilon=\left(\frac{\cos^{2}\theta\tilde{H}_{0}^{*}}{\tilde{E}_{x}^{*} }+\frac{\sin^{2}\theta\tilde{H}_{0}^{*}}{\tilde{E}_{x}^{*}}\right)nk_{0}=\frac {nk_{0}\tilde{H}_{0}^{*}}{\tilde{E}_{x}^{*}}=\frac{nk_{0}}{\eta^{*}}. \tag{29}\] This relation may be rewritten in terms of the impedance as \[\eta^{*}=\frac{\tilde{E}_{x}^{*}}{\tilde{H}_{0}^{*}}=\frac{nk_{0}}{\omega \epsilon}=\frac{\tilde{E}_{x}}{\tilde{H}_{0}}=\eta, \tag{30}\] revealing that the generalized Yee cell has the same impedance as its conventional counterpart, and is hence impedance-matched to it, which ensure that no spurious numerical scattering occurs in the transition regions [two edges of the staircase band in Fig. 4(b)] between the two types of Yee cells. The generalized Yee cell should also be velocity-matched to the conventional Yee cell for it would not the velocity would abruptly change in the transition regions [two edges of the staircase band in Fig. 4(b)], which would induce spurious numerical frequency transitions. Such velocity matching is also true. This is immediately seen upon noting that the \(\mathbf{E}^{*}\) and \(\mathbf{H}^{*}\) fields [Eqs. (6)] have the same phase as their physical counterparts, since the \(\mathbf{E}\) and \(\mathbf{B}\) fields and the \(\mathbf{H}\) and \(\mathbf{D}\) fields have the same phase and \(\mathbf{v}\) is real, and have hence also the same phase velocity (or refractive index). Figure 9 provides a numerical verification and illustration of the double impedance- and velocity-matching of the generalized Yee cell with its conventional counterpart by simulating an imaginary interface, with immaterial, testing Yee cell transition, between two identical media. As expected, neither scattering nor frequency transformation occur in that simulation. ## Appendix D Scattering from an Accelerated Perturbation Interface We derive here expressions for the fields scattered by an accelerated perturbation interface for bench-marking the numerical results in Sec. VI-C. The interface is assumed to have constant proper (or comoving-frame) acceleration, \(a^{\prime}\), which corresponds to the Rindler space-time metric [59] via Einstein's gravity-acceleration equivalence principle [60]. The incident (i), reflected (r) and transmitted (t) electric and magnetic fields have the general forms \[E_{i,x}=A_{i}f(k_{i}z-\omega_{i}t), \tag{31a}\] \[E_{r,x}=A_{r}f(k_{r}z+\omega_{i}t),\] (31b) \[E_{t,x}=A_{t}f(k_{t}z-\omega_{i}t), \tag{31c}\] Fig. 8: Interface between two different dielectric media (of permittivities \(\epsilon_{1}\) and \(\epsilon_{2}\)) in the conventional Yee-cell FDTD grid for (a) a pure-space (or stationary) discontinuity, with continuity of the (tangential) \(\mathbf{E}\) field, and (b) a pure-time (or instantaneous) discontinuity, with continuity of the \(\mathbf{D}\) field. and \[H_{\mathrm{i},y}=\frac{A_{\mathrm{i}}}{\eta_{1}}f(k_{\mathrm{i}}z-\omega_{ \mathrm{i}}t), \tag{32a}\] \[H_{\mathrm{r},y}=-\frac{A_{\mathrm{r}}}{\eta_{1}}f(k_{\mathrm{r}}z+\omega_{ \mathrm{r}}t),\] (32b) \[H_{\mathrm{t},y}=\frac{A_{\mathrm{t}}}{\eta_{2}}f(k_{\mathrm{t}}z-\omega_{ \mathrm{t}}t), \tag{32c}\] where \(f(\cdot)\) represents an arbitrary waveform profile. We shall use again the frame-hopping strategy [57], i.e., transform the fields (31) and (32) into their comoving-frame counterparts, apply stationary boundary conditions in the moving frame, and transform the resulting complete fields back to the laboratory frame. The corresponding frame transformations are generally given by the tensorial coordinate transformations formulas [61] \[F_{\mu^{\prime}\nu^{\prime}}=\frac{\partial x^{\rho}}{\partial x^{\mu^{\prime }}}\frac{\partial x^{\sigma}}{\partial x^{\nu^{\prime}}}F_{\rho\sigma}\] (33a) and \[\left|\text{det}\left(\frac{\partial x^{\mu^{\prime}}}{\partial x^{\nu^{\prime }}}\right)\right|G^{\mu^{\prime}\nu^{\prime}}=\frac{\partial x^{\mu^{\prime }}}{\partial x^{\rho}}\frac{\partial x^{\nu^{\prime}}}{\partial x^{\sigma}}G^ {\rho\sigma}, \tag{33b}\] where, in the case of electromagnetics, \(F_{\mu\nu}\) and \(G^{\mu\nu}\) are the Faraday tensor and its dual [57] \[F_{\mu\nu}=\begin{bmatrix}0&E_{x}&E_{y}&E_{z}\\ -E_{x}&0&-cB_{z}&cB_{y}\\ -E_{y}&cB_{z}&0&-cB_{x}\\ -E_{z}&-cB_{y}&cB_{x}&0\end{bmatrix}\] (34a) and \[G^{\mu\nu}=\begin{bmatrix}0&-cD_{x}&-cD_{y}&-cD_{z}\\ cD_{x}&0&H_{z}&-H_{y}\\ cD_{y}&-H_{z}&0&H_{x}\\ cD_{z}&H_{y}&-H_{x}&0\end{bmatrix}. \tag{34b}\] In addition to the electromagnetic fields in Eqs. (34), the tensorial coordinate transformations in Eqs. (33) also require the space-time variable Rindler transformations [59], or their more general Kottler-Moller version [62] for accommodating nonzero initial velocities, viz., \[ct=\frac{c^{2}}{a^{\prime}}\sqrt{g_{00}}\sinh(\xi+\xi_{0})-\frac{c^{2}}{a^{ \prime}}\sinh(\xi_{0}), \tag{35a}\] \[x=x^{\prime},\] (35b) \[y=y^{\prime},\] (35c) \[z=\frac{c^{2}}{a^{\prime}}\sqrt{g_{00}}\cosh(\xi+\xi_{0})-\frac{c^{2}}{a^{ \prime}}\cosh(\xi_{0}), \tag{35d}\] where \[\xi=a^{\prime}t^{\prime}/c \tag{35e}\] and \[\xi_{0}=\sinh^{-1}(\beta_{0}\gamma_{0}), \tag{35f}\] with \(\beta_{0}\) and \(\gamma_{0}\) being the initial relative velocity and Lorentz factor, respectively, and \[g_{00}=\left[1+(a^{\prime}z^{\prime}/c^{2})\right]^{2}, \tag{35g}\] and \(g_{00}\) being the \(00\)-term of the metric \(g_{\mu^{\prime}\nu^{\prime}}=\mathrm{diag}(g_{00},1,1,1)\) Substituting Eqs. (31) and (32) into Eqs. (34) as well as Eqs. (35) into Eqs. (33) yields the transformed-field expressions \[E^{\prime}_{\mathrm{i},x} =A_{\mathrm{i}}\sqrt{g_{00}}\left[\cosh(\xi+\xi_{0})-n_{1}\sinh( \xi+\xi_{0})\right]f(\phi^{\prime}_{\mathrm{i}}), \tag{36a}\] \[E^{\prime}_{\mathrm{r},x} =A_{\mathrm{r}}\sqrt{g_{00}}\left[\cosh(\xi+\xi_{0})+n_{1}\sinh( \xi+\xi_{0})\right]f(\phi^{\prime}_{\mathrm{r}}),\] (36b) \[E^{\prime}_{\mathrm{t},x} =A_{\mathrm{t}}\sqrt{g_{00}}\left[\cosh(\xi+\xi_{0})-n_{2}\sinh( \xi+\xi_{0})\right]f(\phi^{\prime}_{\mathrm{t}}) \tag{36c}\] and \[H^{\prime}_{\mathrm{i},y} =\frac{A_{\mathrm{i}}}{\eta_{1}}\sqrt{g_{00}}\left[\cosh(\xi+\xi_ {0})-n_{1}\sinh(\xi+\xi_{0})\right]f(\phi^{\prime}_{\mathrm{i}}), \tag{36d}\] \[H^{\prime}_{\mathrm{r},y} =\frac{A_{\mathrm{r}}}{\eta_{1}}\sqrt{g_{00}}\left[\cosh(\xi+\xi_ {0})+n_{1}\sinh(\xi+\xi_{0})\right]f(\phi^{\prime}_{\mathrm{r}}),\] (36e) \[H^{\prime}_{\mathrm{t},y} =\frac{A_{\mathrm{t}}}{\eta_{2}}\sqrt{g_{00}}\left[\cosh(\xi+\xi_ {0})-n_{2}\sinh(\xi+\xi_{0})\right]f(\phi^{\prime}_{\mathrm{t}}), \tag{36f}\] where \[\phi^{\prime}_{\mathrm{i}}= k_{\mathrm{i}}\left[\frac{c^{2}}{a^{\prime}}\sqrt{g_{00}}\cosh(\xi+ \xi_{0})-\frac{c^{2}}{a^{\prime}}\cosh(\xi_{0})\right] \tag{37a}\] \[-\omega_{\mathrm{i}}\left[\frac{c}{a^{\prime}}\sqrt{g_{00}}\sinh( \xi+\xi_{0})-\frac{c}{a^{\prime}}\sinh(\xi_{0})\right],\] \[\phi^{\prime}_{\mathrm{r}}= k_{\mathrm{r}}\left[\frac{c^{2}}{a^{\prime}}\sqrt{g_{00}}\cosh( \xi+\xi_{0})-\frac{c^{2}}{a^{\prime}}\cosh(\xi_{0})\right]\] \[+\omega_{\mathrm{r}}\left[\frac{c}{a^{\prime}}\sqrt{g_{00}}\sinh( \xi+\xi_{0})-\frac{c}{a^{\prime}}\sinh(\xi_{0})\right],\] Fig. 9: Verification and illustration of the inherent impedance- and velocity-matching of the generalized Yee cell with the conventional Yee cell for a moving interface (\(v=-0.3c\)) between two identical media (\(\epsilon=1.5\)), showing temporal snapshots where the wave is (a) in the incident conventional Yee cell region, (b) across the input edge of the conventional to generalized Yee cell regions [left edge of the staircase band in Fig. 4(b)], (c) across the output edge of that region [right edge of the staircase band in Fig. 4(b)], and (d) in the output conventional Yee cell region. \[\begin{split}\phi_{\text{t}}^{\prime}=& k_{\text{t}}\left[\frac{c^{2}}{a^{\prime} \sqrt{g_{00}}}\cosh(\xi+\xi_{0})-\frac{c^{2}}{a^{\prime}}\cosh(\xi_{0})\right]\\ &-\omega_{\text{t}}\left[\frac{c}{a^{\prime}}\sqrt{g_{00}}\sinh( \xi+\xi_{0})-\frac{c}{a^{\prime}}\sinh(\xi_{0})\right].\end{split} \tag{37c}\] Enforcing the continuity of the tangential electric and magnetic fields across the comoving stationary interface and inverse-Rindler transforming the resulting expressions using (33) leads finally to the electric scattered fields \[\begin{split} E_{\text{r}}=&\frac{\eta_{2}-\eta_{1} }{\eta_{2}+\eta_{1}}\frac{1-n_{1}\tanh(\xi+\xi_{0})}{1+n_{1}\tanh(\xi+\xi_{0}) }\,A_{\text{i}}\\ &\cdot f\left[\frac{1-n_{1}\tanh(\xi+\xi_{0})}{1+n_{1}\tanh(\xi+ \xi_{0})}(k_{\text{i}}z+\omega_{\text{t}}t)\right]\end{split}\] (38a) and \[\begin{split} E_{\text{t}}=&\frac{2\eta_{2}}{ \eta_{2}+\eta_{1}}\frac{1-n_{1}\tanh(\xi+\xi_{0})}{1-n_{2}\tanh(\xi+\xi_{0}) }\,A_{\text{i}}\\ &\cdot f\left[\frac{1-n_{1}\tanh(\xi+\xi_{0})}{1-n_{2}\tanh(\xi+ \xi_{0})}(k_{\text{i}}z+\omega_{\text{t}}t)\right],\end{split} \tag{38b}\] where \(\xi=\xi(t^{\prime})\) and \(\xi_{0}\) were given by Eqs. (35e) and (35f). Note that these relations are expressed here in terms of a combination of comoving- and laboratory-frame quantities for the sake of compactness; the complete laboratory-frame quantities are obtained by substitution of the inverse relations of Eqs. (35a) and (35d).
2303.15793
Black holes as spherically-symmetric horizon-bound objects
Working in a semi-classical setting, we consider solutions of the Einstein equations that exhibit light trapping in finite time according to distant observers. In spherical symmetry, we construct near-horizon quantities from the assumption of regularity of the renormalized expectation value of the energy-momentum tensor, and derive explicit coordinate transformations in the near-horizon region. We examine the boundary conditions appropriate for embedding the model into a cosmological background, describe their evaporation in the linear regime and highlight the observational consequences, while also discussing the implications for the laws of black hole mechanics.
Pravin K. Dahal, Fil Simovic, Ioannis Soranidis, Daniel R. Terno
2023-03-28T07:58:40Z
http://arxiv.org/abs/2303.15793v2
# Physical black holes in cosmological spacetimes ###### Abstract Working in the semi-classical setting, we present an exactly solvable candidate model for astrophysical black holes, which can be embedded in a cosmological background and possess regular apparent horizons that form in finite observational time. We construct near-horizon quantities from the assumption of regularity of the renormalized expectation value of the energy-momentum tensor, and derive explicit coordinate transformations in the near-horizon region. We discuss the appropriate boundary conditions for the embedding of the model into an FRWL background, describe their evaporation in the linear regime, and highlight consequences for the laws of black hole mechanics when back-reaction is present. ## I Introduction Dozens of astrophysical black holes (ABHs) -- dark, massive, ultra-compact objects -- exist in the observable Universe. They range in appearance from the components of binary systems with the mass of a few suns, to the supermassive cores of quasars in the centers of galaxies. ABH candidates are identified and characterized via gravitational wave interferometry (the LIGO/Virgo collaboration [1]), and through electromagnetic observations [2], including very long baseline interferometry in the microwave (the Event Horizon Telescope [3]), x-ray spectroscopy (using the K\(\alpha\) line of iron [4; 5]), and more. As the existence of ABHs is now established beyond reasonable doubt, the question of their physical nature [6; 7; 8] becomes especially relevant. Broadly speaking there are two competing views. The first is that mathematical black holes (MBHs), which are possibly the most dramatic prediction of general relativity and embody our traditional notions of black holes [9; 10; 11; 12; 13; 14], are suitable for describing ABHs. Their defining feature is the event horizon, a null surface that causally disconnects the black hole interior from the outside world. For the Schwarzschild black hole it is located at the gravitational radius \(r_{g}=2GM/c^{2}\). The MBH paradigm explains a staggering variety of astrophysical phenomena and successfully models ABH properties across all currently accessible time and length scales [6; 7; 15]. However, identifying ABHs with MBHs comes with a conceptual price. The exteriors of Schwarzschild or Kerr MBHs are regular, but their interiors are not. They contain Cauchy horizons and singularities, with the most dramatic pathologies occurring at the curvature singularity of the Schwarzschild solution at \(r=0\). Such pathologies are expected to be resolved by a presently unknown quantum theory of gravity, but the known quantum effects are responsible for a host of technical difficulties and unresolved paradoxes [16; 12; 17; 18; 19]. The second view thus postulates the existence of some black hole mimickers that fit the observed data (and are thus sufficiently close to the MBH solutions of general relativity), but are pathology-free. A variety of models [6; 7; 20] designated as horizonless exotic compact objects (ECOs) appear to provide an alternative explanation of the observed ABHs, at the price of modifying known physics and/or the introduction of some exotic quantum matter. This conceptual dichotomy is somewhat blurred [21], especially if we take into account the following. On the one hand, Schwarzschild or Kerr geometries are the asymptotic states of a classical gravitational collapse. According to a distant observer (who we refer to as Bob) once the stellar remnant cannot be supported by degeneracy pressure, it turns into a frozen dark star of radius \(r\approx r_{g}\) within a few light crossing times \(t\sim r_{g}/c\). However, the event horizon is in principle an unobservable teleological entity [22; 23; 24], and quantum effects may prevent it from forming at all [25; 26]. Both numerical and observational studies thus focus on other characteristics of black holes [14; 27]. This is the rationale behind ECOs -- they are designed to closely mimic a MBH without forming an event horizon [6]. However, this mimicking requires violating one or more of the natural assumptions entering Buchdahl's theorem [6; 28]. A direct or indirect result of these violations is the existence of non-classical matter, whose energy-momentum tensor (EMT) \(T_{\mu\nu}\) violates at least the null energy condition (NEC), which states \(T_{\mu\nu}k^{\mu}k^{\nu}\geqslant 0\) for all null vectors \(k\)[11; 13; 29]. On the other hand, the existence of Hawking radiation leads to a large but finite lifetime for black holes and itself violates the NEC in the vicinity of the apparent horizon [12]. This motivates the introduction of another class of singularity-free objects, regular black holes (RBHs), which represent domains of spacetime that enable temporary but prolonged trapping of light [30; 31; 32; 33]. The trapping of light underpins our notions of what physically constitutes a black hole [34] and we use it as its defining feature [35; 36]. It is useful to introduce a suitable (not uniquely defined) parameter \(\epsilon\) that characterizes how close a proposed ultra-compact object is to its Schwarzschild or Kerr counterpart [20]. Classical MBHs correspond to the asymptotic scenario in which \(\epsilon\to 0\) as \(t\rightarrow\infty\), where \(t\) is the time measured by Bob. Various ECOs correspond to an \(\epsilon>0\) that is reached at finite \(t\) or approached asymptotically. On the other hand, an evaporating RBH is an example of a physical black hole (i.e. trapped spacetime domain) that forms at some finite time of Bob, with \(\epsilon=0\) for some \(t<\infty\). We use the label "physical black hole" (PBH) in this latter sense. A PBH may or may not have an event horizon or singularity [36]. As a result there is a need to distinguish between three candidates for the true identity of astrophysical black holes -- MBHs, ECOs, and PBHs [8; 36]. To uncover the true nature of ABHs it is necessary to compare the properties of PBHs with those of the conventional semiclassical black holes, and identify the potential for extracting observational signatures. This comparison, however, cannot take place in an asymptotically flat spacetime to which the standard MBH solutions belong. To within sub-percent precision, the Universe is described at cosmological scales by the perturbed spatially flat Freidmann-Robertson-Lemaitre-Walker (FRLW) metric [37; 38]. The Kerr solution is asymptotically flat and is thus necessarily provisional, even if the issues surrounding singularities and event horizons are resolved. Beyond time and length scales that are small relative to the reciprocal Hubble parameter, it can only be treated as an approximation to a more general solution [15]. In a separate but related development, activity over the last two decades has led to a renewed interest in mathematical models of inhomogeneities in the cosmological background, which straddle the cosmological and black hole scales [39]. In this work, we take steps towards developing a complete framework for describing astrophysical black holes. We present an exactly solvable model for an evaporating PBH, based on a small number of operationally natural assumptions. We demonstrate a general procedure for describing these PBHs as inhomogeneities in the FRLW background, and provide details of their embedding in a spatially flat asymptotically de Sitter spacetime. Since a majority of the results on cosmological black holes [13; 39] and concrete results on PBHs [36] are obtained in spherical symmetry, we work in this simplifying setting. The rest of this paper is organized as follows. In Sec. IIA we review the main aspects of the formalism used to construct the PBH model. In Sec. IIB we demonstrate explicit coordinate transformations between the two systems best adapted to evaporating BH models -- \((t,r)\) and \((v,r)\) coordinates -- and derive general relations between the leading contributions to near-horizon quantities. In Sec. III, we present exact solutions for the case of linear evaporation, and show that a linear evaporation law in one coordinate system necessarily implies linear evaporation in the other. In Sec. IV, we show that the PBH metric can be consistently embedded in an FRWL cosmology, and propose a representative compactification of the resulting spacetime. We conclude in Sec V with a summary of our results, their implications, and directions for future work. Throughout, we work in units where \(\hbar=c=G=1\). ## II Physical black holes ### General set-up and admissible solutions The self-consistent approach [36] is based on semiclassical gravity [40]. The spacetime geometry is described by a metric \(g_{\mu\nu}\), and the notion of test particles' trajectories, horizons, etc. are assumed to be well-defined. The metric itself is a solution of the Einstein equations, which may include higher-order curvature terms and a cosmological constant. Their source is the energy-momentum tensor \(T_{\mu\nu}\), which is a renormalized expectation value of some EMT operator in some unspecified state. We do not make any assumption about the nature of matter fields or their quantum states, and do not separate the background (cosmological and/or collapsing matter) from the generated quantum excitations. The goal is to infer as much information as possible about the EMT and the metric in the vicinity of the apparent horizon simply from its existence. Thus in practice we analyse the behaviour of solutions to \[R_{\mu\nu}-\tfrac{1}{2}g_{\mu\nu}R=8\pi T_{\mu\nu}\;, \tag{1}\] where \(R_{\mu\nu}\) and \(R\) are the Ricci tensor and scalar, respectively, and the right hand includes some or all of the described above components. A general spherically symmetric metric in Schwarzschild coordinates [10; 13] is given by \[ds^{2}=-e^{2h(t,r)}f(t,r)dt^{2}+f(t,r)^{-1}dr^{2}+r^{2}d\Omega_{2}\;, \tag{2}\] while using the advanced null coordinate \(v\) results in the form \[ds^{2}=-e^{2h_{+}(v,r)}f_{+}(v,r)dv^{2}+2e^{h_{+}(v,r)}dvdr+r^{2}d\Omega_{2}\;. \tag{3}\] The function \(f\) is coordinate-independent, i.e. \(f(t,r)\equiv f_{+}\big{(}v(t,r),r\big{)}\) and in what follows we omit the subscript. It is conveniently represented via the Misner-Sharp-Hernandez (MSH) mass \(M\equiv C/2\) as \[f=1-\frac{C(t,r)}{r}=1-\frac{C_{+}(v,r)}{r}=\partial_{\mu}r\partial^{\mu}r\;, \tag{4}\] where the coordinate \(r\) is the areal radius [13]. The functions \(h\) and \(h_{+}\) play the role of integrating factors in the coordinate transformation \[dt=e^{-h}(e^{h_{+}}dv-f^{-1}dr)\;. \tag{5}\] In an asymptotically flat spacetime, \(h\to 0\) and \(f\to 1\) as \(r\to 0\), and \(t\) is the physical time of a stationary observer Bob at spacelike infinity \(i^{0}\). For example, the Schwarzschild metric corresponds to \(h\equiv 0\), \(M\equiv C/2={\rm const.}\), and \(v=t+r_{*}\), where \(r_{*}\) is the tortoise coordinate [10; 11]. A description in terms of the retarded null coordinate \(u=t-r_{*}\) and its properties are described in Appendix B. A PBH is a trapped region; a domain where both ingoing and outgoing future-directed null geodesics emanating from a spacelike two-dimensional surface with spherical topology have negative expansion [11; 13; 41]. The apparent horizon is the boundary of this trapped region. In a cosmological setting, we assume that a separation of scales exists between geometric features associated with the black hole and those of the large-scale universe. In this case, the apparent horizon is given by the outermost real root of \(f(t,r)=0\) in the near-region, while the cosmological horizon is the innermost real root in the asymptotic region (the detailed summary of various definitions can be found in Refs. [13; 36; 41]. The Schwarzschild radius \(r_{\text{g}}\) then represents the location of this apparent horizon. Invariance of the MSH mass implies that \[r_{\text{g}}(t)=r_{+}\big{(}v(t,r_{\text{g}}(t)\big{)}. \tag{6}\] Unlike the globally defined event horizon, the notion of the apparent horizon is foliation-dependent. However, it is invariantly defined in all foliations that respect spherical symmetry [42]. In addition to requiring that a PBH is formed in a finite time according to Bob, we demand only the weakest form of the cosmic censorship conjecture [12; 13; 43]: all curvature scalars [9; 11] are finite up to and on the apparent horizon. It sufficient to ensure that only two of them, \(R\) and \(R_{\mu\nu}R^{\mu\nu}\), are finite [44]. Construction of finite invariants from the divergent quantities that describe a real-valued solution allows to describe properties of the near-horizon geometry. Because the metric in Schwarzschild coordinates is singular at the apparent horizon, it will often be convenient to work in null coordinates instead. Both the analysis of the Einstein equations and the evaluation of curvature invariants is conveniently performed using the effective EMT components \(\tau_{a}\), defined as [36] \[\tau_{t}:=e^{-2h}T_{tt}\,\qquad\tau^{r}:=T^{rr}\,\qquad\tau_{t}^{r}:=e^{-h}T_{t} ^{r}. \tag{7}\] The Einstein equations for the components \(G_{tt}\), \(G_{t}^{r}\), and \(G^{rr}\) are then, respectively \[\partial_{r}C =8\pi r^{2}\tau_{t}/f\, \tag{8}\] \[\partial_{t}C =8\pi r^{2}e^{h}\tau_{t}^{r}\,\] (9) \[\partial_{r}h =4\pi r\left(\tau_{t}+\tau^{r}\right)/f^{2}. \tag{10}\] To ensure finite values of the curvature scalars, it is sufficient to work with the invariant quantities \[\text{T}:=g_{\mu\nu}T^{\mu\nu}=(\tau^{r}-\tau_{t})/f\, \tag{11}\] \[\mathfrak{T}:=T^{\mu\nu}T_{\mu\nu}=\big{(}(\tau^{r})^{2}+(\tau_{t})^{2}-2(\tau _{t}^{r})^{2}\big{)}/f^{2}\, \tag{12}\] where the contributions of \(T_{\theta}^{\theta}\equiv T_{\phi}^{\phi}\) can be disregarded. One can verify that the resulting metric functions do not introduce further divergences [36; 44]. These considerations restrict the scaling of the effective EMT components near the apparent horizon such that \(\tau_{a}\propto f^{k}\), with \(k=0,1\). Solutions with \(k=0\) describe a PBH after formation (and before a possible disappearance of the trapped region). Dynamical RBH solutions belong to this class, while the Reissner-Nordstrom solution or static RBH solutions correspond to \(k=1\). In the following we will almost exclusively work with \(k=0\) solutions. The leading terms of the metric functions for \(k=0\) solutions are given in terms of \(x:=r-r_{\text{g}}(t)\) as \[C =r_{\text{g}}-4\sqrt{\pi}r_{\text{g}}^{3/2}\Upsilon\sqrt{x}+ \mathcal{O}(x)\, \tag{13}\] \[h =-\frac{1}{2}\ln\frac{x}{\xi}+\mathcal{O}(\sqrt{x}). \tag{14}\] The function \(\Upsilon(t)>0\) determines the energy density, pressure and flux at the apparent horizon, and \(\xi(t)\) is determined by choice of the time variable. The higher-order terms are matched with higher-order terms in the EMT expansion [36; 45]. The Einstein equation (9) serves as a consistency condition and establishes the relation between the rate of change of the MSH mass and the leading terms of the metric functions, \[r_{\text{g}}^{\prime}/\sqrt{\xi}=4\varepsilon_{\pm}\sqrt{\pi r_{\text{g}}} \Upsilon\, \tag{15}\] where primes indicate derivatives with respect to \(t\) and \(\varepsilon_{\pm}=\pm 1\) corresponds to the expansion and contraction of the Schwarzschild sphere, respectively. For a contracting Schwarzschild sphere the \((v,r)\) coordinates are regular across it. Evaluation of the expansion of the geodesic congruences identifies the solutions with \(r_{\text{g}}^{\prime}<0\) as black holes of decreasing mass. Similarly, the case \(r_{\text{g}}^{\prime}>0\) allows for a regular description in \((u,r)\) coordinates, and corresponds to an expanding white hole. In the following we consider only PBHs. The outer and (if it exists) inner components of the black hole apparent horizon are timelike, making definitions that are based on \((t,r)\) or \((v,r)\) coordinates coincide with the invariant definition [36; 46]. A schematic Carter-Penrose diagram for a RBH in an asymptotically flat spacetime is shown in Fig 1(a). PBH metrics in Schwarzschild coordinates are more singular than their Schwarzschild or Reissner-Nordstrom counterparts. Unlike the special algebraic case [9; 39]\(g_{tt}g_{rr}=1\), the metric determinant \(g:=\det\,g_{\mu\nu}\) diverges as \(x^{-1}\) on approach to the apparent horizon. The \((tr)\) block of the EMT near the Schwarzschild radius is \[T_{\ b}^{a}=\begin{pmatrix}\Upsilon^{2}/f&e^{-h}\Upsilon^{2}/f^{2}\\ -e^{h}\Upsilon^{2}&-\Upsilon^{2}/f\end{pmatrix},\quad T_{\hat{a}\hat{b}}=- \frac{\Upsilon^{2}}{f}\begin{pmatrix}1&1\\ 1&1\end{pmatrix}, \tag{16}\] where the second expression is written in an orthonormal frame [36]. For a static \(r=\mathrm{const.}\) observer that we call Eve, the energy density, pressure and flux all diverge. However, for an incoming (not necessarily freely-falling) observer Alice their values are finite [44]. As already mentioned the EMT violates the null energy condition [36; 29]. Ingoing radial null geodesics satisfy \[\frac{dr}{dt}=-e^{h}f\, \tag{17}\] so by noting that \[\lim_{r\to r_{\text{g}}}e^{h}f=|r_{\text{g}}^{\prime}|\, \tag{18}\] we see that the infall into a PBH takes a finite (even if very large) time according to Bob [44; 45; 36]. In \((v,r)\) coordinates the black hole metric is described by \[C_{+}(v,r) =r_{+}(v)+w_{1}(v)y+\mathcal{O}(y^{2})\, \tag{19}\] \[h_{+}(v,r) =\zeta_{0}(v)+\zeta_{1}(v)y+\mathcal{O}(y^{2})\, \tag{20}\] where \(y:=r-r_{+}(v)\). Note that a freedom in the redefinition of the null variable \(v\) allows one to set \(\zeta_{0}\equiv 0\). From the definition of the apparent horizon it follows that \(w_{1}\leqslant 1\). The inequality is saturated at the formation of the PBH (more details can be found in [46]). It is easy to see that the Schwarzschild sphere at \(r=r_{\text{g}}(t)=r_{+}(v)\) is timelike. Similarly, if the equation \(f(t,r)=0\) has more than one solution, the innermost surface \(r_{\text{in}}\) is timelike as well. As a result, these definitions of the inner and outer horizons coincide with the invariant definitions [46], and a RBH in an asymptotically flat spacetime has the schematic Carter-Penrose diagram shown in Fig. 1(a). In spherical symmetry the black hole mass is defined as the value of the MSH mass at the apparent horizon [13], \[2M(t):=C\big{(}t,r_{\text{g}}(t)\big{)}\equiv r_{\text{g}}(t)\;, \tag{21}\] with analogous expressions holding in \((v,r)\) and \((u,r)\) coordinates. Note that Eq. (13) makes \(dC/dt\) undefined on the apparent horizon, in contrast to \(2M=r_{\text{g}}(t)\). On the other hand, the null coordinate \(v\) gives \[\frac{dC_{+}}{dv}\bigg{|}_{r_{+}}=r^{\prime}_{+}(1-w_{1})\;, \tag{22}\] where the vertical bar indicates evaluation of the quantity on the apparent horizon \(r=r_{+}\). Finally we define the notion of surface gravity, which plays a crucial role in black hole mechanics and thermodynamics [11; 12; 13]. For stationary black holes the various definitions that appear in the literature are equivalent, but this degeneracy is lifted in the dynamical case. Among the possible generalizations, the Hayward-Kodama surface gravity [48; 13] \[\kappa_{\text{K}}=\frac{1}{2}\left(\frac{C_{+}(v,r)}{r^{2}}-\frac{\partial_{r }C_{+}(v,r)}{r}\right)\bigg{|}_{r_{+}}=\frac{(1-w_{1})}{2r_{+}} \tag{23}\] stands out as the most useful candidate. It is based on the Kodama vector [47], which provides a preferred time flow in the absence of a timelike Killing vector field. It also allows for the generalization of the first law of black hole mechanics to dynamical spacetimes [60], since the Kodama vector is associated with a conserved current. In fact its Noether charge is just the MSH mass defined previously, which in black hole thermodynamics plays the role of the internal energy of the system. Unlike some alternatives, it is well-defined for the PBH and shares many of the important properties of its stationary Killing counterpart, and will be used throughout this work. ### Some properties of physical black holes PBHs can be described in both \((t,r)\) and \((v,r)\) coordinates, as seen from Eqs. (2) and (3). In this section we examine connections between the metric functions in these coordinates using the transformation law (5). In \((t,r)\) coordinates the MSH Figure 1: Schematic Carter–Penrose diagram depicting the formation and evaporation of a RBH which is treated as PBH. Past and future timelike infinity are labelled by \(i^{-}\) and \(i^{+}\), respectively. Spacelike infinity is labelled by \(i^{0}\). Dashed grey lines correspond to outgoing radial null geodesics. The trajectory of a distant observer, Bob, is indicated in pink and labelled \(B\). The points \(\mathrm{f}\) and \(\mathrm{d}\) represent the two-spheres of formation and disappearance of the trapped region. The equal (Schwarzschild) time hypersurface \(\Sigma_{t_{\text{g}}}\) is shown as a dashed light green line. The outer (blue) and inner (dark red) components of the black hole apparent horizon (timelike membranes) are indicated according to the invariant definition ([36; 13]). (a) The invariantly-defined components of the apparent horizon correspond the largest and smallest root of \(f=0\) whether \(t\), \(v\) or \(u\) is used as the evolution parameter [46]. (b) Embedding into de Sitter spacetime. The solid black line connecting \(i^{-}\) and \(i^{0}\) represents the cosmological event horizon for an observer at \(r=0\). Static coordinates cover only the left quadrant, with the dotted diagonal line representing the particle horizon. Components of the black hole apparent horizon correspond to the largest and smallest roots of \(f=0\) (not including the cosmological horizon). The orange dashed lines \(\Sigma_{\text{f}}\) indicate hypersurfaces of constant comoving time \(\bar{t}\). The trajectory of an asymptotically comoving observer Eve (\(\chi=\mathrm{const}\)) is marked by the dark green line and labelled by the initial \(E\). mass is given by the expansion (13), while in \((v,r)\) coordinates it is given by (19). We examine the relation (close to the apparent horizon) connecting the quantities \(x\) and \(y\) and determine what information can be extracted from the invariance of the MSH mass. While the metric in \((t,r)\) coordinates is singular at the apparent horizon, a freely-falling observer Alice reaches the apparent horizon at \(r_{g}\) not only in her finite proper time \(\tau\) but in finite \(t\). We thus can consider the change in \(t\) from the value \(t\big{(}v,r_{+}(v)\big{)}\) along an ingoing null geodesic \(v=\mathrm{const}\). Along such a geodesic the time \(t(v,r)\) varies as \[t(v,r_{+}+y)=t(v,r_{+})+\left.\partial_{r}t\right|_{r_{+}}y+\tfrac{1}{2}\left. \partial_{r}^{2}t\right|_{r_{+}}y^{2}+\mathcal{O}(y^{3}). \tag{24}\] Determining the explicit form of the above relation requires evaluating partial derivatives at the apparent horizon. This can be done using the transformation law (5), which implies directly that \[\partial_{r}t=-e^{-h(t,r)}f(t,r)^{-1}=\frac{1}{r_{g}^{\prime}}+\mathcal{O}( \sqrt{x}). \tag{25}\] The time variation \(\delta t:=t(v,r_{+}+y)-t(v,r_{+})\) along an ingoing null geodesic is thus given by \[\delta t=\frac{y}{r_{g}^{\prime}}+\tfrac{1}{2}(\partial_{r}^{2}t)\underset{y =0}{\text{\Large$\underset{}{\text{\Large$\underset{}{\text{\Large$\underset{ }{\text{\Large$\underset{}\text{\Large$\underset{}\text{\Large$\underset{ }\text{\Large$\underset{\text{\Large$\underset{\text{\Large$ }$\underset{\text{\Large$\underset{\text{\Large$ }$\underset{\text{\Large$ \left}$}$}}}}}}}}}}}y^{2}+\mathcal{O}(y^{3})\, \tag{26}\] where the second partial derivative \((\partial_{r}^{2}t)\) is given in Appendix C.1. The corresponding expansion of the Schwarzschild radius \(r_{g}(t)\) is given by \[\begin{split} r_{g}(t(v,r_{+}+y))&=r_{g}(t(v,r_{+ }))+r_{g}^{\prime}\delta t\\ &\quad+\tfrac{1}{2}r_{g}^{\prime\prime}\delta t^{2}+\mathcal{O}( \delta t^{3})\,\end{split} \tag{27}\] where keeping terms of order \(\delta t^{2}\) is crucial. The variable \(x(t,r)=r-r_{g}(t)\) can further be expressed as a function of the advanced null coordinate \(v\) and \(r\), \[x(v,r_{+}+y)=(r_{+}+y)-r_{g}(t(v,r_{+}+y)) \tag{28}\] Using Eqs. (26) and (27) in (28) along with the invariance of the MSH mass (6) then results in the quadratic relationship between \(x\) and \(y\) near the apparent horizon: \[x=\tfrac{1}{2}\,\omega^{2}y^{2}\,\quad\text{where}\quad\omega^{2}\equiv-r_{g}^{ \prime}(\partial_{r}^{2}t)\big{|}_{y=0}-\frac{r_{g}^{\prime\prime}}{(r_{g}^{ \prime})^{2}} \tag{29}\] Then by using Eqs. (27) and (29) along with (6) we find that \[w_{1}(v)=1-2\sqrt{2\pi r_{g}^{3}}\,\Upsilon\ \omega\, \tag{30}\] which is the quantity entering the Hayward-Kodama surface gravity in Eq. (23). Explicit expressions for \(\omega^{2}\) and \(w_{1}(v)\) can be found in Appendix C.1 and A.2. We next turn to the evaluation of the unknown metric functions \(\Upsilon(t)\) and \(\xi(t)\). We assume the evaporation law in \((t,r)\) and \((v,r)\) coordinates can be written as \[r_{g}^{\prime}(t)=-\Gamma(r_{g})\,\quad r_{+}^{\prime}(v)=-\Gamma_{+}(r_{+}) \tag{31}\] in terms of the undetermined functions \(\Gamma,\Gamma_{+}\). The relation (16), which is derived from the Einstein equations in \((v,r)\) coordinates, determines \(\Upsilon(t)\): \[\Upsilon(t)=\sqrt{\frac{\Gamma_{+}(1-w_{1})}{8\pi r_{+}^{2}}} \tag{32}\] Using the consistency condition Eq. (15) along with (32) determines the other unknown metric function: \[\xi(t)=\frac{r_{g}\Gamma^{2}}{2\Gamma_{+}}. \tag{33}\] We now make the following assertion: in the quasi-static limit the first law of black hole dynamics should approach that of the stationary case, where \[dM=\frac{\kappa}{8\pi}dA. \tag{34}\] This will be the case in the absence of electric charge or angular momentum. Their inclusion would manifest in the first law by the appearance of work terms like \(\Phi\,dQ\) or \(\Omega\,dJ\) which arise from the Hamiltonian variation. The first law of black hole dynamics [11; 12; 64] holds in any diffeomorphism-invariant Lagrangian theory of gravity [61]. If the apparent horizon is taken to be the relevant surface for which the first law is formulated, then the area being \(A=4\pi r_{g}^{2}\) and the MSH mass being \(M=r_{g}/2\) requires that \(w_{1}=0\) for the identification of \(\kappa\) with the Hayward-Kodama surface gravity of Eq. (23). In this case \(\Upsilon(t)\) and \(\xi(t)\) assume the forms \[\Upsilon(t)=\sqrt{\frac{\Gamma_{+}}{8\pi r_{+}^{2}}}\,\quad\xi(t)=\frac{r_{g} \Gamma^{2}}{2\Gamma_{+}}\, \tag{35}\] and \(\omega\) as defined in (29) reduces to \[\omega=\frac{1}{2\sqrt{2\pi r_{g}^{3}}\Upsilon}. \tag{36}\] This leads to the following relationship between the near-horizon expansion parameters \(x\) and \(y\): \[x=\frac{1}{16\pi r_{g}^{3}\Upsilon^{2}}\ y^{2}. \tag{37}\] The results above follow from the connection between \((t,r)\) and \((v,r)\) coordinates along a constant-\(v\) line. Analogous relations can be derived by instead considering the relationship between \((t,r)\) and \((u,r)\) coordinates, as detailed in Appendix B. However in this case the retarded null coordinate \(u\) exhibits singular behaviour on the apparent horizon in concert with the Schwarzschild coordinate \(t\), and the metric functions exhibiting similar behaviour. ## III Linear mass loss: exact coordinate transformations In the previous section we established a number of relations based on transformations between \((t,r)\) and \((v,r)\) coordinates, along with the condition \(w_{1}=0\) obtained from the first law of black hole mechanics. We now demonstrate that these results imply that 1) the near-horizon metric of the PBH is described by the ingoing Vaidya metric, and 2) if one assumes a linear evaporation law in \((v,r)\) coordinates, the evaporation law in \((t,r)\) coordinates must also be linear. Having assumed that \(w_{1}=0\), the expansion of the metric functions (19) and (20) become \[C_{+}(v,r) =r_{+}+\mathcal{O}(y^{2})\, \tag{38}\] \[h_{+}(v,r) =\zeta_{1}(v)y+\mathcal{O}(y^{2})\, \tag{39}\] where \(\zeta_{0}\) is set to zero by a suitable time reparametrization. Following the semiclassical arguments [18; 62] that in the quasi-stationary region \(\partial h_{+}/\partial r\sim L_{H}/r\) (where \(L_{H}\) is the Hawking luminosity) we have that \(\zeta_{1}r_{+}\ll 1\), and thus a PBH near the Schwarzschild sphere is well-described by a Vaidya metric, \[ds^{2}=-f(v,r)dv^{2}+2dvdr+r^{2}d\Omega_{2}\, \tag{40}\] where \[f(v,r)=1-\frac{r_{+}(v)}{r}. \tag{41}\] Different aspects of the black hole geometry are best captured by different coordinate systems. However, transformations between them are difficult [50], and where exact coordinate transformations do exist, multiple coordinate patches are required to cover the entire spacetime [63]. In what follows we assume a _linear_ evaporation law, such that \[r_{+}(v)=r_{0}-\alpha v\,\quad\text{with}\quad\alpha>0\, \tag{42}\] where \(r_{0}\) is the initial areal radius and \(\alpha\) is the evaporation rate. While extension of this metric to large distances \(r\gg r_{+}\) is not justified, it provides a setting in which the exact transformations to \((t,r)\) and \((u,r)\) coordinates are possible. Moreover, its counterpart with decreasing \(r_{-}(u)\) provides a good description of an evaporating black hole at distances \(r\gtrsim 2r_{\mathcal{g}}\) and its transformation to \((t,r)\) coordinates can be performed analogously. The first step in the transformation to Schwarzschild coordinates is to bring the metric into a form that is conformally equivalent to the Schwarzschild metric in \((u,r)\) coordinates. This is effected by defining \[v:=\frac{r_{0}}{\alpha}\left(1-e^{-\alpha\mathcal{U}/r_{0}}\right),\qquad r:= Re^{-\alpha\mathcal{U}/r_{0}}, \tag{43}\] and the explicit form of the metric in \((\mathcal{U},\mathcal{R})\) coordinates is given in Appendix A.3. Then, defining a time coordinate \(\tilde{t}\) by \[d\tilde{t}=d\mathcal{U}-b(R)\left(1-\frac{r_{0}}{R}+\frac{2\alpha R}{r_{0}} \right)^{-1}, \tag{44}\] allows the metric to take the form \[ds^{2}=g_{\tilde{t}\tilde{t}}d\tilde{t}^{2}+2g_{\tilde{t}r}d\tilde{t}dr+g_{ \tilde{r}r}dr^{2}+r^{2}d\Omega_{2}\, \tag{45}\] where expressions for the metric components \(g_{\mu\nu}(\tilde{t},r)\) are given in Appendix A.3. The coordinates \(\mathcal{U}\) and \(R\) that appear therein are treated as functions of \(\tilde{t}\) and \(r\). The function \(b(R)\) is then chosen such that the off-diagonal metric component \(g_{\tilde{t}r}\) vanishes: \[b(R)=\left(1-\frac{r_{0}}{R}+\frac{2\alpha R}{r_{0}}\right)\left(1-\frac{r_{0}} {R}+\frac{\alpha R}{r_{0}}\right)^{-1} \tag{46}\] As a result, the metric becomes \[ds^{2}=-e^{-2\alpha\mathcal{U}/r_{0}}\frac{\left(1-\frac{r_{0}}{R}+\frac{ \alpha R}{r_{0}}\right)^{2}}{1-\frac{r_{0}}{R}}d\tilde{t}^{2}+\frac{1}{1-\frac {r_{0}}{R}}dr^{2}+r^{2}d\Omega_{2}. \tag{47}\] Comparing (47) with the general spherically symmetric metric (2) identifies the metric function \(f\) as \[f(\tilde{t},r)=1-\frac{C(\tilde{t},r)}{r}=1-\frac{r_{+}(v)}{r}=1-\frac{r_{0}} {R}, \tag{48}\] with \(h\) being given by \[e^{\tilde{h}(\tilde{t},r)}=e^{-a\mathcal{U}/r_{0}}\left(\frac{1-\frac{r_{0}}{ R}+\frac{\alpha R}{r_{0}}}{1-\frac{r_{0}}{R}}\right). \tag{49}\] From Eqs. (43), (44), and (46), supplemented by the initial condition \(\tilde{t}\big{(}r=r_{0},v=0\big{)}=0\), we obtain \[\tilde{t}(v,r) =\frac{r_{0}}{2\alpha}\ln\left(\frac{\alpha r_{0}^{2}}{\alpha r^{ 2}-r_{+}^{2}+r_{+}r}\right) \tag{50}\] \[\quad+\frac{r_{0}}{\alpha\sqrt{1+4\alpha}}\operatorname{arctanh} \left(\frac{\sqrt{1+4\alpha}(r-r_{+})}{(1+2\alpha)r-r_{+}}\right). \tag{51}\] Note that it is still possible to apply an arbitrary coordinate transformation \(\tilde{t}\to t=T(\tilde{t})\). The choice can be constrained by considering the form of the relations between \(v\), \(t\) and \(r\) in the asymptotic region. The limit of \(\tilde{t}\) that propagates backwards along an ingoing null geodesic, i.e \(v\) is constant and \(r\to\infty\), using Eq. (51) is \[\tilde{t}\to\frac{r_{0}}{\alpha}\left[-\ln\left(\frac{r}{r_{0}}\right)+\gamma \right]\, \tag{52}\] where we have defined \[\gamma:=\frac{1}{\sqrt{1+4\alpha}}\operatorname{arctanh}\left(\frac{\sqrt{1+4 \alpha}}{1+2\alpha}\right). \tag{53}\] Similarly we find that \[\tilde{h}\to\ln\left(\frac{ar}{r_{0}}\right). \tag{54}\] Since we require an asymptotic relation \(v\approx t+r\) and \[r\to r_{0}\exp\left(-\frac{\alpha\tilde{t}}{r_{0}}+\gamma\right) \tag{55}\] for \(\tilde{t}\to-\infty\), we define the new time variable as \[t:=-r_{0}\exp\left(-\frac{\alpha\tilde{t}}{r_{0}}+\gamma\right)+\mathfrak{t}\,. \tag{56}\] We choose the constant \(\mathfrak{t}\) such that \(t=0\) at \(\tilde{t}=0\), hence \[\mathfrak{t}=r_{0}e^{\gamma}\;. \tag{57}\] Noting that \[T^{\prime}(\tilde{t})=-\frac{\alpha}{r_{0}}(t-\mathfrak{t})\;, \tag{58}\] we see that \(h(t,r)\to 0\) at constant \(v\) and \(r\to\infty\), while \[\frac{dr_{\tilde{g}}}{dt}=\frac{r_{\tilde{g}}}{t-\mathfrak{t}}\;, \tag{59}\] resulting in the linear evaporation law \[r_{g}(t)=r_{0}(\mathfrak{t}-t)\;. \tag{60}\] We have thus presented an exactly solvable model for an evaporating PBH, using a conformal transformation based on a linear evaporation law. Importantly, the transformation from ingoing null coordinates to Schwarzschild coordinates preserves the form of the evaporation law, a common assumption used in various models [50; 36]. In contrast to our work, such models typically assume Page's evaporation law where the mass-loss rate is given by \[\frac{dr_{g}}{dt}\sim-\frac{A}{r_{g}^{2}}\;, \tag{61}\] for some constant \(A\)[51; 12]. For macroscopic black holes this evaporation law can be treated as linear for times that are long compared to the cosmological timescale but are still short relative to the evaporation time. Moreover, the linear Vaidya metric has been proposed as the correct description near the endpoint of the evaporation process [52] and serves as a basis for model-building in the semiclassical setting. ## IV Physical black holes in cosmology Models of compact objects with cosmological boundary conditions have been investigated since the introduction of the McVittie metric [53], which generalizes the Schwarzschild solution to arbitrary FRLW spacetimes. In isotropic coordinates \((t,\bar{r})\) the McVitte metric has the form \[ds^{2}= -\frac{\left(1-m(t)/2\bar{r}\right)^{2}}{\left(1+m(t)/2\bar{r} \right)^{2}}dt^{2}\] \[+a^{2}(t)\left(1+\frac{m(t)}{2\bar{r}}\right)^{4}\left(d\bar{r}^ {2}+\bar{r}^{2}d\Omega_{2}\right), \tag{62}\] where the time dependence of the mass function is governed by the scale factor, \[m(t)\equiv m_{0}/a(t)\;. \tag{63}\] Due to the non-linearity of the Einstein equations, it is impossible to split a metric into a homogenous and isotropic cosmological background and a part describing even a spherical inhomogeneity. It is possible, however, to loosely describe the "embedding" of black holes into a cosmological "background" if the metric reduces to a FRLW metric when the parameter that describes the inhomogeneity vanishes [13]. Despite the existence of numerous models with quite remarkable properties [39; 13], there currently exists no MBH solution that satisfies observational constraints at the horizon scale while approaching a FRLW metric on the largest scales [15]. This part of the averaging ambiguity problem [54; 55] of the Friedmann equations. A spatially flat, homogenous, and isotropic background metric \(\mathpzc{g}^{\rm B}_{\mu\nu}\) can be represented by the line element [37; 10] \[ds_{\rm B}^{2}=a^{2}(\eta)\big{(}-d\eta^{2}+d\mathbf{x}^{2}\big{)}\;, \tag{64}\] where \(\eta\) is the conformal time \(d\eta=d\bar{t}/a\) and \(a\) is the scale factor (though in describing the embedding of a PBH in a cosmological background it will be more convenient to use the comoving time \(\bar{t}\)). The full metric can be written as a (position-dependent) perturbation \(h_{\mu\nu}\) of the homogeneous background metric \[\mathpzc{g}_{\mu\nu}=\mathpzc{g}^{\rm B}_{\mu\nu}+a^{2}(\eta)\epsilon h^{(1)} _{\mu\nu}(\eta,\mathbf{x})\;, \tag{65}\] where \(\epsilon\) quantifies the relative magnitude of the perturbation. The dynamics are governed by the Einstein equations (1), where observational data necessitates an EMT decomposition of the form [15] \[T_{\mu\nu}=T^{(0)}_{\mu\nu}(\eta)+\epsilon T^{(1)}_{\mu\nu}(\eta,\mathbf{x})\;. \tag{66}\] Clearly, an averaging procedure is required to eliminate the position-dependence and achieve homogeneity on the largest scales, but there are ambiguities in the different methods available. In [55] it is shown how one can resolve this ambiguity -- to do so requires localised matter to couple directly to the cosmic expansion rate. In Ref [56] the effect of this cosmological coupling of localised mass was parameterized in terms of the scale factor \(a\) as \[M(a)=M(a_{\rm in})\left(\frac{a}{a_{\rm in}}\right)^{p}\;, \tag{67}\] where \(a_{\rm in}<a\) is the scale factor at which the object becomes cosmologically coupled and \(p\geqslant 0\). This mechanism explains the local mass scaling, such as the one present in the McVittie metric (63), but does not capture the local dynamics. We begin the procedure of embedding a PBH into a spatially flat FRLW background by writing its metric in the form of Eq. (2) [13]. In comoving coordinates \((\bar{t},\chi)\) one has \[ds^{2}=-d\bar{t}^{2}+a^{2}(\bar{t})\left(d\chi^{2}+\chi^{2}d\Omega_{2}\right)\;, \tag{68}\] while using the areal radius \(r\) as the radial coordinate brings the metric into Painleve-Gullstrand form \[ds^{2}=-\left(1-H^{2}r^{2}\right)d\bar{t}^{2}-2Hrd\bar{t}dr+dr^{2}+r^{2}d \Omega_{2}\;, \tag{69}\] where \(H=\dot{a}/a\) is the Hubble parameter. The cross term can be eliminated and the metric can be written in Schwarzschild form by introducing a new time coordinate \(t\) for which \[dt=\frac{1}{F}\left(d\bar{t}+\beta dr\right)\;, \tag{70}\] where \(F(t,r)\) is an integration factor satisfying \[\frac{\partial}{\partial r}\left(\frac{1}{F}\right)=\frac{\partial}{\partial \bar{t}}\left(\frac{\beta}{F}\right), \tag{71}\] and the function \(\beta\) is chosen so that \(\mathpzc{g}_{tr}=0\). This is accomplished by having \[\beta=\frac{Hr}{1-H^{2}r^{2}}\;, \tag{72}\] which results in the line element \[ds^{2}=-\left(1-H^{2}r^{2}\right)F^{2}dt^{2}+\frac{1}{1-H^{2}r^{2}}dr^{2}+r^{2 }d\Omega_{2}\;. \tag{73}\] This is the spatially flat FLRW metric in Schwarzschild coordinates (de Sitter space is the special case where \(H=\text{const}\). and \(F\equiv 1\)). Thus the proper boundary conditions for a PBH metric are \[e^{h}\to F\;,\qquad C\to H^{2}r^{3}\;, \tag{74}\] and in addition to the outer apparent horizon which bounds the trapped region at \(r=r_{\mathpzc{g}}(t)\), there is a cosmological apparent horizon at \[r\approx\frac{1}{H}\;. \tag{75}\] In the case of a spatially flat de Sitter space, the static domain can be parameterized using advanced and retarded null coordinates. They can be defined analogously to the Schwarzschild spacetime. For example, using the advanced null coordinate \[v:=t+r_{*}\;,\qquad r_{*}:=\frac{1}{2H}\frac{1+r/H}{1-r/H}\;, \tag{76}\] where \(r_{*}\) is the de Sitter analogue of the tortoise coordinate, the de Sitter metric can be written as Eq. (3) with \[h_{+}=0\;,\qquad f=1-H^{2}r^{2}\;. \tag{77}\] A schematic Carter-Penrose diagram for a RBH in an asymptotically flat de Sitter spacetime is shown in Fig. 1(b). It is natural to consider Vaidya black holes, both using retarded [58] and advanced [59] null coordinates. Consider the latter case with with \[f(v,r)=1-\frac{2m(v)}{r}-H^{2}r^{2}\;. \tag{78}\] In a cosmological setting \(m\ll H^{-1}\) and the Schwarzschild radius is slightly modified by the cosmological coupling, \[r_{+}(v)=2m\big{(}1+4m^{2}H^{2}+\mathcal{O}(H^{4})\big{)}\;, \tag{79}\] similar to the Schwarzschild-de Sitter metric [13]. It is also interesting to note that for this model, \[w_{1}=12m^{2}H^{2}+\mathcal{O}(H^{4})=6r_{+}H^{2}+\mathcal{O}(H^{4})\;, \tag{80}\] which shows a deviation from a Vaidya-like geometry due to the presence of the cosmological horizon. This also disagrees with the static Schwarzschild limit, a consequence of modifications to the ordinary first law which are required when considering asymptotically de Sitter (or anti-de Sitter) black holes [71; 72]. ## V Discussion Our results show that for an uncharged black hole, compliance with the first law results in the coincidence of the Hayward-Kodama surface gravity with its Schwarzschild black hole value, \(\kappa_{\text{K}}=1/(2r_{\mathpzc{g}})=1/4M\), and implies \(w_{1}=0\), that is that the metric near the outer apparent horizon is approximately Vaidya. For a charged black hole it is possible to match the surface gravity with that of the Reissner-Nordstrom black hole, \[\kappa=\frac{r_{+}-r_{-}}{2r_{+}^{2}}\;, \tag{81}\] where \(r_{+}\) and \(r_{-}\) are the areal radii of the outer and the inner horizons, respectively, by having \(w_{1}\neq 0\). While it is obvious that the same form of the Page's law can be maintained in both \((t,r)\) and \((v,r)\) coordinate systems it is unclear if this is compatible with the redefinition of the null coordinate \(v\) such that \(\zeta_{0}=0\). We also have seen that a simple and pathology-free PBH model in asymptotically de Sitter space does not comply with the ordinary form of the first law. This can be seen as a natural consequence of the inclusion of back-reaction in our model. In asymptotically de Sitter black hole spacetimes, it is known that a first law can be formulated separately for the event and cosmological horizons [65]. However, if back-reaction from the Hawking flux of each horizon is not ignored, the heat flux between the two horizons places the system out of equilibrium and the first law no longer suffices to capture variations between nearby equilibrium states for the entire spacetime. This back-reaction issue (along with ambiguities in the definition of mass in de Sitter spacetimes [66]) makes formulating the laws of black hole mechanics in de Sitter technically and conceptually challenging, though a number of solutions have been extensively pursued [67; 68; 69]. While dealing with axially-symmetric PBHs is much more difficult [70], the investigation of their embedding in a cosmological background is very important. In forthcoming work, we will detail the embedding of Kerr-Vaidya metrics in asymptotically de Sitter spacetimes. These models will serve as a basis for developing even more sophisticated descriptions of dynamical physical black holes, and provide a framework for extracting observational features of their astrophysical manifestations. ###### Acknowledgements. Useful discussions with Robert Mann, Sebastian Murk and Amos Ori are gratefully acknowledged. P.K.D. and I.S. are supported by an International Macquarie University Research Excellence Scholarship. The work of D.R.T. is supported by the ARC Discovery project Grant No. DP210101279. F.S is funded by the ARC Discovery project Grant No. DP210101279. ## Appendix A Summary of useful relations ### Effective EMT components in \((t,r)\) coordinates We give a detailed summary of the relations used in this paper in \((t,r)\) coordinates. By explicitly including higher-order terms in the expansions of the MSH mass \(C(t,r)\) and metric function \(h(t,r)\), the Einstein equations (8), (9), and (10) give the form of various EMT components to comparative order. The expansion of \(C(t,r)\) is given by \[C(t,r)=r_{g}(t)+c_{12}(t)\sqrt{x}+c_{1}(t)x+\mathcal{O}(x^{3/2})\;, \tag{10}\] with \(x=r-r_{g}(t)\) and coefficients given by \[c_{12}(t)=-4\sqrt{\pi r_{g}^{3}}\Upsilon\;,\quad c_{1}(t)=\frac{1}{3}+\frac{4 \sqrt{\pi r_{g}^{3}}e_{12}}{3\Upsilon}\;. \tag{11}\] The expansion of \(h(t,r)\) is likewise given by \[h(t,r)=-\frac{1}{2}\ln\frac{x}{\xi(t)}+h_{12}(t)\sqrt{x}+\mathcal{O}(x)\;, \tag{12}\] where \[h_{12}(t)=\frac{1}{2\sqrt{\pi}r_{g}^{3/2}\Upsilon}-\frac{e_{12}-3p_{12}}{6 \Upsilon^{2}}\;. \tag{13}\] The effective EMT components defined in Section II then have the following series expansions \[\tau_{t} =-\Upsilon^{2}+e_{12}(t)\sqrt{x}+e_{1}(t)x+\mathcal{O}(x^{3/2})\;, \tag{14}\] \[\tau_{t}^{r} =-\Upsilon^{2}+\phi_{12}(t)\sqrt{x}+\phi_{1}(t)x+\mathcal{O}(x^{ 3/2})\;,\] (15) \[\tau^{r} =-\Upsilon^{2}+p_{12}(t)\sqrt{x}+p_{1}(t)x+\mathcal{O}(x^{3/2})\;, \tag{16}\] where \[\phi_{12}=\frac{1}{2}(e_{12}+p_{12})\;. \tag{17}\] ### Einstein equations and effective EMT components in \((v,r)\) coordinates In \((v,r)\) coordinates, the EMT is represented as \(\Theta_{\mu\nu}\) and the effective EMT components are \[\theta_{v}=e^{-2h_{+}}\Theta_{vv}\;,\quad\theta_{vr}=e^{-h_{+}}\Theta_{vr}\;,\quad\theta_{r}=\Theta_{rr}\;. \tag{18}\] The Einstein equations then take the following form \[e^{-h_{+}}\partial_{v}C_{+}+f\partial_{r}C_{+}=8\pi r^{2}\theta_ {v}\;, \tag{19}\] \[\partial_{r}C_{+}=-8\pi r^{2}\theta_{vr}\;,\] (20) \[\partial_{r}h_{+}=4\pi r\theta_{r}\;. \tag{21}\] Using the coordinate transformation (5) one can find relations between the effective EMT components in \((v,r)\) with those in \((t,r)\). They are related through \[\theta_{v}=\tau_{t},\quad\theta_{vr}=\frac{\tau_{t}^{r}-\tau_{t}}{f}\;,\quad \theta_{r}=\frac{\tau_{t}+\tau^{r}-2\tau_{t}^{r}}{f^{2}}\;. \tag{22}\] Expanding the LHS of Eq. (10) in a series around \(r_{+}\) and the RHS around \(r_{g}\), after making use of Eq. (22), and comparing order-by-order, one arrives at the following relation for \(w_{1}(v)\): \[w_{1}(v)=\frac{e_{12}-p_{12}}{\Upsilon}\sqrt{\pi}\tau_{g}^{3/2} \tag{23}\] The condition \(e_{12}(t)=p_{12}(t)\) is therefore equivalent to \(w_{1}(v)=0\). From Eq. (19) in the near horizon limit, we get a relation for the evaporation rate \[e^{-\zeta_{0}}r_{+}^{\prime}(v)=\frac{8\pi r_{+}^{2}\theta_{v}^{+}}{1-w_{1}}\;, \tag{24}\] where \(\theta_{v}^{+}:=\lim_{r\to r_{\ell}}\theta_{v}=-\Upsilon^{2}\). In the final equality we used relation (22). With appropriate redefinition of the advanced coordinate \(v\) one can eliminate the exponential term \(e^{\zeta_{0}}\) and arrive at \[r_{+}^{\prime}(v)=-\frac{8\pi r_{+}^{2}\Upsilon^{2}}{1-w_{1}}\;, \tag{25}\] where we have used the same variable \(v\) for the redefined coordinate. ### Details of the coordinate transformation The linearly evaporating Vaidya metric is given by Eq. (40). We will perform a coordinate transformation from \((v,r)\) to \((\mathcal{U},R)\) coordinates, where these coordinates are defined in Eq.(43). The transformed metric is given by \[ds^{2}=e^{-2\alpha\mathcal{U}/r_{0}}\left(-\left(1-\frac{r_{0}}{R}+\frac{2 \alpha R}{r_{0}}\right)+2d\mathcal{U}dR+R^{2}d\Omega_{2}\right). \tag{26}\] We then define a timelike coordinate \(\tilde{t}\) by \[d\tilde{t}=d\mathcal{U}-b(R)\left(1-\frac{r_{0}}{R}+\frac{2\alpha R}{r_{0}} \right)^{-1}, \tag{27}\] that allows one to re-write the metric as \[ds^{2}=\mathpzc{g}_{\tilde{t}\tilde{t}}d\tilde{t}^{2}+2\mathpzc{g}_{\tilde{t} _{T}}d\tilde{t}dr+\mathpzc{g}_{rr}dr^{2}+r^{2}d\Omega_{(2)}\;, \tag{28}\] where explicit values for the metric components \(\texts{g}_{\mu\nu}(\tilde{t},r)\) are given below, and the coordinates \(\mathcal{U}\) and \(R\) that appear are treated as functions of \(\tilde{t}\) and \(r\). The function \(b(R)\) is chosen by requiring that the off-diagonal metric component \(\textsl{g}_{tr}\) vanishes. Furthermore, by using (43) and (44) the differential \(dR\) can be written as follows \[dR=\frac{A(r)}{1-\frac{r_{0}}{r}+\frac{(2-b(R))\alpha R}{r_{0}}}\left(e^{ \alpha d/r_{0}}dr+\frac{\alpha R}{r_{0}}d\tilde{t}\right)\;, \tag{100}\] where for simplicity we have defined \[A(r)=1-\frac{r_{0}}{R}+\frac{2\alpha R}{r_{0}}\;. \tag{101}\] The metric becomes \[ds^{2}=e^{-\frac{2\alpha d\mathcal{U}}{r_{0}}}\left[\Bigg{(}-A(r) +\frac{2(1-b(R))A(r)}{1-\frac{r_{0}}{R}+\frac{(2-b(R))\alpha R}{r_{0}}}\frac{ \alpha R}{r_{0}}+\frac{(2b(R)-b(R)^{2})A(r)}{\left(1-\frac{r_{0}}{R}+\frac{(2 -b(R))\alpha R}{r_{0}}\right)^{2}}\frac{a^{2}R^{2}}{r_{0}^{2}}\Bigg{)}d\tilde{ t}^{2}+\right. \tag{102}\] \[\left.+2e^{\frac{\alpha d\mathcal{U}}{r_{0}}}\left(\frac{(1-b(R) )A(r)}{1-\frac{r_{0}}{R}+\frac{(2-b(R))\alpha R}{r_{0}}}+\frac{(2b(R)-b(R)^{2 })A(r)}{\left(1-\frac{r_{0}}{R}+\frac{(2-b(R))\alpha R}{r_{0}}\right)^{2}} \frac{aR}{r_{0}}\right)d\tilde{t}dr+e^{2\alpha d\mathcal{U}/r_{0}}\frac{(2b(R )-b(R)^{2})A(r)}{\left(1-\frac{r_{0}}{R}+\frac{(2-b(R))\alpha R}{r_{0}}\right) ^{2}}dr^{2}+R^{2}d\Omega_{2}\right].\] Requiring the coefficient of the \(d\tilde{t}dr\) term to vanish gives the form of the function \(b(R)\) as in Eq.(46). As a result, the metric simplifies to Eq.(47) and the evaporation rate becomes \[\frac{dr_{g}}{d\tilde{t}}=-\frac{\alpha r_{g}}{r_{0}}\;, \tag{103}\] while from the relation (35), assuming a linear evaporation law \(\Gamma_{+}=\alpha\), we have that \[\Upsilon=\frac{\sqrt{\alpha}}{2\sqrt{2\pi}r_{g}}\;. \tag{104}\] Additionally, using the relation (15) one can show that \[\tilde{\xi}(\tilde{t})=\frac{\alpha^{2}r_{g}}{16\pi r_{0}^{2}\Upsilon^{2}}\;. \tag{105}\] As a consistency check, we can rewrite the expression for \(\tilde{h}\) in the vicinity of the apparent horizon, \[\tilde{h}(\tilde{t},r)\approx\ln\frac{\alpha r_{+}^{2}}{r_{0}(r-r_{+})}=\ln \frac{\alpha C^{2}}{r_{0}rf}\rightarrow\ln\frac{\alpha\sqrt{r_{g}}}{4\sqrt{ \pi}r_{0}\Upsilon\sqrt{x}}, \tag{106}\] and confirm the validity of the expression for \(\bar{\xi}\). A direct evaluation gives that Eq. (14) holds identically. ## Appendix B Useful relations in retarded coordinates ### Series expansion of the metric functions in (u,r) The line element of the metric in \((u,r)\) coordinates is given by \[ds^{2}=-e^{2h_{-}(u,r)}f(u,r)du^{2}-e^{h_{-}(u,r)}dudr+r^{2}d \Omega_{2}\;, \tag{107}\] where \[f(u,r)=1-\frac{C_{-}(u,r)}{r}\;, \tag{108}\] with \(C_{-}(u,r)\) representing the invariant MSH mass. The transformation laws from \((t,r)\) to \((u,r)\) coordinates are given by \[dt=e^{-h(t,r)}\left(e^{h_{-}(u,r)}du^{2}+\frac{dr}{f}\right)\;. \tag{109}\] The transformation law between \((v,r)\) and \((u,r)\) coordinates is obtained by combining equations (5) and (109), giving \[du=e^{-h_{-}(u,r)}\left(e^{h_{+}(v,r)}dv-\frac{2}{f}\right)\;. \tag{110}\] The Einstein equations in \((u,r)\) coordinates are \[-e^{-h_{-}}\partial_{u}C_{-}+f\partial_{r}C_{-}=8\pi r^{2}\bar{ \theta}_{u}\;, \tag{111}\] \[\partial_{r}C_{-}=8\pi r^{2}\bar{\theta}_{ur}\;,\] (112) \[\partial_{r}h_{-}=4\pi r\bar{\theta}_{r}\;, \tag{113}\] where the effective EMT components are defined as \[\bar{\theta}_{u}=e^{-2h_{-}}\bar{\Theta}_{uu}\;,\quad\bar{\theta}_{ ur}=e^{-h_{-}}\bar{\Theta}_{ur}\;,\quad\bar{\theta}_{r}=\bar{\Theta}_{rr}\;. \tag{100}\] Before proceeding with solving Einstein's equations, it is useful to write down the equations relating the effective EMT components in \((u,r)\) with the other coordinate systems. This can be done by transformation of the EMT components according to the laws (101) and (102). The relations are \[\bar{\theta}_{u}=\tau_{t}\;,\quad\bar{\theta}_{ur}=\frac{\tau_{t}+\tau_{t}^{r} }{f}\;,\quad\bar{\theta}_{r}=\frac{\tau_{t}+\tau^{r}+2\tau_{t}^{r}}{f^{2}}\;, \tag{101}\] and \[\bar{\theta}_{u}=\theta_{v}\;,\quad\bar{\theta}_{ur}=\theta_{vr}+\frac{2 \theta_{v}}{f}\;,\quad\bar{\theta}_{r}=\frac{4\theta_{v}+4f\theta_{vr}+f^{2} \theta_{r}}{f^{2}}\;. \tag{102}\] Solving the Einstein equations begins with the use of Eq. (100). We seek a solution of the form \[C_{-}(u,r)=r_{-}(u)+W_{-}(u,r)\;, \tag{103}\] with \(W_{-}(u,r_{-})=0\), so that \(r_{-}\) represents the apparent horizon radius in \((u,r)\) coordinates. We also define the variable \(z:=r-r_{-}\), so that near the apparent horizon we have the following partial differential equation \[\partial_{z}W_{-}=\frac{-16\pi r_{-}^{3}\Upsilon^{2}}{z-W_{-}}\;, \tag{104}\] which admits the following series solution \[W_{-}(u,r)=-4\Upsilon\sqrt{2\pi r_{-}^{3}}\sqrt{z}+\mathcal{O}(z)\;. \tag{105}\] The expansion of the MSH mass will be written in a similar form as \(C(t,r)\), \[C_{-}(u,r)=r_{-}(u)+\bar{c}_{12}(u)\sqrt{z}+\bar{c}_{1}(u)z+ \mathcal{O}(z^{3/2})\;, \tag{106}\] with \[\bar{c}_{12}(u)=-4\Upsilon\sqrt{2\pi r_{-}^{3}}\;. \tag{107}\] We continue with the solution of Eq. (100), which near the horizon (in terms of the variable \(z\)) becomes \[\partial_{z}h_{-}=-\frac{1}{2z}+\mathcal{O}\left(\frac{1}{\sqrt{z}}\right)\;. \tag{108}\] We thus obtain the solution \[h_{-}(z,r)=-\frac{1}{2}\ln\frac{z}{\bar{\xi}(u)}+\bar{h}_{12}(u) \sqrt{z}+\bar{h}_{1}(u)z+\mathcal{O}(z^{3/2})\;. \tag{109}\] The functions \(h(t,r)\) and \(h_{-}(u,r)\) exhibit the same logarithmically divergent behaviour at the horizon, leading to the aforementioned issues with \((t,r)\) and \((u,r)\) coordinates. We now proceed with the final Einstein equation (104), which plays the role of a consistency condition since \(C_{-}\) and \(h_{-}\) have already been determined. This consistency condition is used to extract information about the evaporation rate in \((u,r)\) coordinates. Using the solutions for \(C_{-}\) and \(h_{-}\) in (104), and taking the near-horizon limit, we have that \[r_{-}^{\prime}(u)=-2\sqrt{2\pi r_{-}\bar{\xi}(u)}\Upsilon\;. \tag{110}\] ### Coordinate transformations To derive the transformation between \((t,r)\) and \((u,r)\) coordinates, we proceed in a fashion similar to the \((v,r)\) case, expressing the variable \(z=r-r_{-}\) in terms of \(y\) and \(x\). The computations should be done along an ingoing null geodesic due to the non-singular behaviour of the advanced coordinate \(v\). We start by first considering the relation between \(z\) and \(y\), which requires determining the variation of \(u\) along the ingoing null geodesic. \(u\) can be written as a function of \(v\) and \(r\) using the transformation (102), \[u(v,r)=u(v,r_{+})+(\partial_{r}u)\big{|}_{y=0}\;y+\tfrac{1}{2}( \partial_{r}^{2}u)\big{|}_{y=0}y^{2}+\mathcal{O}(y^{3}), \tag{111}\] or in a simpler form \[\delta u=(\partial_{r}u)\big{|}_{y=0}\;y+\tfrac{1}{2}(\partial_{r}^{2}u)\big{|} _{y=0}\;y^{2}+\mathcal{O}(y^{3})\;. \tag{112}\] From the transformation law (102) and the expansions (106) and (109) we have that \[\partial_{r}u\big{|}_{y=0}=\big{(}-e^{-h_{-}}f^{-1}\big{)}\big{|}_{y=0}=\frac{ 1}{r_{-}^{\prime}(u)}\;. \tag{113}\] The variation of \(u\) can thus be written as \[\delta u=\frac{y}{r_{-}^{\prime}}+\tfrac{1}{2}(\partial_{r}^{2}u)\big{|}_{y=0 }\;y^{2}+\mathcal{O}(y^{3})\;. \tag{114}\] Now we can proceed with the calculation of the relation between \(z\) and \(y\). We define \(z\) as a function of \(v\) and \(r\) as \[z(v,r_{+}+y)=(r_{+}+y)-r_{-}(u(v,r_{+}+y))\;. \tag{115}\] The term \(r_{-}(u(v,r_{+}+y))\) is expanded as \[r_{-}(u(v,r_{+}+y)) =r_{-}(u(v,r_{+}))+r_{-}^{\prime}(u)\delta u \tag{116}\] \[\quad+\tfrac{1}{2}r_{-}^{\prime\prime}(u)\delta u^{2}\;.\] Identifying \(r_{-}(u(v,r_{+}))=r_{+}(v)\) and using the above equation in (115) we find that near the apparent horizon \(z\) and \(y\) are related through \[z=\frac{1}{2}\tilde{\omega}^{2}y^{2}\;, \tag{117}\] with \[\tilde{\omega}^{2}=-r_{-}^{\prime}(u)(\partial_{r}^{2}u)\big{|}_{y=0}\;-\; \frac{r_{-}^{\prime\prime}(u)}{(r_{-}^{\prime}(u))^{2}}\;. \tag{118}\] The derivative \((\partial_{r}^{2}u)|_{y=0}\) is finite and its determination is given in Appendix C.2. To determine the relationship between \(x\) and \(z\) we use the relations (29) and (100), which gives the following linear relationship between these coordinates: \[z=\frac{\bar{\omega}^{2}}{\omega^{2}}x \tag{101}\] We can find \(\bar{\omega}\) in the same manner as for \((t,r)\) coordinates by using the invariance of the MSH mass \[C_{-}(u(v,r),r)=C_{+}(v,r)\;. \tag{102}\] Using both expansions of the MSH mass respectively we have \[r_{-}(u(v,r))+\bar{c}_{12}(u)\sqrt{z}=r_{+}(v)+w_{1}(v)y+\;, \tag{103}\] with subleading terms of order \(+\mathcal{O}(z)\) and \(\mathcal{O}(y^{2})\). In order to compare the left and right hand side we need to first expand \(r_{-}(u(v,r))\) and then use the relation (100). This expansion is given by \[r_{-}(u(v,r))=r_{-}(u(v,r_{+}))+r^{\prime}_{-}(u)\delta u+\mathcal{O}(\delta u ^{2})\;. \tag{104}\] We identify \(r_{-}(u(v,r_{+}))=r_{+}(v)\) and make use of the relation (101), wherein Eq. (103) implies \[\bar{\omega}=\frac{1-w_{1}}{4\sqrt{\pi}r_{-}^{3/2}\Upsilon}\;. \tag{105}\] Finally, using the above equation and (30), (101) becomes \[x=2z\;. \tag{106}\] ### Condition for \(w_{1}=0\) The effective EMT component \(\bar{\theta}_{u}\) is defined by Eq. (100). The Einstein equations imply that \[\bar{\theta}_{u}=\frac{1}{8\pi}e^{-2h_{-}}\bar{G}_{uu}\;. \tag{107}\] Expanding the RHS of the above equation near the apparent horizon using the expansions (102) and (103) gives \[\bar{\theta}_{u}=-\Upsilon^{2}(t)+\left(\frac{\bar{c}_{1}(u)\Upsilon(t)}{ \sqrt{2\pi r_{-}^{3/2}}}-\bar{h}_{12}(u)\Upsilon^{2}(t)\right)\sqrt{z}+ \mathcal{O}(z)\;. \tag{108}\] However, Eqs.(103) and (104) hold identically, so we can compare the expansions and use the relation (106) to find that \[e_{12}(t)=\frac{1}{\sqrt{2}}\left(\frac{\bar{c}_{1}(u)\Upsilon(t)}{\sqrt{2 \pi r_{-}^{3/2}}}-\bar{h}_{12}(u)\Upsilon^{2}(t)\right). \tag{109}\] The same procedure using \(\tau^{r}\) instead gives a relation for \(p_{12}(t)\). Combining equations (103) implies that \[\tau^{r}=f^{2}\bar{\theta}_{r}+\bar{\theta}_{u}-2f\bar{\theta}_{ur}\;. \tag{110}\] The RHS of the above equation can be expanded about the apparent horizon in the same manner as was done for \(\bar{\theta}_{u}\), by using the definition of the effective EMT components (100) and the Eqs. (102) and (103). The expansion for the LHS is given by (102). Using the transformation law (106) and comparing the expansions then gives \[p_{12}(t)=\frac{1}{\sqrt{2}}\left(-\frac{\Upsilon^{2}(t)}{\sqrt{2\pi r_{-}^{3 }}}+3\bar{h}_{12}(u)\Upsilon^{2}(t)\right)\;. \tag{111}\] The condition (102), assuming \(w_{1}(v)=0\), is then equivalent to the condition that \(e_{12}=p_{12}\) which immediately yields \[\bar{c}_{1}(u)=4\sqrt{2\pi r_{-}^{3}}\bar{h}_{12}(u)\Upsilon(t)-1\quad\text{ when }w_{1}=0\;. \tag{112}\] ### Evaporation relations It is useful to derive relations that connect the evaporation law in \((u,r)\) coordinates with the other coordinate systems used in this paper. We begin by writing the evaporation law in \((u,r)\), assuming it has the following form: \[r^{\prime}_{-}(u)=-\Gamma_{-}(r_{-}) \tag{113}\] This implies that \[\frac{r^{\prime}_{-}(u)}{r^{\prime}_{g}(t)}=\frac{-\Gamma_{-}}{-\Gamma}=\frac {-4\sqrt{\pi r_{g}\xi(t)}\Upsilon}{-2\sqrt{2\pi r_{-}\xi(u)}\Upsilon}\;, \tag{114}\] which leads to a relation between \(\xi(t)\) and \(\bar{\xi}(u)\), \[\bar{\xi}(u)=\frac{2\,\Gamma_{-}^{2}}{\Gamma^{2}}\,\xi(t)\;. \tag{115}\] This relation represents a constraint between \(\xi(t)\) and \(\bar{\xi}(u)\), which must be satisfied in order to have the same functional form of the evaporation law in both \((t,r)\) and \((u,r)\) coordinates. A relation between the evaporation rate in \((u,r)\) and \((v,r)\) can also be found, by using Eq. (100) and Eq. (32), giving \[\Gamma_{-}^{2}=\frac{(1-w_{1})\bar{\xi}(u)}{r_{+}}\Gamma_{+}\;. \tag{116}\] ## Appendix C Second derivatives of time along a null geodesic ### Second partial derivative in \((t,r)\) For the calculation of the second derivative which appears in \(\omega\), it is necessary to use an ingoing null geodesic due to the singular nature of the coordinate \(t\) at the apparent horizon. For the metric (2), ingoing null rays are described by \[\frac{dt}{dr}=-e^{-h}f^{-1}\;. \tag{117}\] The second partial derivative can be written as \[\frac{\partial^{2}t}{\partial r^{2}}\bigg{|}_{v}=\frac{d}{dr}\left(\frac{dt}{dr} \right)\bigg{|}_{v}=\frac{d}{dr}\left(-e^{-h}f^{-1}\right)\;, \tag{100}\] where (101) is used in the final equality. Explicit calculation of the above equation along an ingoing null geodesic leads to \[\frac{\partial^{2}t}{\partial r^{2}}\bigg{|}_{v} =e^{-h}\left(-(\partial_{t}h)e^{-h}f^{-2}+(\partial_{r}h)f^{-1}\right.\] \[\left.-(\partial_{t}f)e^{-h}f^{-3}+f^{-2}\partial_{r}f\right)\;. \tag{102}\] A series expansion near the apparent horizon then leads to the following expression \[\frac{\partial^{2}t}{\partial r^{2}}\bigg{|}_{v}=\frac{1}{32\pi^{ 3/2}r_{g}^{7/2}\xi^{2}\Upsilon^{5}}\Big{(}\pi e_{12}^{2}r_{g}^{3}\xi^{3/2}+\pi p _{12}^{2}r_{g}^{3}\xi^{3/2}+2\sqrt{\pi}p_{12}r_{g}^{3/2}\xi^{3/2}\Upsilon-2 \sqrt{\pi}e_{12}r_{g}^{3/2}\xi^{3/2}\left(\sqrt{\pi}p_{12}r_{g}^{3/2}+\Upsilon \right)+\] \[+\Upsilon^{2}\left(\xi^{3/2}\left(1+4\pi r_{g}^{2}\Upsilon^{2} \right)-\sqrt{\pi}r_{g}^{5/2}\Upsilon\xi^{\prime}-2\sqrt{\pi}r_{g}^{5/2}\xi \Upsilon^{\prime}\right)\Big{)}+\mathcal{O}(\sqrt{x})\;. \tag{103}\] We are interested in the specific case where \(w_{1}=0\), a condition which was shown in Appendix A.2 to be equivalent to \(e_{12}=p_{12}\). Using this condition in (103) simplifies the result as follows: \[\frac{\partial^{2}t}{\partial r^{2}}\bigg{|}_{v} =\frac{1}{32\pi^{3/2}r_{g}^{7/2}\xi^{2}\Upsilon^{3}}\left(\xi^{3/ 2}\left(1+4\pi r_{g}^{2}\Upsilon^{2}\right)\right. \tag{104}\] \[\left.-\sqrt{\pi}r_{g}^{5/2}\Upsilon\xi^{\prime}-2\sqrt{\pi}r_{g} ^{5/2}\xi\Upsilon^{\prime}\right)+\mathcal{O}(\sqrt{x})\] \(\omega^{2}\) can then be determined using the general expression for the partial derivative (103) and (29). In the general case \(\omega^{2}\) thus takes the form \[\omega^{2}=\frac{\left(\sqrt{\pi}(-e_{12}+p_{12})r_{g}^{3/2}+\Upsilon\right)^ {2}}{8\pi r_{g}^{3}\Upsilon^{4}}+\mathcal{O}(\sqrt{x})\;, \tag{105}\] while for the special case \(w_{1}(v)=0\) (i.e. \(e_{12}=p_{12}\)) we have that \[\omega^{2}=\frac{1}{8\pi r_{g}^{3}\Upsilon^{2}}\;, \tag{106}\] where (101) is used in the final equality. Explicit calculation of the above equation along an ingoing null geodesic leads to \[\frac{\partial^{2}t}{\partial r^{2}}\bigg{|}_{v} =e^{-h}\left(-(\partial_{t}h)e^{-h}f^{-2}+(\partial_{r}h)f^{-1}\right.\] \[\left.-(\partial_{t}f)e^{-h}f^{-3}+f^{-2}\partial_{r}f\right)\;. \tag{107}\] A series expansion near the apparent horizon then leads to the following expression \[\frac{\partial^{2}t}{\partial r^{2}}\bigg{|}_{v} =\frac{1}{512\pi^{3/2}r_{-}^{7/2}\bar{\xi}^{2}\Upsilon^{3}}\Big{(} -16\sqrt{\pi}(-1+\bar{c}_{1}(u))\bar{h}_{12}(u)r_{-}^{3/2}\bar{\xi}^{3/2} \Upsilon+\] \[+\sqrt{2}\bar{\xi}^{3/2}\left((-1+\bar{c}_{1}(u))^{2}+32\pi r_{-} ^{2}(1+\bar{h}_{12}^{2}(u)r_{-})\Upsilon^{2}\right)-8\sqrt{\pi}r_{-}^{5/2} \Upsilon\bar{\xi}^{\prime}\Big{)}+\mathcal{O}(\sqrt{z})\;. \tag{108}\] This expression holds for the general case (\(w_{1}\neq 0\)) but simplifies greatly in the near-Vaidya limit \(w_{1}=0\). Using the condition (102) for \(w_{1}=0\) gives \[\frac{\partial^{2}u}{\partial r^{2}}\bigg{|}_{v}=\frac{\sqrt{2}\bar{\xi}^{3/2}(1 +8\pi r_{-}^{2}\Upsilon^{2})-2\sqrt{\pi}r_{-}^{5/2}\Upsilon\bar{\xi}^{\prime}} {128\pi^{3/2}r_{-}^{7/2}\bar{\xi}\Upsilon^{3}}+\mathcal{O}(\sqrt{z})\;. \tag{109}\] \(\tilde{\omega}\) is then given in the general case by \[\tilde{\omega}^{2}=\frac{\bar{h}_{12}^{2}(u)}{2}+\frac{(\bar{c}_{1}(u)-1)^{2}}{64 \pi r_{-}^{3}\Upsilon^{2}}-\frac{(\bar{c}_{1}(u)-1)\bar{h}_{12}(u)}{2\sqrt{2\pi} r_{-}^{3/2}\Upsilon}+\mathcal{O}(\sqrt{2}) \tag{13}\] while in the special case where \(w_{1}=0\) we have that \[\tilde{\omega}^{2}=\frac{1}{16\pi r_{-}^{3}\Upsilon^{2}}\;. \tag{14}\] This expression for \(\tilde{\omega}^{2}\) in the near-Vaidya limit is in agreement with Eq. (12) for \(w_{1}=0\).
2307.07660
Zip-zip Trees: Making Zip Trees More Balanced, Biased, Compact, or Persistent
We define simple variants of zip trees, called zip-zip trees, which provide several advantages over zip trees, including overcoming a bias that favors smaller keys over larger ones. We analyze zip-zip trees theoretically and empirically, showing, e.g., that the expected depth of a node in an $n$-node zip-zip tree is at most $1.3863\log n-1+o(1)$, which matches the expected depth of treaps and binary search trees built by uniformly random insertions. Unlike these other data structures, however, zip-zip trees achieve their bounds using only $O(\log\log n)$ bits of metadata per node, w.h.p., as compared to the $\Theta(\log n)$ bits per node required by treaps. In fact, we even describe a ``just-in-time'' zip-zip tree variant, which needs just an expected $O(1)$ number of bits of metadata per node. Moreover, we can define zip-zip trees to be strongly history independent, whereas treaps are generally only weakly history independent. We also introduce \emph{biased zip-zip trees}, which have an explicit bias based on key weights, so the expected depth of a key, $k$, with weight, $w_k$, is $O(\log (W/w_k))$, where $W$ is the weight of all keys in the weighted zip-zip tree. Finally, we show that one can easily make zip-zip trees partially persistent with only $O(n)$ space overhead w.h.p.
Ofek Gila, Michael T. Goodrich, Robert E. Tarjan
2023-07-14T23:47:40Z
http://arxiv.org/abs/2307.07660v2
# Zip-zip Trees: Making Zip Trees More Balanced, Biased, Compact, or Persistent+ ###### Abstract We define simple variants of zip trees, called _zip-zip trees_, which provide several advantages over zip trees, including overcoming a bias that favors smaller keys over larger ones. We analyze zip-zip trees theoretically and empirically, showing, e.g., that the expected depth of a node in an \(n\)-node zip-zip tree is at most \(1.3863\log n-1+o(1)\), which matches the expected depth of treaps and binary search trees built by uniformly random insertions. Unlike these other data structures, however, zip-zip trees achieve their bounds using only \(O(\log\log n)\) bits of metadata per node, w.h.p., as compared to the \(\Theta(\log n)\) bits per node required by treaps. In fact, we even describe a "just-in-time" zip-zip tree variant, which needs just an expected \(O(1)\) number of bits of metadata per node. Moreover, we can define zip-zip trees to be strongly history independent, whereas treaps are generally only weakly history independent. We also introduce _biased zip-zip trees_, which have an explicit bias based on key weights, so the expected depth of a key, \(k\), with weight, \(w_{k}\), is \(O(\log(W/w_{k}))\), where \(W\) is the weight of all keys in the weighted zip-zip tree. Finally, we show that one can easily make zip-zip trees partially persistent with only \(O(n)\) space overhead w.h.p. ## 1 Introduction A _zip tree_ is a randomized binary search tree introduced by Tarjan, Levy, and Timmel [27]. Each node contains a specified key and a small randomly generated _rank_. Nodes are in symmetric order by key, smaller to larger, and in max-heap order by rank. At a high level, zip trees are similar to other random search structures, such as the _treap_ data structure of Seidel and Aragon [24], the _skip list_ data structure of Pugh [21], and the _randomized binary search tree_ (RBST) data structure of Martinez and Roura [17], but with two advantages: 1. Insertions and deletions in zip trees are described in terms of simple "zip" and "unzip" operations rather than sequences of rotations as in treaps and RBSTs, which are arguably more complicated; and 2. Like treaps, zip trees organize keys using random ranks, but the ranks used by zip trees use \(\Theta(\log\log n)\) bits each, whereas the key labels used by treaps and RBSTs use \(\Theta(\log n)\) bits each. Also, as we review and expand upon, zip trees are topologically isomorphic to skip lists, but use less space. In addition, zip trees have a desirable privacy-preservation property with respect to their _history independence_[16]. A data structure is _weakly history independent_ if, for any two sequences of operations \(X\) and \(Y\) that take the data structure from initialization to state \(A\), the distribution over memory after \(X\) is performed is identical to the distribution after \(Y\). Thus, if an adversary observes the final state of the data structure, the adversary cannot determine the sequence of operations that led to that state. A data structure is _strongly history independent_, on the other hand, if, for any two (possibly empty) sequences of operations \(X\) and \(Y\) that take a data structure in state \(A\) to state \(B\), the distribution over representations of \(B\) after \(X\) is performed on a representation, \(r\), is identical to the distribution after \(Y\) is performed on \(r\). Thus, if an adversary observes the states of the data structure at different times, the adversary cannot determine the sequence of operations that lead to the second state beyond just what can be inferred from the states themselves. For example, it is easy to show that skip lists and zip trees are strongly history independent, and that treaps and RBSTs are weakly history independent.3 Footnote 3: If the random priorities used in a treap are distinct and unchanging for all keys and all time (which occurs only probabilistically), then the treap is strongly history independent. Indeed, zip trees and skip lists are strongly history independent for exactly the same reason, since Tarjan, Levy, and Timmel [27] define zip trees using a tie-breaking rule for ranks that makes zip trees isomorphic to skip lists, so that, for instance, a search in a zip tree would encounter the same keys as would be encountered in a search in an isomorphic skip list. This isomorphism between zip trees and skip lists has a potentially undesirable property, however, in that there is an inherent bias in a zip tree that favors smaller keys over larger keys. For example, as we discuss, the analysis from Tarjan, Levy, and Timmel [27] implies that the expected depth of the smallest key in an (original) zip tree is \(0.5\log n\) whereas the expected depth of the largest key is \(\log n\). Moreover, this same analysis implies that the expected depth for any node in a zip tree is at most \(1.5\log n+O(1)\), whereas Seidel and Aragon [24] show that the expected depth of any node in a treap is at most \(1.3863\log n+1\), and Martinez and Roura [17] prove a similar result for RBSTs. As mentioned above, the inventors of zip trees chose their tie-breaking rule to provide an isomorphism between zip trees and skip lists. But one may ask if there is a (hopefully simple) modification to the tie-breaking rule for zip trees that makes them more balanced for all keys, ideally while still maintaining the property that they are strongly history independent and that the metadata for keys in a zip tree requires only \(O(\log\log n)\) bits per key w.h.p. In this paper, we show how to improve the balance of nodes in zip trees by a remarkably simple change to its tie-breaking rule for ranks. Specifically, we describe and analyze a zip-tree variant we call _zip-zip trees_, where we give each key a rank pair, \(r=(r_{1},r_{2})\), such that \(r_{1}\) is chosen from a geometric distribution as in the original definition of zip trees, and \(r_{2}\) is an integer chosen uniformly at random, e.g., in the range \([1,\log^{c}n]\), for \(c\geq 3\). We build a zip-zip tree just like an original zip tree, but with these rank pairs as its ranks, ordered and compared lexicographically. We also consider a just-in-time (JIT) variant of zip-zip trees, where we build the secondary \(r_{2}\) ranks bit by bit as needed to break ties. Just like an original zip tree, zip-zip trees (with static secondary ranks) are strongly history independent, and, in any variant, each rank in a zip-zip tree requires only \(O(\log\log n)\) bits w.h.p. Nevertheless, as we show (and verify experimentally), the expected depth of any node in a zip-zip tree storing \(n\) keys is at most \(1.3863\log n-1+o(1)\), whereas the expected depth of a node in an original zip tree is \(1.5\log n+O(1)\), as mentioned above. We also show (and verify experimentally) that the expected depths of the smallest and largest keys in a zip-zip tree are the same--namely, they both are at most \(0.6932\log n+\gamma+o(1)\), where \(\gamma=0.577721566\ldots\) is the Euler-Mascheroni constant. In addition to showing how to make zip trees more balanced, by using the zip-zip tree tie-breaking rule, we also describe how to make them more biased for weighted keys. Specifically, we study how to store weighted keys in a zip-zip tree, to define the following variant (which can also be implemented for the original zip-tree tie-breaking rule): * _biased zip-zip trees_: These are a biased version of zip-zip trees, which support searches with expected performance bounds that are logarithmic in \(W/w_{k}\), where \(W\) is the total weight of all keys in the tree and \(w_{k}\) is the weight of the search key, \(k\). Biased zip-zip trees can be used in simplified versions of the link-cut tree data structure of Sleator and Tarjan [26] for dynamically maintaining arbitrary trees, which has many applications, e.g., see Acar [1]. Zip-zip trees and biased zip-zip trees utilize only \(O(\log\log n)\) bits of metadata per key w.h.p. (assuming polynomial weights in the weighted case) and are strongly history independent. The just-in-time (JIT) variant utilizes only \(O(1)\) bits of metadata per operation w.h.p. but lacks history independence. Moreover, if zip-zip trees are implemented using the tiny pointers technique of Bender, Conway, Farach-Colton, Kuszmaul, and Tagliavini [5], then all of the non-key data used to implement such a tree requires just \(O(n\log\log n)\) bits overall w.h.p. Additional Prior Work.Before we provide our results, let us briefly review some additional related prior work. Although this analysis doesn't apply to treaps or RBSTs, Devroye [8, 9] shows that the expected height of a randomly-constructed binary search tree tends to \(4.311\log n\) in the limit, which tightened a similar earlier result of Flajolet and Odlyzko [12]. Reed [22] tightened this bound even further, showing that the variance of the height of a randomly-constructed binary search tree is \(O(1)\). Eberl, Haslbeck, and Nipkow [11] show that this analysis also applies to treaps and RBSTs, with respect to their expected height. Papadakis, Munro, and Poblete [20] provide an analysis for the expected search cost in a skip list, showing the expected cost is roughly \(2\log n\). With respect to weighted keys, Bent, Sleator, and Tarjan [6] introduce a _biased search tree_ data structure, for storing a set, \(\mathcal{K}\), of \(n\) weighted keys, with a search time of \(O(\log(W/w_{k}))\), where \(w_{k}\) is the weight of the search key, \(k\), and \(W=\sum_{k\in\mathcal{K}}w_{k}\). Their data structure is not history independent, however. Seidel and Aragon [24] provide a weighted version of treaps, which are weakly history independent and have expected \(O(\log(W/w_{k}))\) access times, but weighted treaps have weight-dependent key labels that use exponentially more bits than are needed for weighted zip-zip trees. Afek, Kaplan, Korenfeld, Morrison, and Tarjan [2] provide a fast concurrent self-adjusting biased search tree when the weights are access frequencies. Zip trees and by extension zip-zip trees would similarly work well in a concurrent setting as most updates only affect the bottom of the tree, although such an implementation is not explored in this paper. Bagchi, Buchsbaum, and Goodrich [4] introduce randomized _biased skip lists_, which are strongly history independent and where the expected time to access a key, \(k\), is likewise \(O(\log(W/w_{k}))\). Our weighted zip-zip trees are dual to biased skip lists, but use less space. ## 2 A Review of Zip Trees In this section, we review the (original) zip tree data structure of Tarjan, Levy, and Timmel [27]. A Brief Review of Skip Lists.We begin by reviewing a related structure, namely, the _skip list_ structure of Pugh [21]. A skip list is a hierarchical, linked collection of sorted lists that is constructed using randomization. All keys are stored in level 0, and, for each key, \(k\), in level \(i\geq 0\), we include \(k\) in the list in level \(i+1\) if a random coin flip (i.e., a random bit) is "heads" (i.e., 1), which occurs with probability \(1/2\) and independent of all other coin flips. Thus, we expect half of the elements from level \(i\) to also appear in level \(i+1\). In addition, every level includes a node that stores a key, \(-\infty\), that is less than every other key, and a node that stores a key, \(+\infty\), that is greater than every other key. The highest level of a skip list is the smallest \(i\) such that the list at level \(i\) only stores \(-\infty\) and \(+\infty\). (See Figure 1.) The following theorem follows from well-known properties of skip lists. Theorem 2.1: _Let \(S\) be a skip list built from \(n\) distinct keys. The probability that the height of \(S\) is more than \(\log n+f(n)\) is at most \(2^{-f(n)}\), for any monotonically increasing function \(f(n)>0\)._ Proof: Note that the highest level in \(S\) is determined by the random variable \(X=\max\{X_{1},X_{2},\ldots,X_{n}\}\), where each \(X_{i}\) is an independent geometric random variable with success probability \(1/2\). Thus, for any \(i=1,2,\ldots,n\), \[\Pr(X_{i}>\log n+f(n))<2^{-(\log n+f(n))}=2^{-f(n)}/n;\] hence, by a union bound, \(\Pr(X>\log n+f(n))<2^{-f(n)}\). _Zip Trees and Their Isomorphism to Skip Lists._ Let us next review the definition of the (original) zip tree data structure [27]. A zip tree is a binary search tree where nodes are max-heap ordered according to random _ranks_, with ties broken in favor of smaller keys, so that the parent of a node has rank greater than that of its left child and no less than that of its right child [27]. The rank of a node is drawn from a geometric distribution with success probability \(1/2\), starting from a rank \(0\), so that a node has rank \(k\) with probability \(1/2^{k+1}\). As noted by Tarjan, Levy, and Timmel [27], there is a natural isomorphism between a skip-list, \(L\), and a zip tree, \(T\), where \(L\) contains a key \(k\) in its level-\(i\) list if and only if \(k\) has rank at least \(i\) in \(T\). That is, the rank of a key, \(k\), in \(T\) equals the highest level in \(L\) that contains \(k\). See Figure 2. Incidentally, this isomorphism is topologically identical to a duality between skip lists and binary search trees observed earlier by Dean and Jones [7], but the constructions of Dean and Jones are for binary search trees that involve rotations to maintain balance and have different metadata than zip trees, so, apart from the topological similarities, the analyses of Dean and Jones don't apply to zip trees. As we review in an appendix, insertion and deletion in a zip tree are done by simple "unzip" and "zip" operations, and these same algorithms also apply to the variants we discuss in this paper, with the only difference being the way we define ranks. An advantage of a zip tree, \(T\), over its isomorphic skip list, \(L\), is that \(T\)'s space usage is roughly half of that of \(L\), and \(T\)'s sea Figure 1: An example skip list. Figure 2: An example zip tree, corresponding to the skip list in Figure 1. Nevertheless, there is a potential undesirable property of zip trees, in that an original zip tree is biased towards smaller keys, as we show in the following. Theorem 2.2: _Let \(T\) be an (original) zip tree storing \(n\) distinct keys. Then the expected depth of the smallest key is \(0.5\log n+O(1)\) whereas the expected depth of the largest key is \(\log n+O(1)\)._ Proof: The bound for the largest (resp., smallest) key follows immediately from Lemma 3.3 (resp., Lemma 3.4) from Tarjan, Levy, and Timmel [27] and the fact that the expect largest rank in \(T\) is at most \(\log n+O(1)\). That is, the expected depth of the largest key in an original zip tree is twice that of the smallest key. This bias also carries over, unfortunately, into the characterization of Tarjan, Levy, and Timmel [27] for the expected depth of a node in an original zip tree, which they show is at most \(1.5\log n+O(1)\). In contrast, the expected depth of a node in a treap or randomized binary search tree can be shown to be at most \(1.39\log n+O(1)\)[24, 17]. ## 3 Zip-zip Trees In this section, we define and analyze the zip-zip tree data structure. Uniform Zip Trees.As a warm-up, let us first define a variant to an original zip tree, called a _uniform zip tree_, which is a zip tree where we define the rank of each key to be a random integer drawn independently from a uniform distribution over a suitable range. We perform insertions and deletions in a uniform zip tree exactly as in an original zip tree, except that rank comparisons are done using these uniform ranks rather than using ranks drawn from a geometric distribution. Thus, if there are no rank ties that occur during its construction, then a uniform zip tree is a treap [24]. But if a rank tie occurs, we resolve it using the tie-breaking rule for a zip tree, rather than doing a complete tree rebuild, as is done for a treap [24]. Still, we introduce uniform zip trees only as a stepping stone to our definition of zip-zip trees, which we give next. Zip-zip Trees.A _zip-zip tree_ is a zip tree where we define the rank of each key to be the pair, \(r=(r_{1},r_{2})\), where \(r_{1}\) is drawn independently from a geometric distribution with success probability \(1/2\) (as in the original zip tree) and \(r_{2}\) is an integer drawn independently from a uniform distribution on the interval \([1,\log^{c}n]\), for \(c\geq 3\). We perform insertions and deletions in a zip-zip tree exactly as in an original zip tree, except that rank comparisons are done lexicographically based on the \((r_{1},r_{2})\) pairs. That is, we perform an update operation focused primarily on the \(r_{1}\) ranks, as in the original zip tree, but we break ties by reverting to \(r_{2}\) ranks. And if we still get a rank tie for two pairs of ranks, then we break these ties as in original zip tree approach, biasing in favor of smaller keys. Fortunately, as we show, such ties occur with such low probability that they don't significantly impact the expected depth of any node in a zip-zip tree, and this also implies that the expected depth of the smallest key in a zip-zip tree is the same as for the largest key. Let \(x_{i}\) be a node in a zip-zip tree, \(T\). Define the \(r_{1}\)_-rank group_ of \(x_{i}\) as the connected subtree of \(T\) comprising all nodes with the same \(r_{1}\)-rank as \(x_{i}\). That is, each node in \(x_{i}\)'s \(r_{1}\)-rank group has a rank tie with \(x_{i}\) when comparing ranks with just the first rank coordinate, \(r_{1}\). Lemma 1: _The \(r_{1}\)-rank group for any node, \(x_{i}\), in a zip-zip tree is a uniform zip tree defined using \(r_{2}\)-ranks._ Proof: The proof follows immediately from the definitions. Incidentally, Lemma 1 is the motivation for the name "zip-zip tree," since a zip-zip tree can be viewed as a zip tree comprised of little zip trees. Moreover, this lemma immediately implies that a zip-zip tree is strongly history independent, since both zip trees and uniform zip trees are strongly history independent. See Figure 3. Lemma 2: _The number of nodes in an \(r_{1}\)-rank group in a zip-zip tree, \(T\) storing \(n\) keys has expected value \(2\) and is at most \(2\log n\) w.h.p._ Proof: By the isomorphism between zip trees and skip lists, the set of nodes in an \(r_{1}\)-rank group in \(T\) is dual to a sequence of consecutive nodes in a level-\(r_{1}\) list in the skip list but not in the level-\((r_{1}+1)\) list. Thus, the number of nodes, \(X\), in an \(r_{1}\)-rank group is a random variable drawn from a geometric distribution with success probability \(1/2\); hence, \(E[X]=2\) and \(X\) is at most \(2\log n\) with probability at least \(1-1/n^{2}\). Moreover, by a union bound, all the \(r_{1}\)-rank groups in \(T\) have size at most \(2\log n\) with probability at least \(1-1/n\). We can also define a variant of a zip-zip tree that is not history independent but which uses only \(O(1)\) bits of metadata per key in expectation. Figure 3: A zip-zip tree, with each node labeled with its \((r_{1},r_{2})\) rank. Each shaded subtree is an \(r_{1}\)-rank group defining a uniform zip tree based on \(r_{2}\) ranks. Just-in-Time Zip-zip Trees.In a _just-in-time (JIT) zip-zip tree_, we define the \((r_{1},r_{2})\) rank pair for a key, \(x_{i}\), so that \(r_{1}\) is (as always) drawn independently from a geometric distribution with success probability \(1/2\), but where \(r_{2}\) is an initially empty string of random bits. If at any time during an update in a JIT zip-zip tree, there is a tie between two rank pairs, \((r_{1,i},r_{2,i})\) and \((r_{1,j},r_{2,j})\), for two keys, \(x_{i}\) and \(x_{j}\), respectively, then we independently add unbiased random bits, one bit at a time, to \(r_{2,i}\) and \(r_{2,j}\) until \(x_{i}\) and \(x_{j}\) no longer have a tie in their rank pairs, where \(r_{2}\)-rank comparisons are done by viewing the binary strings as binary fractions after a decimal point. Note that the definition of an \(r_{1}\)-rank group is the same for a JIT zip-zip tree as a (standard) zip-zip tree. Rather than store \(r_{1}\)-ranks explicitly, however, we store them as a difference between the \(r_{1}\)-rank of a node and the \(r_{1}\)-rank of its parent (except for the root). Moreover, by construction, each \(r_{1}\)-rank group in a JIT zip-zip tree is a treap; hence, a JIT zip-zip tree is topologically isomorphic to a treap. We prove the following theorem in an appendix. Theorem 4.1: _Let \(T\) be a JIT zip-zip tree resulting from \(n\) update operations starting from an initially empty tree. The expected number of bits for rank metadata in any non-root node in \(T\) is \(O(1)\) and the number of bits required for all the rank metadata in \(T\) is \(O(n)\) w.h.p._ Depth Analysis.The main theoretical result of this paper is the following. Theorem 4.2: _The expected depth, \(\delta_{j}\), of the \(j\)-th smallest key in a zip-zip tree, \(T\), storing \(n\) keys is equal to \(H_{j}+H_{n-j+1}-1+o(1)\), where \(H_{n}=\sum_{i=1}^{n}(1/i)\) is the \(n\)-th harmonic number._ Proof: Let us denote the ordered list of (distinct) keys stored in \(T\) as \(L=(x_{1},x_{2},\ldots,x_{n})\), where we use "\(x_{j}\)" to denote both the node in \(T\) and the key that is stored there. Let \(X\) be a random variable equal to the depth of the \(j\)-th smallest key, \(x_{j}\), in \(T\), and note that \[X=\sum_{i=1,\ldots,j-1,j+1,\ldots,n}X_{i},\] where \(X_{i}\) is an indicator random variable that is \(1\) iff \(x_{i}\) is an ancestor of \(x_{j}\). Let \(A\) denote the event where the \(r_{1}\)-rank of the root, \(z\), of \(T\) is more than \(3\log n\), or the total size of all the \(r_{1}\)-rank groups of \(x_{j}\)'s ancestors is more than \(d\log n\), for a suitable constant, \(d\), chosen so that, by Lemma 3 (in an appendix), \(\Pr(A)\leq 2/n^{2}\). Let \(B\) denote the event, conditioned on \(A\) not occurring, where the \(r_{1}\)-rank group of an ancestor of \(x_{j}\) contains two keys with the same rank, i.e., their ranks are tied even after doing a lexicographic rank comparison. Note that, conditioned on \(A\) not occurring, and assuming \(c\geq 4\) (for the sake of a \(o(1)\) additive term4), the probability that any two keys in any of the \(r_{1}\)-rank groups of \(x_{j}\)'s ancestors have a tie among their \(r_{2}\)-ranks is at most \(d^{2}\log^{2}n/\log^{4}n\); hence, \(\Pr(B)\leq d^{2}/\log^{2}n\). Finally, let \(C\) denote the complement event to both \(A\) and \(B\), that is, the \(r_{1}\)-rank of \(z\) is less than \(3\log n\) and each \(r_{1}\)-rank group for an ancestor of \(x_{j}\) has keys with unique \((r_{1},r_{2})\) rank pairs. Thus, by the definition of conditional expectation, \[\delta_{j}=E[X] =E[X|A]\cdot\Pr(A)+E[X|B]\cdot\Pr(B)+E[X|C]\cdot\Pr(C)\] \[\leq\frac{2n}{n^{2}}+\frac{d^{3}\log n}{\log^{2}n}+E[X|C]\] \[\leq E[X|C]+o(1).\] So, for the sake of deriving an expectation for \(X\), let us assume that the condition \(C\) holds. Thus, for any \(x_{i}\), where \(i\neq j\), \(x_{i}\) is an ancestor of \(x_{j}\) iff \(x_{i}\)'s rank pair, \(r=(r_{1},r_{2})\), is the unique maximum such rank pair for the keys from \(x_{i}\) to \(x_{j}\), inclusive, in \(L\) (allowing for either case of \(x_{i}<x_{j}\) or \(x_{j}<x_{i}\), and doing rank comparisons lexicographically). Since each key in this range has equal probability of being assigned the unique maximum rank pair among the keys in this range, \[\Pr(X_{i}=1)=\frac{1}{|i-j|+1}.\] Thus, by the linearity of expectation, \[E[X|C]=H_{j}+H_{n+1-j}-1.\] Therefore, \(\delta_{j}=H_{j}+H_{n+1-j}-1+o(1)\). This immediately gives us the following: Corollary 1: _The expected depth, \(\delta_{j}\), of the \(j\)-th smallest key in a zip-zip tree, \(T\), storing \(n\) keys can be bounded as follows:_ 1. _If_ \(j=1\) _or_ \(j=n\)_, then_ \(\delta_{j}<\ln n+\gamma+o(1)<0.6932\log n+\gamma+o(1)\)_, where_ \(\gamma=0.57721566\ldots\) _is the Euler-Mascheroni constant._ 2. _For any_ \(1\leq j\leq n\)_,_ \(\delta_{j}<2\ln n-1+o(1)<1.3863\log n-1+o(1)\)_._ Proof: The bounds all follow from Theorem 4, the fact that \(\ln 2=0.69314718\ldots\), and Franel's inequality (see, e.g., Guo and Qi [14]): \[H_{n}<\ln n+\gamma+\frac{1}{2n}.\] Thus, for (1), if \(j=1\) or \(j=n\), \(\delta_{j}=H_{n}<\ln n+\gamma+o(1)\). For (2), if \(1\leq j\leq n\), \[\delta_{j} =H_{j}+H_{n-j+1}-1\] \[<\ln j+\ln(n-j+1)+2\gamma-1+o(1)\] \[\leq 2\ln n-1+o(1),\] since \(\ln 2>\gamma\) and \(j(n-j+1)\) is maximized at \(j=n/2\) or \(j=(n+1)/2\). Incidentally, these are actually tighter bounds than those derived by Seidel and Aragon for treaps [24], but similar bounds can be shown to hold for treaps. Making Zip-zip Trees Partially Persistent.A data structure that can be updated in a current version while also allowing for queries in past versions is said to be _partially persistent_, and Driscoll, Sarnak, Sleator, and Tarjan [10] show how to make any bounded-degree linked structure, like a binary search tree, \(T\), into a partially persistent data structure by utilizing techniques employing "fat nodes" and "node splitting." They show that if a sequence of \(n\) updates on \(T\) only modifies \(O(n)\) data fields and pointers, then \(T\) can be made partially persistent with only an constant-factor increase in time and space for processing the sequence of updates, and allows for queries in any past instance of \(T\). We show in an appendix that zip-zip trees have this property, w.h.p., thereby proving the following theorem. Theorem 3.1: _One can transform an initially empty zip-zip tree, \(T\), to be partially persistent, over the course of \(n\) insert and delete operations, so as to support, w.h.p., \(O(\log n)\) amortized-time updates in the current version and \(O(\log n)\)-time queries in the current or past versions, using \(O(n)\) space._ ## 4 Experiments We augment our theoretical findings with experimental results, where we repeatedly constructed search trees with keys, \(\{0,1,\ldots,n-1\}\), inserted in order (since insertion order doesn't matter). Randomness was obtained by using a linear congruential pseudo-random generator. For both uniform zip trees and zip-zip trees with static \(r_{2}\)-ranks, we draw integers independently for the uniform ranks from the intervals \([1,n^{c}]\), and \([1,\log^{c}n]\), respectively, choosing \(c=3\). Depth Discrepancy.First, we consider the respective depths of the smallest and the largest keys in an original zip tree, compared with the depths of these Figure 4: Experimental results for the depth discrepancy between the smallest and largest keys in the original, uniform (treap), and zip-zip variants of the zip tree. Each data point is scaled down by a factor of \(\log n\) (base 2). keys in a zip-zip tree. See Figure 4. The empirical results for the depths for smallest and largest keys in a zip tree clearly match the theoretic expected values of \(0.5\,\log n\) and \(\log n\), respectively, from Theorem 2. For comparison purposes, we also plot the depths for smallest and largest keys in a uniform zip tree, which is essentially a treap, and in a zip-zip tree (with static \(r_{2}\)-ranks). Observe that, after the number of nodes, \(n\), grows beyond small tree sizes, there is no discernible difference between the depths of the largest and smallest keys, and that this is very close to the theoretical bound of \(0.69\log n\). Most notably, apart from some differences for very small trees, the depths for smallest and largest keys in a zip-zip tree quickly conform to the uniform zip tree results, while using exponentially fewer bits for each node's rank. Average Key Depth and Tree Height.Next, we empirically study the average key depth and average height for the three aforementioned zip tree variants. See Figure 5. Notably, we observe that for all tree sizes, despite using exponentially fewer rank bits per node, the zip-zip tree performs indistinguishably well from the uniform zip tree, equally outperforming the original zip tree variant. The average key depths and average tree heights for all variants appear to approach some constant multiple of \(\log n\). For example, the average depth of a key in an original zip tree, uniform zip tree, and zip-zip tree reached \(1.373\log n\), \(1.267\log n\), \(1.267\log n\), respectively. Interestingly, these values are roughly \(8.5\%\) less than the original zip tree and treap theoretical average key depths of \(1.5\log n\)[27] and \(1.39\log n\)[24], respectively, suggesting that both variants approach their limits at a similar rate. Also, we note that our empirical average height bounds for uniform zip trees and zip-zip trees get as high as \(2.542\log n\). It is an open problem to bound these expectations theoretically, but we show in an appendix that the height of a zip-zip tree is at most \(3.82\log n\) with probability \(1-o(1)\), which clearly beats the \(4.31107\log n\) expected height for a randomly-constructed binary search tree [8, 9, 12, 22]. Rank Comparisons.Next, we experimentally determine the frequency of complete rank ties (collisions) for the uniform and zip-zip variants. See Figure 6 (left). The experiments show how the frequencies of rank collisions decrease polynomially in \(n\) for the uniform zip tree and in \(\log n\) for the second rank of the zip-zip variant. This reflects how these rank values were drawn uniformly from a range of \(n^{c}\) and \(\log^{c}n\), respectively. Specifically, we observe the decrease to be polynomial to \(n^{-2.97}\) and \(\log^{-2.99}n\), matching our chosen value of \(c\) being \(3\). Just-in-Time Zip-zip Trees.Finally, we show how the just-in-time zip-zip tree variant uses an expected constant number of bits per node. See Figure 6 (right). We observe a results of only \(1.133\) bits per node for storing the geometric (\(r_{1}\)) rank differences, and only \(2.033\) bits per node for storing the uniform (\(r_{2}\)) ranks, leading to a remarkable total of \(3.166\) expected bits per node of rank metadata to achieve ideal treap properties. ## 5 Biased Zip-zip Trees In this section, we describe how to make zip-zip trees biased for weighted keys. In this case, we assume each key, \(k\), has an associated weight, \(w_{k}\), such as an access frequency. Without loss of generality, we assume that weights don't change, since we can simulate a weight change by deleting and reinserting a key with its new weight. Our method for modifying zip-zip trees to accommodate weighted keys is simple--when we insert a key, \(k\), with weight, \(w_{k}\), we now assign \(k\) a rank pair, \(r=(r_{1},r_{2})\), such that \(r_{1}\) is \(\lfloor\log w_{k}\rfloor+X_{k}\), where \(X_{k}\) is drawn independently from a geometric distribution with success probability \(1/2\), and \(r_{2}\) is an integer independently chosen uniformly in the range from \(1\) to \(\lceil\log^{c}n\rceil\), where \(c\geq 3\). Thus, the only modification to our zip-zip tree construction to define a biased zip-zip tree is that the \(r_{1}\) component is now a sum of a logarithmic rank and a Figure 5: Experimental results for the average node depth and tree height, comparing the original, uniform (treap-like), and zip-zip variants of the zip tree. Each data point is scaled down by a factor of \(\log n\) (base 2). Figure 6: (Left) The frequency of encountered rank ties per rank comparison for the uniform variant and per element insertion for the zip-zip variant. (Right) The metadata size for the just-in-time implementation of the zip-zip tree. value drawn from a geometric distribution. As with our zip-zip tree definition for unweighted keys, all the update and search operations for biased zip-zip trees are the same as for the original zip trees, except for this modification to the rank, \(r\), for each key (and performing rank comparisons lexicographically). Therefore, assuming polynomial weights, we still can represent each such rank, \(r\), using \(O(\log\log n)\) bits w.h.p. We also have the following theorem, which implies the expected search performance bounds for weighted keys. Theorem 4.1: _The expected depth of a key, \(k\), with weight, \(w_{k}\), in a biased zip-zip tree storing a set, \(\mathcal{K}\), of \(n\) keys is \(O(\log(W/w_{k}))\), where \(W=\sum_{k\in\mathcal{K}}w_{k}\)._ Proof: By construction, a biased zip-zip tree, \(T\), is dual to a biased skip list, \(L\), defined on \(\mathcal{K}\) with the same \(r_{1}\) ranks as for the keys in \(\mathcal{K}\) as assigned during their insertions into \(T\). Bagchi, Buchsbaum, and Goodrich [4] show that the expected depth of a key, \(k\), in \(L\) is \(O(\log(W/w_{k}))\). Therefore, by Theorem 4.1, and the linearity of expectation, the expected depth of \(k\) in \(T\) is \(O(\log(W/w_{k}))\), where, as mentioned above, \(W\) is the sum of the weights of the keys in \(T\) and \(w_{k}\) is the weight of the key, \(k\). Thus, a biased zip-zip tree has similar expected search and update performance as a biased skip list, but with reduced space, since a biased zip-zip tree has exactly \(n\) nodes, whereas, assuming a standard skip-list representation where we use a linked-list node for each instance of a key, \(k\), on a level in the skip list (from level-0 to the highest level where \(k\) appears) a biased skip list has an expected number of nodes equal to \(2n+2\sum_{k\in\mathcal{K}}\log w_{k}\). For example, if there are \(n^{\varepsilon}\) keys with weight \(n^{\varepsilon}\), then such a biased skip list would require \(\Omega(n\log n)\) nodes, whereas a dual biased zip-zip tree would have just \(n\) nodes. Further, due to their simplicity and weight biasing, we can utilize biased zip-zip trees as the biased auxiliary data structures in the link-cut dynamic tree data structure of Sleator and Tarjan [26], thereby providing a simple implementation of link-cut trees.
2305.10571
Cotorsion of anti-cyclotomic Selmer groups on average
For an elliptic curve, we study how many Selmer groups are cotorsion over the anti-cyclotomic $\mathbb{Z}_p$-extension as one varies the prime $p$ or the quadratic imaginary field in question.
Debanjana Kundu, Florian Sprung
2023-05-17T21:15:33Z
http://arxiv.org/abs/2305.10571v1
# Cotorsion of anti-cyclotomic Selmer groups on average ###### Abstract. For an elliptic curve, we study how many Selmer groups are cotorsion over the anti-cyclotomic \(\mathbb{Z}_{p}\)-extension as one varies the prime \(p\) or the quadratic imaginary field in question. Key words and phrases:Iwasawa theory, Selmer groups, elliptic curves, anti-cyclotomic, supersingular primes 2010 Mathematics Subject Classification: 11G05, 11R23 (primary); 11R45 (secondary) ## 1. Introduction Let \(\mathsf{E}\) be an elliptic curve defined over \(\mathbb{Q}\) and let \(\mathbb{Q}(\sqrt{d})\) be an imaginary quadratic field. Much of the arithmetic of \(\mathbb{Q}(\sqrt{d})\)-rational points of \(\mathsf{E}\) is contained in the behaviour of points of \(\mathsf{E}\) up the anti-cyclotomic \(\mathbb{Z}_{p}\)-tower \(\mathbb{Q}(\sqrt{d})_{\mathrm{ac}}\) of \(\mathbb{Q}(\sqrt{d})\) for a prime \(p\) of good reduction. The goal of this paper is to count the proportion of anti-cyclotomic Selmer groups whose behaviour we understand. There are two different cases to be considered. The first is the _indefinite case_, in which the number of bad primes of \(\mathsf{E}\) that are inert in \(\mathbb{Q}(\sqrt{d})\) is even. This condition is also known as the (generalized) Heegner hypothesis, and should allow for many rational points on the elliptic curve. Indeed, M. Bertolini [1] and M. Longo-S. Vigni [13] show in various scenarios that the appropriate anti-cyclotomic Selmer groups have corank \(1\) over the anti-cyclotomic Iwasawa algebra. The second one is the _definite case_ and was studied by R. Pollack and T. Weston in [11], and in their joint work with C. Kim [15]. Here, the number of bad inert primes is odd, preventing the existence of Heegner points. Consequently, there should be few rational points. Their work confirms this and shows that under various hypotheses, the anti-cyclotomic Selmer group is cotorsion. While the hypotheses employed in works concerning the first (indefinite) case are mild from a statistical point of view, the ones employed in the second (definite) case (i.e. [1, 15, 16]) cut down the proportion of provably corank \(0\) Selmer groups, and we are interested in counting what this proportion is. To this end, there are known results in some cases: For a fixed pair \((\mathsf{E}_{/\mathbb{Q}},p)\) where \(p\) is ordinary, [10] gave a lower bound in this definite case for the proportion of imaginary quadratic fields of the form \(\mathbb{Q}(\sqrt{-\ell})\) with \(\ell\) a prime for which the Selmer groups are known to be cotorsion. Our paper generalizes [10] in two ways: 1. We include all imaginary quadratic fields \(\mathbb{Q}(\sqrt{d})\) in the count. 2. We remove the ordinarity hypothesis by including supersingular primes for which \(a_{p}=0\). The main result of our paper measures from a statistical point of view how mild the assumptions in [15] and [16] are. A bit more precisely, it asserts that the proportion of such imaginary quadratic fields is halved (i.e. multiplied by \(\frac{1}{2}\)) for each prime of bad reduction that is _split_ that would violate the key hypothesis of [15]_were it inert_. Instead of overwhelming the reader with precise statements, we give a flavour via an example. (For the technically savvy reader: they are Theorems 4.6 and 4.14, addressing the supersingular and the ordinary cases separately.) The elliptic curve 497a1 with Weierstrass equation \(y^{2}+xy=x^{3}+x^{2}+25x-14\) has bad reduction at \(7\) and \(71\). At the prime \(5\), this elliptic curve attains good supersingular reduction. Our theorem then says that the proportion of cotorsion anti-cyclotomic Selmer groups as one varies the quadratic imaginary field is at least \[\frac{1}{4}\times\frac{5\times 7\times 71}{6\times 8\times 72}=0.1797598\cdots.\] When randomly choosing imaginary quadratic fields, \(p\) should split half of the time and we should land in the definite case half of the time. This accounts for the factor of \(\frac{1}{4}=\frac{1}{2}\times\frac{1}{2}\). The other factor occurs because we count fields with discriminant coprime to \(p\) and conductor of the elliptic curve. If instead of the assumptions in [10], we relied on [1]1, the lower bound would be Footnote 1: [10, Remark 4.2] compares the different assumptions in more detail. \[\frac{1}{8}\times\frac{5\times 7\times 71}{6\times 8\times 72}=0.0898799\cdots,\] i.e. our theorem shows that the work of [10] doubled the desired proportion! We achieve the lower bound by counting the proportion of such imaginary quadratic fields that satisfy several hypotheses imposed in [10], which we call _choired_. _'Cho'_ indicates we are in the definite case, while _'ired'_ indicates a ramification hypothesis. More precisely, _'Choired'_ stands for: **C**onductor shouldn't satisfy **H**eegner hypothesis, so has an **O**dd number of factors. Furthermore, **I**nert primes **R**amify only under **E**xtra **D**ifficulty imposed by (Kim-)Pollack-Weston. Three steps are needed to perform the count. First, in Lemma 4.9 we encode imaginary quadratic fields with the same splitting type at the bad primes of **E** into something that is easier to count. We achieve this by working with the discriminants of the fields modulo the conductor of **E** (denoted by \(N_{\mathsf{E}}\)) and modulo the prime \(p\), showing that each family of such fields corresponds to a proportion of \(\frac{1}{2^{p+1}}\) of the possible residue classes, where \(r\) is the number of bad primes. The second step is to estimate the proportion of imaginary quadratic fields with discriminant coprime to \(pN_{\mathsf{E}}\) (Proposition 4.10). To do this, we sum appropriate estimates due to K. Prachar and P. Humphries over the residue classes in question from the first step. In the final step, we break up the choired fields into a disjoint union of families of imaginary quadratic fields with prescribed splitting type as in step 1. Then using the counting estimates in the previous two steps and a combinatorial count for this disjoint union, we arrive at our estimate. _Organization_: Including this introduction, the article has four sections. Section 2 is preliminary in nature. We introduce the definition of the key objects and also introduce the criterion of Pollack-Weston in this section. Sections 3 and 4 are devoted to studying this criterion on average. Section 3 is a warm-up to the main result: we show that for a fixed a pair \((\mathsf{E}_{/\mathbb{Q}},\mathbb{Q}(\sqrt{d}))\), the Selmer groups are cotorsion for almost all \(p\) of good ordinary reduction. It would be interesting to prove an analogous statement for supersingular primes. See Theorem 3.2 for the precise statement. We then develop the methods to prove our main result in Section 4, following the three steps described above. _Outlook_: If we fix the pair \(\left(p,\mathbb{Q}(\sqrt{d})\right)\) and vary the elliptic curve (ordered by height or conductor) with good reduction at \(p\), it seems to be significantly more difficult to estimate for what proportion of elliptic curves the appropriate Selmer groups are \(\Lambda\)-cotorsion. We will investigate aspects of this question in future projects. ## Acknowledgements We thank Manjul Bhargava, Allysa Lumley, V. Kumar Murty, Artene Siad, Joseph Silverman, and Andrew Sutherland for stimulating and helpful discussions, Ming-Lun Hsieh, Chan-Ho Kim, Robert Pollack, and Tom Weston for answering our questions related to [10, 10], and Noam Elkies for answering a question related to [11, 12]. We also thank Stefano Vigni for clarifying the papers [1, 12] and Melanie Matchett Wood for answering a question related to counting imaginary quadratic fields. DK was supported by a PIMS Postdoctoral Fellowship. FS is supported by an NSF grant and a Simons grant. We thank the anonymous referees for their detailed feedback on an earlier version of our manuscript. ## 2. Preliminaries Let \(p>3\) be a prime and \(K=\mathbb{Q}(\sqrt{d})\) an imaginary quadratic field. Denote by \(K_{\mathrm{ac}}\) the anti-cyclotomic \(\mathbb{Z}_{p}\)-extension of \(K\). For \(n\geq 0\), the _\(n\)-th layer_ is the unique number field \(K_{n}\) such that \(K\subseteq K_{n}\subset K_{\mathrm{ac}}\) and \([K_{n}:K]=p^{n}\). Note that \(K_{n}\) is Galois over \(\mathbb{Q}\) and its Galois group \(\mathrm{Gal}(K_{n}/\mathbb{Q})\) is (isomorphic to) the dihedral group of order \(2p^{n}\). Denote by \(\Gamma\) the Galois group \(\mathrm{Gal}(K_{\mathrm{ac}}/K)\) and pick a topological generator \(\gamma\in\Gamma\). The _Iwasawa algebra_\(\Lambda\) is the completed group algebra \(\mathbb{Z}_{p}\llbracket T\rrbracket:=\varprojlim_{n}\mathbb{Z}_{p}[\Gamma/ \Gamma^{p^{n}}]\). Fix an isomorphism of rings \(\Lambda\simeq\mathbb{Z}_{p}\llbracket T\rrbracket\) by sending \(\gamma-1\) to the formal variable \(T\). Let \(\mathsf{E}_{/\mathbb{Q}}\) be an elliptic curve with good reduction at \(p\) and of conductor \(N_{\mathsf{E}}\) so that the discriminant of \(K=\mathbb{Q}(\sqrt{d})\) is coprime to \(pN_{\mathsf{E}}\). The main objects of study in this paper are the _minimal Selmer groups_ defined in [11, Section 3.1]. For the convenience of the reader, we work with a less technical but equivalent definition of these \(p\)-primary Selmer groups than that of [11]2. Footnote 2: For the equivalence of the definitions in the setting of this paper, we refer the reader to [11, Appendix A]. The two sentence summary of their discussion is that Selmer groups are defined via a collection of local conditions, which the authors call _Selmer structures_. While these Selmer structures may be different at \(K_{n}\), they become the same as one takes the limit and works with \(K_{\mathrm{ac}}\). Choose a finite set of primes \(S\) containing the primes \(v|p\) in \(K\), the archimedean primes, and the primes at which \(\mathsf{E}\) has bad reduction. For any finite extension \(L/K\), write \(S(L)\) to denote the set of primes \(w\) of \(L\) such that \(w\) lies above a prime \(v\in S\). For ease of notation, define \[J_{v}(\mathsf{E}/L)=\prod_{w|v}H^{1}\left(L_{w},\mathsf{E}[p^{\infty}]\right) /\left(\mathsf{E}(L_{w})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\right),\] where the product is over all primes \(w\) of \(L\) lying above \(v\). Following R. Greenberg [10, p. 107] (see also [10, p. 20]), the _\(p\)-primary Selmer group over \(L\)_ is defined as follows \[\mathrm{Sel}_{p^{\infty}}(\mathsf{E}/L):=\ker\left\{H^{1}\left(L,\mathsf{E}[p ^{\infty}]\right)\longrightarrow\bigoplus_{v}J_{v}(\mathsf{E}/L)\right\}.\] It is also possible to define the _\(p\)-primary Selmer group_ by using a smaller Galois group, \[\mathrm{Sel}_{p^{\infty}}(\mathsf{E}/L):=\ker\left\{H^{1}\left(K_{S}/L, \mathsf{E}[p^{\infty}]\right)\longrightarrow\bigoplus_{v\in S}J_{v}(\mathsf{ E}/L)\right\}.\] The fact that these two definitions agree follows from [12, Proposition 6.5 or Corollary 6.6]. For a detailed discussion, we refer the reader to [11, Section 1.7 (Cassels-Poitou-Tate sequence)]. Next, set \(J_{v}(\mathsf{E}/K_{\mathrm{ac}})\) to be the direct limit \[J_{v}(\mathsf{E}/K_{\mathrm{ac}}):=\varprojlim_{L}J_{v}(\mathsf{E}/L),\] where \(L\) ranges over all number fields contained in \(K_{\mathrm{ac}}\). Taking direct limits, the _\(p\)-primary Selmer group over \(K_{\mathrm{ac}}\)_ can be defined as follows \[\mathrm{Sel}_{p^{\infty}}(E/K_{\mathrm{ac}}):=\ker\left\{H^{1}\left(K_{S}/K_{ \mathrm{ac}},\mathsf{E}[p^{\infty}]\right)\longrightarrow\bigoplus_{v\in S}J _{v}(\mathsf{E}/K_{\mathrm{ac}})\right\}.\] As explained earlier, over \(K_{\mathrm{ac}}\) as well, we have an equivalent definition of the Selmer group, \[\mathrm{Sel}_{p^{\infty}}(E/K_{\mathrm{ac}}):=\ker\left\{H^{1}\left(K_{ \mathrm{ac}},\mathsf{E}[p^{\infty}]\right)\longrightarrow\bigoplus_{v}J_{v}( \mathsf{E}/K_{\mathrm{ac}})\right\}.\] Note that the map above is a map of \(\Lambda\)-modules. A \(\Lambda\)-module \(M\) is said to be _cofinitely generated_ (resp. _cotorsion_) if its Pontryagin dual \(M^{\vee}:=\operatorname{Hom}_{\mathbb{Z}_{p}}\big{(}M,\mathbb{Q}_{p}/\mathbb{Z} _{p}\big{)}\) is finitely generated (resp. torsion) as a \(\Lambda\)-module. A standard application of Nakayama's lemma shows that \(\operatorname{Sel}_{p^{\infty}}(E/K_{\mathrm{ac}})\) is cofinitely generated. However, this Selmer group _need not be \(\Lambda\)_-cotorsion. Let \(\mathsf{E}_{/\mathbb{Q}}\) be an elliptic curve with supersingular reduction at \(p>3\). Set \(\widehat{\mathsf{E}}\) to be the formal group of \(\mathsf{E}\) over \(\mathbb{Z}_{p}\). Let \(L\) be a finite extension of \(\mathbb{Q}_{p}\) with valuation ring \(\mathcal{O}_{L}\) and let \(\widehat{\mathsf{E}}(L)\) denote \(\widehat{\mathsf{E}}(\mathfrak{m}_{L})\), where \(\mathfrak{m}_{L}\) is the maximal ideal in \(L\). Let \(v\) be a prime above \(p\) in \(K_{n}\). Following S. Kobayashi [13], we define the plus (and minus) norm groups as follows \[\widehat{\mathsf{E}}^{+}(K_{n,v}) :=\left\{P\in\widehat{\mathsf{E}}(K_{n,v})\mid\operatorname{tr}_{ n/m+1}(P)\in\widehat{\mathsf{E}}(K_{m,v}),\text{ for }0\leq m<n\text{ and }m\text{ even}\right\},\] \[\widehat{\mathsf{E}}^{-}(K_{n,v}) :=\left\{P\in\widehat{\mathsf{E}}(K_{n,v})\mid\operatorname{tr}_ {n/m+1}(P)\in\widehat{\mathsf{E}}(K_{m,v}),\text{ for }0\leq m<n\text{ and }m\text{ odd}\right\},\] where \(\operatorname{tr}_{n/m+1}:\widehat{\mathsf{E}}(K_{n,v})\to\widehat{\mathsf{E} }(K_{m+1,v})\) is the trace map with respect to the formal group law on \(\widehat{\mathsf{E}}\). Define the _plus (resp. minus) Selmer group_ at the \(n\)-th layer of the \(\mathbb{Z}_{p}\)-extension as follows \[0\to\operatorname{Sel}_{p^{\infty}}^{\pm}\left(\mathsf{E}/K_{n}\right)\to \operatorname{Sel}_{p^{\infty}}\left(\mathsf{E}/K_{n}\right)\to\prod_{v\mid p }\frac{H^{1}\left(K_{n,v},\mathsf{E}[p^{\infty}]\right)}{\widehat{\mathsf{E} }^{\pm}(K_{n,v})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}}\] in view of \(\left(\widehat{\mathsf{E}}(L_{v})^{\pm}\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p} \right)\subset\left(\mathsf{E}(L_{v})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\right)\). The plus (resp. minus) Selmer groups over \(K_{\mathrm{ac}}\) are defined by taking direct limits, i.e., \[\operatorname{Sel}_{p^{\infty}}^{\pm}\left(\mathsf{E}/K_{\mathrm{ac}}\right):= \varinjlim_{n}\operatorname{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/K_{n}).\] Let \(M\) be a finitely generated \(\Lambda\)-module and \(M^{\vee}\) denote its Pontryagin dual. The _Structure Theorem for \(\Lambda\)-modules_ (see [16, Theorem 13.12]) asserts that \(M\) is pseudo-isomorphic to a finite direct sum of cyclic \(\Lambda\)-modules, i.e., there is a map of \(\Lambda\)-modules \[M\longrightarrow\Lambda^{r}\oplus\left(\bigoplus_{i=1}^{s}\Lambda/(p^{\mu_{i} })\right)\oplus\left(\bigoplus_{j=1}^{t}\Lambda/(f_{j}(T))\right)\] with finite kernel and cokernel. Here, \(\mu_{i}>0\), and \(f_{j}(T)\) are distinguished polynomials (monic polynomials with non-leading coefficients divisible by \(p\)). The \(\mu\)-invariant of \(M\) is defined as the power of \(p\) in \(f_{M}(T):=p^{\sum_{i}\mu_{i}}\prod_{j}f_{j}(T)\). More precisely, \[\mu_{p}(M):=\begin{cases}\sum_{i=1}^{s}\mu_{i}&\text{ if }s>0\\ 0&\text{ if }s=0.\end{cases}\] _Remark 2.1_.: We are interested in the \(\Lambda\)-modules \(\operatorname{Sel}_{p^{\infty}}(\mathsf{E}/K_{\mathrm{ac}})\) (or \(\operatorname{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/K_{\mathrm{ac}})\)) when they are \(\Lambda\)-cotorsion, see [14, Theorem 1.3]. We write \(\mu(\mathsf{E}/K_{\mathrm{ac}})\) (or \(\mu^{\pm}(\mathsf{E}/K_{\mathrm{ac}})\)) for the \(\mu\)-invariant of the appropriate Selmer group. Keeping the notation introduced earlier, write \(N_{\mathsf{E}}=N_{\mathsf{E}}^{+}N_{\mathsf{E}}^{-}\) where \(N_{\mathsf{E}}^{+}\) is the product of the bad reduction primes that are split in \(K\) and \(N_{\mathsf{E}}^{-}\) is the product of the bad reduction inert primes. The following hypothesis guarantees the cotorsionness of the above remark in a large number of cases. It was introduced in [14, Assumption 1.1 and Remark 1.4] for correction). **Hypothesis choired**.: _For a prime \(p>3\), this hypothesis is the following list of conditions:_ 1. \(\overline{\rho}_{\mathsf{E},p}:\operatorname{Gal}(\overline{\mathbb{Q}}/ \mathbb{Q})\to\operatorname{GL}_{2}(\mathbb{F}_{p})\) _is surjective._ 2. _If_ \(q\) _is a prime with_ \(q|N_{\mathsf{E}}^{-}\) _and_ \(q\equiv\pm 1\pmod{p}\)_, then_ \(\overline{\rho}_{\mathsf{E},p}\) _is ramified at_ \(q\)_._ 3. \(N_{\mathsf{E}}^{-}\) _is square-free and the number of primes dividing_ \(N_{\mathsf{E}}^{-}\) _is odd._ 4. \(a_{p}\not\equiv\pm 1\pmod{p}\) _Remark 2.2_.: In [11] and [12], various subsets of the above hypotheses are named 'CR' - CR stands for 'controlled ramification' [13]. We make a few clarifying remarks. In [12], Condition (4) had been omitted. Also, the results in _loc. cit._ implicitly assumed that \(\overline{\rho}_{\mathsf{E},p}\) is ramified at all primes dividing \(N_{\mathsf{E}}^{+}\). This latter assumption of \(N^{+}\)-minimality can now be removed by a level-lowering trick introduced in [11], see Section 1.2 in _loc. cit._ for details. Finally, we recall the main theorems of interest in [12, 11]. **Theorem 2.3** ([12, Theorem 1.3] and [11, Theorem 2.2]).: _Let \(\mathsf{E}\) be an elliptic curve with good reduction at a prime \(p>3\) and conductor \(N_{\mathsf{E}}\) so that Hypothesis choired holds._ 1. _When_ \(a_{p}=0\)_, let_ \(K\) _be an imaginary quadratic field so that the discriminant of_ \(K\) _is coprime to_ \(pN_{\mathsf{E}}\) _and_ \(p\) _splits in_ \(K\) _into two primes that are totally ramified in_ \(K_{\mathrm{ac}}/K\)_. Then_ \(\mathrm{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/K_{\mathrm{ac}})^{\vee}\) _is_ \(\Lambda\)_-torsion with_ \(\mu^{\pm}(\mathsf{E}/K_{\mathrm{ac}})=0\)_._ 2. _When_ \(a_{p}\not\in\{\pm 1,0\}\)_, let_ \(K\) _be an imaginary quadratic field so that the discriminant of_ \(K\) _is coprime to_ \(pN_{\mathsf{E}}\)_. Then_ \(\mathrm{Sel}_{p^{\infty}}(\mathsf{E}/K_{\mathrm{ac}})^{\vee}\) _is_ \(\Lambda\)_-torsion with_ \(\mu(\mathsf{E}/K_{\mathrm{ac}})=0\)_._ ## 3. Average results: Varying over primes In this short warm-up section, we study what happens when one fixes a pair \((\mathsf{E}/_{\mathbb{Q}},K)\) and varies \(p\). The main theorem shows that for almost all ordinary primes3, the anti-cyclotomic Selmer groups are cotorsion with trivial \(\mu\)-invariant. We do this by proving that Hypothesis choired holds for appropriately large ordinary primes, so that the aforementioned work of Pollack-Weston guarantees the desired \(\Lambda\)-torsionness and also guarantees the vanishing of the \(\mu\)-invariant(s). Note that we require _no_ hypothesis on the Mordell-Weil rank of the elliptic curve over \(\mathbb{Q}\) (or \(K\)). Footnote 3: and hence almost all primes The results of Pollack-Weston require that \(K\) is an imaginary quadratic field of discriminant coprime to \(pN_{\mathsf{E}}\). In the supersingular reduction case, it is also required that \(p\) splits in \(K\) and that the primes above \(p\) are totally ramified in \(K_{\mathrm{ac}}/K\). _Remark 3.1_.: If \(p\) does not divide the class number of \(K\), denoted by \(h_{K}\), then the primes above \(p\) in \(K\) are totally ramified in the anti-cyclotomic \(\mathbb{Z}_{p}\)-extension, see [1, p. 2131 last paragraph]. **Theorem 3.2**.: _Fix a pair \((\mathsf{E}/_{\mathbb{Q}},K)\), where \(K\) is an imaginary quadratic field as above and \(\mathsf{E}/_{\mathbb{Q}}\) is an elliptic curve without complex multiplication so that \(N_{\mathsf{E}}^{-}\) is a product of an odd number of distinct primes. Let \(p>68N_{\mathsf{E}}(1+\log\log N_{\mathsf{E}})^{1/2}\) be a prime of good reduction._ 1. _If_ \(a_{p}=0\)_,_ \(p\) _splits in_ \(K\) _and_ \(p\nmid h_{K}\)_, then the_ \(p\)_-primary signed Selmer groups_ \(\mathrm{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/K_{\mathrm{ac}})\) _are_ \(\Lambda\)_-cotorsion with_ \(\mu^{\pm}(\mathsf{E}/K_{\mathrm{ac}})=0\)_._ 2. _If_ \(a_{p}\not\equiv 0,\pm 1\pmod{p}\) _and_ \(p\) _is unramified in_ \(K\)_, then the_ \(p\)_-primary Selmer group_ \(\mathrm{Sel}_{p^{\infty}}(\mathsf{E}/K_{\mathrm{ac}})\) _is_ \(\Lambda\)_-cotorsion with_ \(\mu(\mathsf{E}/K_{\mathrm{ac}})=0\)_._ Proof.: We check for which primes the four criteria appearing in Hypothesis choired hold. The last two always hold by assumption. The first two are satisfied for \(p>68N_{\mathsf{E}}(1+\log\log N_{\mathsf{E}})^{1/2}\): 1. \(\overline{\rho}_{\mathsf{E},p}\) is surjective: Serre's Open Image Theorem (see for example [10]) asserts that for a fixed elliptic curve \(\mathsf{E}\) without complex multiplication there exists a positive constant \(C_{\mathsf{E}}\) such that for \(p>C_{\mathsf{E}}\) the mod-\(p\) representation is surjective. The bound \(C_{\mathsf{E}}\leq 68N_{\mathsf{E}}(1+\log\log N_{\mathsf{E}})^{1/2}\) is due to A. Kraus [13]. Hence, this condition is satisfied as soon as \(p>68N_{\mathsf{E}}(1+\log\log N_{\mathsf{E}})^{1/2}\). 2. If \(q|N_{\mathsf{E}}^{-}\) and \(q\equiv\pm 1\pmod{p}\), then \(\overline{\rho}_{\mathsf{E},p}\) is ramified at \(q\): For any of the finitely many primes \(q\) dividing \(N_{\mathsf{E}}^{-}\), the condition \(q\equiv\pm 1\pmod{p}\) is never satisfied for \(p\gg 0\), e.g. \(p>68N_{\mathsf{E}}^{-}\). In particular, this condition is vacuous for all good reduction primes \(p>68N(1+\log\log N)^{1/2}\). We thus conclude that Hypothesis choired is satisfied for all sufficiently large \(p\) in either case. To complete the proof we apply [41, Theorems 1.1 and 1.3]. Note that in (1), Remark 3.1 ensures that \(p\) is totally ramified in \(K_{\mathrm{ac}}/K\). This allows using the said results from _loc. cit_. _Remark 3.3_.: 1. A. Cojocaru has obtained bounds similar to that of Kraus in [31, Theorem 2]. Thus, our theorem may be improved by replacing Kraus's bound \(68N_{\mathsf{E}}(1+\log\log N_{\mathsf{E}})^{1/2}\) by Cojocaru's \(\frac{4\sqrt{6}}{3}N_{\mathsf{E}}\!\!\prod_{p|N_{\mathsf{E}}}\left(1+\frac{1}{p }\right)^{\frac{1}{2}}\) whenever it is smaller. 2. It is conjectured that \(C_{\mathsf{E}}=37\), see [23, p. 399]. Consequently, the Selmer group \(\mathrm{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/K_{\mathrm{ac}})\) (resp. \(\mathrm{Sel}_{p^{\infty}}(\mathsf{E}/K_{\mathrm{ac}})\)) would be \(\Lambda\)-cotorsion as soon as \(p>\max\{N_{\mathsf{E}}^{-}+1,37,h_{K}\}\) (resp. \(p>\max\{N_{\mathsf{E}}^{-}+1,37\}\)) and \(p\) splits (resp. is unramified) in \(K\). 3. Elkies proved that given \(\mathsf{E}_{/\mathbb{Q}}\), there are infinitely many primes at which it has supersingular reduction [15, Theorem 1]. By the Chebotarev density theorem, half of the primes split in \(K\). A priori it is not obvious that given a pair \((\mathsf{E}_{/\mathbb{Q}},K)\) there are infinitely many primes of supersingular reduction of \(\mathsf{E}\) which split in \(K\). It is possible to find non-CM elliptic curves over \(\mathbb{Q}\) for which only finitely many supersingular primes split in a given _imaginary quadratic_ field. For example (see [26, p. 1]) the supersingular primes of \(X_{1}(15)\), given by the equation \[Y^{2}+XY+Y=X^{3}+X^{2},\] satisfy the property that \(p\equiv 3\pmod{4}\)4. However, such primes do not split in \(K=\mathbb{Q}(i)\). In [26, Section 4.1], it is explained that in any real (resp. imaginary) quadratic field \(K\), there is a bias _in favour of_ (resp. _against_) the occurrence of supersingular primes that split in the field. However, the averaging results in _loc. cit._ suggest that if \(\mathsf{E}\) is a non-CM elliptic curve picked _at random_ then there is a positive proportion of supersingular primes which split in \(K\). Footnote 4: Note that not all primes of the form \(p\equiv 3\pmod{4}\) are supersingular. ## 4. Average results: Varying the imaginary quadratic field Fix an elliptic curve \(\mathsf{E}_{/\mathbb{Q}}\) without complex multiplication of square-free conductor \(N_{\mathsf{E}}\) and a prime \(p>3\) of good reduction with \(a_{p}\not\equiv\pm 1\pmod{p}\). In this section, we count for what proportion of imaginary quadratic fields the associated Selmer groups are \(\Lambda\)-cotorsion with \(\mu\)-invariant equal to \(0\). We prove the theorem separately when \(p\) is a prime of supersingular or ordinary reduction. _Remark 4.1_.: The methods in this section have been written so that they can be extended to the case of weight \(2\) modular forms. ### The supersingular case In this section, we assume \(a_{p}=0\) (which is equivalent to \(p\) being supersingular, since \(p>3\).) Varying over _imaginary quadratic_ fields \(\mathbb{Q}(\sqrt{d})\) we estimate how often \(\mathrm{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/\mathbb{Q}(\sqrt{d})_{\mathrm{ac}})\) is \(\Lambda\)-cotorsion with \(\mu^{\pm}(\mathsf{E}/\mathbb{Q}(\sqrt{d})_{\mathrm{ac}})=0\). To this end, we estimate for what proportion of _imaginary quadratic_ fields the following properties hold: 1. \(\gcd\left(pN_{\mathsf{E}},\Bigl{|}D_{\mathbb{Q}(\sqrt{d})}\Bigr{|}\right)=1\), 2. Hypothesis choired is satisfied by the triple \((\mathsf{E}_{/\mathbb{Q}},\mathbb{Q}(\sqrt{d})_{\mathrm{ac}},p),\)_and_ 3. \(p\) splits in \(\mathbb{Q}(\sqrt{d})\). 4. \(p\) does not divide the class number of \(\mathbb{Q}(\sqrt{d})\). As for the last property, the _Cohen-Lenstra heuristics_ predict that among all imaginary quadratic fields, the proportion for which \(p\)_divides_ the class number is [17], [16, Section 9.I] \[c_{p}=\frac{6}{\pi^{2}}\left(1-\prod_{j=1}^{\infty}\left(1-\frac{1}{p^{j}} \right)\right). \tag{4.1}\] We remind the readers that this proportion is expected to be _positive_. A result of K. Horie and Y. Onishi (see [10]) establishes that there are infinitely many imaginary quadratic number fields such that \(p\)_does not_ divide the class number. In [10], W. Kohnen and K. Ono have obtained lower bound asymptotic but we are still quite far from establishing the Cohen-Lenstra heuristics. **Definition 4.2**.: _Let \(\mathcal{S}\) be a subset of imaginary quadratic fields. Define the density of \(\mathcal{S}\) as_ \[\delta(\mathcal{S}):=\lim_{x\to\infty}\frac{\#\left\{\mathbb{Q}(\sqrt{d}): \left|D_{\mathbb{Q}(\sqrt{d})}\right|<x\text{ and }\mathbb{Q}(\sqrt{d})\in \mathcal{S}\right\}}{\#\left\{\mathbb{Q}(\sqrt{d}):d<0,\,\left|D_{\mathbb{Q}( \sqrt{d})}\right|<x\right\}}.\] **Definition 4.3**.: _Given a pair \((\mathsf{E}_{/\mathbb{Q}},p)\) of an elliptic curve \(\mathsf{E}\) of conductor \(N_{\mathsf{E}}\) and a prime \(p\) of good reduction, define \(Q^{-}(\text{choired},p+)\) as the following set_ \[\left\{\mathbb{Q}(\sqrt{d}):d<0,\,\,\gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})} \right|,pN_{\mathsf{E}}\right)=1,\,\,p\text{ splits in }\mathbb{Q}(\sqrt{d}),\,\,\text{Hypothesis desired holds for }(\mathsf{E}_{/\mathbb{Q}},\mathbb{Q}(\sqrt{d}),p)\right\}.\] **Definition 4.4**.: _Given an elliptic curve \(\mathsf{E}_{/\mathbb{Q}}\) of conductor \(N_{\mathsf{E}}\) and a prime \(p\), define \(k\) to be the number of bad primes \(q|N_{\mathsf{E}}\) that satisfy both of the following:_ 1. \(q\equiv\pm 1\pmod{p}\)_,_ 2. \(\bar{\rho}_{\mathsf{E},p}\) _is unramified at_ \(q\)_._ _Remark 4.5_.: In words, \(k\) counts the number of bad primes \(q\) that would defy the key assumption of [10] if \(q\) were inert in the quadratic imaginary field to be chosen. Of course, since we are working under their assumption, the primes in Definition 4.4 must split - they are 'kounterexamples,' i.e. fake counterexamples to the key assumption of [10], hence the choice of the letter \(k\). **Theorem 4.6**.: _Fix a pair \((\mathsf{E}_{/\mathbb{Q}},p)\) so that_ 1. \(\mathsf{E}_{/\mathbb{Q}}\) _is an elliptic curve with square-free conductor_ \(N_{\mathsf{E}}=\prod_{i=1}^{r}q_{i}\)_, and_ 2. \(p>3\) _is a prime at which_ \(\mathsf{E}\) _has good supersingular reduction,_ \(\overline{\rho}_{\mathsf{E},p}\) _is surjective, and_ \(k<r\)_._ _Then_ \[\delta\left(Q^{-}(\text{choired},\,p+)\right)=\frac{pN_{\mathsf{E}}}{2^{k+2}(p +1)\prod_{q_{i}|N_{\mathsf{E}}}(q_{i}+1)}.\] _Let \(c_{p}^{*}\) denote the proportion of imaginary quadratic fields in \(Q^{-}(\text{choired},\,p+)\) with \(p\) dividing the class number. The proportion of imaginary quadratic fields with \(\gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})}\right|,pN_{\mathsf{E}}\right)=1\), the prime \(p\) splits in \(\mathbb{Q}(\sqrt{d})\), and \(\mathrm{Sel}_{p^{\infty}}^{\pm}(\mathsf{E}/\mathbb{Q}(\sqrt{d})_{\mathrm{ac}})\) is \(\Lambda\)-cotorsion with \(\mu^{\pm}\)-invariant equal to zero is at least_ \[\frac{pN_{\mathsf{E}}}{2^{k+2}(p+1)\prod_{q_{i}|N_{\mathsf{E}}}(q_{i}+1)}\cdot (1-c_{p}^{*}).\] First, we introduce some notation. **Definition 4.7**.: _Define \(\Pi_{p}(N_{\mathsf{E}})\) to be the set of prime divisors of \(N_{\mathsf{E}}\) together with \(p\), i.e.,_ \[\Pi_{p}(N_{\mathsf{E}})=\{p,q_{1},\ldots,q_{r}\}.\] _Choose the indices so that \(q_{i}\) with \(i\leq k\) are the primes that satisfy the conditions of Definition 4.4. This choice allows keeping track of primes that potentially violate condition (2) in Hypothesis choired._ **Definition 4.8**.: _For any partition \(\Pi=\Pi^{-}\sqcup\Pi^{+}\) of \(\Pi_{p}(N_{\mathsf{E}})\) into two disjoint parts, define_ \[Q^{-}(\Pi):=\left\{\mathbb{Q}(\sqrt{d}):d<0,\,\,\gcd\left(\left|D_{\mathbb{Q}( \sqrt{d})}\right|,pN_{\mathsf{E}}\right)=1,\text{ primes in }\Pi^{-}\text{ are inert in }\mathbb{Q}(\sqrt{d}),\right.\\ \left.\text{ and primes in }\Pi^{+}\text{ split in }\mathbb{Q}(\sqrt{d})\right\}.\] _Denote by \(\delta_{\Pi}\) the density of \(Q^{-}(\Pi)\), i.e. \(\delta_{\Pi}:=\delta\left(Q^{-}\left(\Pi\right)\right)\)._ **Lemma 4.9**.: _Pick a partition \(\Pi=\Pi^{-}\sqcup\Pi^{+}\) of \(\Pi_{p}(N_{\mathsf{E}})\) into two disjoint parts._ 1. _When_ \(2\nmid N_{\mathsf{E}}\)_, there exists a subset_ \(\mathfrak{r}_{\Pi}\) _of_ \((\mathbb{Z}/pN_{\mathsf{E}}\mathbb{Z})^{*}\) _of size_ \(\frac{\varphi(pN_{\mathsf{E}})}{2^{r+1}}\) _so that_ \[\mathbb{Q}(\sqrt{d})\in Q^{-}(\Pi)\iff D_{\mathbb{Q}(\sqrt{d})}\mod pN_{ \mathsf{E}}\in\mathfrak{r}_{\Pi}.\] 2. _When_ \(2\mid N_{\mathsf{E}}\)_, there exists a subset_ \(\mathfrak{r}_{\Pi}\) _of_ \((\mathbb{Z}/4pN_{\mathsf{E}}\mathbb{Z})^{*}\) _of size_ \(\frac{\varphi(pN_{\mathsf{E}})}{2^{r}}\) _so that_ \[\mathbb{Q}(\sqrt{d})\in Q^{-}(\Pi)\iff D_{\mathbb{Q}(\sqrt{d})}\mod 4pN_{ \mathsf{E}}\in\mathfrak{r}_{\Pi}.\] _In either case, \(Q^{-}(\Pi)\) corresponds to \(\frac{1}{2^{r+1}}\) of the possible residue classes for discriminants of quadratic imaginary fields coprime to \(pN_{\mathsf{E}}\)._ Proof.: From [13, Proposition 5.16], we have that \(\mathbb{Q}(\sqrt{d})\in Q^{-}(\Pi)\) if and only if for all \(q\in\Pi^{-}\), the Kronecker symbol \(\left(\frac{D_{\mathbb{Q}(\sqrt{d})}}{q}\right)=-1\) and for all \(q\in\Pi^{+}\), the Kronecker symbol \(\left(\frac{D_{\mathbb{Q}(\sqrt{d})}}{q}\right)=+1\). We first handle the case \(2\nmid N_{\mathsf{E}}\). For \(q\in\Pi_{p}(N_{\mathsf{E}})\), denote by \(\mathfrak{r}_{q}^{+}\subset(\mathbb{Z}/q\mathbb{Z})^{*}\) the set of quadratic residues, and by \(\mathfrak{r}_{q}^{-}\) the set of quadratic non-residues. Note that for an odd prime \(q\) we have that \(\#\mathfrak{r}_{q}^{+}=\#\mathfrak{r}_{q}^{-}=\frac{q-1}{2}\), since exactly half the elements of \((\mathbb{Z}/q\mathbb{Z})^{*}\) are squares. Define \[\mathfrak{r}_{\Pi}:=\prod_{q\in\Pi^{-}}\mathfrak{r}_{q}^{-}\times\prod_{q\in \Pi^{+}}\mathfrak{r}_{q}^{+}\subset\prod_{q\in\Pi_{p}(N_{\mathsf{E}})}( \mathbb{Z}/q\mathbb{Z})^{*}\cong(\mathbb{Z}/pN_{\mathsf{E}}\mathbb{Z})^{*}.\] Thus, \(\mathbb{Q}(\sqrt{d})\in Q^{-}(\Pi)\) if and only if \(D_{\mathbb{Q}(\sqrt{d})}\mod pN_{\mathsf{E}}\in\mathfrak{r}_{\Pi}\), as claimed. It also follows that \[\#\mathfrak{r}_{\Pi}=\frac{(p-1)}{2}\prod_{i=1}^{r}\frac{q_{i}-1}{2}=\frac{ \varphi(pN_{\mathsf{E}})}{2^{r+1}}.\] To handle the case \(2\mid N_{\mathsf{E}}\), set \[\mathfrak{r}_{2}^{+}:=\left\{1\right\}\in\left(\mathbb{Z}/8\mathbb{Z}\right)^{*}\] Note that by definition of the Kronecker symbol, \(D_{\mathbb{Q}(\sqrt{d})}\mod 8\in\left\{1\right\}\subset\left(\mathbb{Z}/8 \mathbb{Z}\right)^{*}\) if and only if \(\left(\frac{D_{\mathbb{Q}(\sqrt{d})}}{2}\right)=1\). We can then proceed analogously to the case when \(N_{\mathsf{E}}\) was odd, working with \[\mathfrak{r}_{\Pi}:=\prod_{q\in\Pi^{-}}\mathfrak{r}_{q}^{-}\times\prod_{q\in \Pi^{+}}\mathfrak{r}_{q}^{+}\subset\left(\mathbb{Z}/8\mathbb{Z}\right)^{*} \times\prod_{\text{odd }q\in\Pi_{p}(N_{\mathsf{E}})}(\mathbb{Z}/q\mathbb{Z})^{*}\cong( \mathbb{Z}/4pN_{\mathsf{E}}\mathbb{Z})^{*}.\] For the last assertion, note that when \(N_{\mathsf{E}}\) is odd, \(D_{\mathbb{Q}(\sqrt{d})}\) can reduce to any element in \((\mathbb{Z}/pN_{\mathsf{E}}\mathbb{Z})^{*}\), while for \(N_{\mathsf{E}}\) even, the reduction in the \((\mathbb{Z}/8\mathbb{Z})^{*}\) part lies in \(\left\{1,5\right\}\) as \(2\) does not ramify by assumption. **Proposition 4.10**.: _Let \(M\) be any square-free integer. Define_ \[Q^{-}\left(x,D\perp M\right):=\left\{\mathbb{Q}(\sqrt{d})\text{ imaginary}:\,\left|D_{\mathbb{Q}(\sqrt{d})}\right|<x\text{ and }\gcd\left(D_{\mathbb{Q}(\sqrt{d})},M\right)=1\right\}.\] _Then asymptotically,_ \[\lim_{x\to\infty}\#Q^{-}\left(x,D\perp M\right)\sim\frac{1}{2}\frac{x}{\zeta( 2)}\frac{M}{\prod_{q\mid M}(q+1)}\] Proof.: A result of Prachar [14, formula 1] (see [10] for two proofs by Humphries) says that \[\lim_{x\to\infty}\#\left\{n\text{ square-free}:\,\,0<n<x,\,\,n\equiv a\pmod{b}, \,\gcd(b,a)=1\right\}\sim\frac{x}{\zeta(2)}\frac{1}{b}\prod_{q\mid b}\left(1- \frac{1}{q^{2}}\right)^{-1},\] where \(a\) and \(b\) are integers, and the \(q\)'s are primes. We first handle the case \(2\nmid M\). Put \(b=4M\). Then there are \(\prod_{q\mid M}(q-1)\) congruence classes \(a\pmod{b}\) with \(\gcd(a,4M)=1\) and \(a\equiv 1\mod 4\), over which we sum the above Prachar-Humphries estimate. Therefore, (*) \[\begin{split}&\lim_{x\to\infty}\#\left\{n\text{ square-free}:\ \ n\equiv 1\pmod{4},\ 0<n<x,\ \gcd\left(n,M\right)=1\right\}\\ &\sim\prod_{q\mid M}(q-1)\times\frac{x}{\zeta(2)}\frac{1}{4M} \prod_{q\mid 4M}\left(1-\frac{1}{q^{2}}\right)^{-1}\\ &=\prod_{q\mid M}(q-1)\times\frac{x}{\zeta(2)}\frac{1}{4M}\frac{ 4}{3}\prod_{q\mid M}\frac{q^{2}}{q^{2}-1}\\ &=\frac{x}{\zeta(2)}\frac{1}{3}\frac{M}{\prod_{q\mid M}(q+1)}. \end{split}\] The same estimate works for the congruence class \(n\equiv 3\pmod{4}\). To handle the congruence class \(n\equiv 2\pmod{4}\), note that \(n\equiv 2\pmod{4}\) with \((n,M)=1\) is square-free if and only if \(\frac{n}{2}\) is. But \(\frac{n}{2}\) can be \(\equiv 1\pmod{4}\) or \(\equiv 3\pmod{4}\), so by using the above argument, \[\begin{split}&\lim_{x\to\infty}\#\left\{n\text{ square-free}:\ \ n\equiv 2\pmod{4},\ 0<n<x,\ \gcd\left(n,M\right)=1\right\}\\ &=\lim_{x\to\infty}\#\left\{n\text{ square-free}:\ \ n\equiv 1\text{ or }3\pmod{4},\ 0<n<\frac{x}{2},\ \gcd\left(n,M\right)=1\right\}\\ &\sim 2\times\frac{x/2}{\zeta(2)}\frac{1}{3}\frac{M}{\prod_{q\mid M}(q+1)}. \end{split}\] Since we are assuming that \(2\nmid M\), note that \(\gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})}\right|,M\right)=1\iff\gcd(d|\,,M)=1\). Hence, we are interested in the following estimate: (**) \[\begin{split}&\lim_{x\to\infty}\#\left\{d\text{ square-free}:\ d<0,\,\left|D_{\mathbb{Q}(\sqrt{d})}\right|<x,\ \gcd\left(\left|d\right|,M\right)=1\right\}\\ &=\lim_{x\to\infty}\#\left\{d\text{ square-free}:\ d<0,\,\left|d\right|<x,\ \gcd\left(\left|d\right|,M\right)=1,\ d\equiv 1\pmod{4}\right\}\cup\\ &\quad\lim_{x\to\infty}\#\left\{d\text{ square-free}:\ d<0,\,\left|d\right|<\frac{x}{4},\ \gcd\left(\left|d\right|,M\right)=1,\ d\equiv 2,3\pmod{4} \right\}\\ &\sim\left(\frac{1}{3}\frac{x}{\zeta(2)}+\frac{1}{3}\frac{x/4}{ \zeta(2)}+\frac{1}{3}\frac{x/4}{\zeta(2)}\right)\left(\frac{M}{\prod_{q\mid M }(q+1)}\right)=\frac{1}{2}\frac{x}{\zeta(2)}\frac{M}{\prod_{q\mid M}(q+1)}. \end{split}\] The case \(2|M\) is a bit easier. The simplification appears when decomposing (**) into \(\pmod{4}\) congruence classes. Since \(2\mid M\), note that \(\gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})}\right|,M\right)=1\) implies \(d\equiv 1\pmod{4}\). Thus, \[\begin{split}&\lim_{x\to\infty}\#\left\{\mathbb{Q}(\sqrt{d}):\ d<0,\,\left|D_{\mathbb{Q}(\sqrt{d})}\right|<x,\ \gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})}\right|,M\right)=1\right\}\\ =&\lim_{x\to\infty}\#\left\{d\text{ square-free}:\ d\equiv 1\pmod{4},\ -x<d<0,\ \gcd\left(\left|d\right|,M\right)=1\right\}\\ \sim&\frac{1}{2}\frac{x}{\zeta(2)}\left(\frac{M}{ \prod_{q\mid M}(q+1)}\right).\end{split}\] The reason for the different result in the Prachar-Humphries estimate (a factor of \(\frac{1}{2}\) instead of \(\frac{1}{3}\)) is the following. The integer \(b=2M\) is a multiple of \(4\), so there are again \(\prod_{q|M}(q-1)\) congruence classes \(a\pmod{b}\) with \((a,M)=1\) and \(a\equiv 1\pmod{4}\). Summing the Prachar-Humphries estimate over these classes, 1. the term \(\frac{1}{4M}\) in the equation (*) is replaced by \(\frac{1}{2M}\), and 2. the term \(\frac{4}{3}\) in the line below disappears, since in the product \(\prod_{q|M}\frac{q^{2}}{q^{2}-1}\), one of the primes \(q=2\) by assumption. This accounts for a total scaling by a factor of \(2\times\frac{3}{4}\). The same argument also applies when \(a\equiv 1\pmod{4}\) is replaced by \(a\equiv 3\pmod{4}\). _Remark 4.11_.: 1. It is well known that (see for example, [1, Corollary 1.3]5) Footnote 5: This count is slightly different from the one in [10] where all fields are weighted by \(1/\#\operatorname{Aut}\). \[\lim_{x\to\infty}\#\left\{\mathbb{Q}(\sqrt{d})\text{ imaginary quadratic}:\,\left|D_{\mathbb{Q}(\sqrt{d})}\right|<x\right\}\sim\frac{1}{2} \frac{x}{\zeta(2)}.\] In _loc. cit._, one finds an extra factor of \(\frac{1}{2^{r_{2}}}\), but \(r_{2}=0\) since \(\mathbb{Q}\) has signature (1,0). 2. Proposition 4.10 says that for each prime \(q\) with respect to which the coprimality condition is imposed on the discriminant, the proportion of imaginary quadratic fields reduces by a factor of \(\frac{q}{q+1}\). We are now in a position to prove Theorem 4.6. Proof of Theorem 4.6.: For Hypothesis chaired to be satisfied, we need * \(N_{\mathsf{E}}^{-}\) is a product of an odd number of primes _and_ * none of the primes \(q_{i}\) for \(i\leq k\) divide \(N_{\mathsf{E}}^{-}\). Let \(\mathcal{N}\) be the collection of all candidate subsets of the prime divisors of \(N_{\mathsf{E}}^{-}\). In other words, let \(\mathcal{N}\) be the collection of all subsets \(\Pi^{-}\subseteq\{q_{k+1},\dots,q_{r}\}\) for which \(\#\Pi^{-}\) is odd. Each such \(\Pi^{-}\) determines a partition \(\Pi=\Pi^{-}\sqcup\Pi^{+}\) of \(\Pi_{p}(N_{\mathsf{E}})\) so that \(p\in\Pi^{+}\). Then \[Q^{-}(\text{{choired, }}p+)=\bigsqcup_{\Pi\text{ so that }\Pi^{-}\in\mathcal{N}}Q^{-}(\Pi).\] Lemma 4.9 asserts that \(Q^{-}(\Pi)\) corresponds to \(\frac{1}{2^{r+1}}\) of the possible residue classes of discriminants coprime to \(pN_{\mathsf{E}}\). Setting \(M=pN_{\mathsf{E}}\), Proposition 4.10 and Remark 4.11(1) tell us that the proportion of those imaginary quadratic fields with discriminant coprime to \(pN_{\mathsf{E}}\) among all quadratic imaginary fields is given by \[\mathfrak{d}:=\frac{pN_{\mathsf{E}}}{(p+1)\prod_{q_{i}|N_{\mathsf{E}}}(q_{i}+1 )},\] so that \[\delta_{\Pi}=\frac{1}{2^{r+1}}\mathfrak{d}=\frac{pN_{\mathsf{E}}}{2^{r+1}(p+1 )\prod_{q_{i}|N_{\mathsf{E}}}(q_{i}+1)},\] Since \(\#\mathcal{N}=2^{r-k-1}\)[12, Exercise 1.1.13], \[\delta\left(Q^{-}(\text{{choired, }}p+)\right)=\#\mathcal{N}\cdot\delta_{\Pi}.\] This proves our first assertion. For the final assertion regarding the \(\Lambda\)-cotorsionness and triviality of the \(\mu\)-invariants, we need the two primes above \(p\) to be _totally ramified_ in \(\mathbb{Q}(\sqrt{d})_{\mathrm{ac}}/\mathbb{Q}(\sqrt{d})\). A sufficient condition for this is the (additional) hypothesis that \(p\) does not divide the class number of \(\mathbb{Q}(\sqrt{d})\), which gives the factor of \(1-c_{p}^{*}\) in the final assertion. _Remark 4.12_.: The relationship between \(c_{p}\) and \(c_{p}^{*}\) is not immediate to the authors, e.g. should they be equal? We think it would be worthwhile to investigate this question in greater depth. ### The ordinary case We now prove an analogue of Theorem 4.6 when \((\mathsf{E}_{/\mathbb{Q}},p)\) is a fixed pair as before but \(p\) is a prime of good _ordinary_ reduction of \(\mathsf{E}\). We begin with the following definition: **Definition 4.13**.: _Given a pair \((\mathsf{E}_{/\mathbb{Q}},p)\) of an elliptic curve \(\mathsf{E}\) of square-free conductor \(N_{\mathsf{E}}\) and a prime \(p\) of good ordinary reduction, define \(Q^{-}(\text{choired},p\pm)\) - or less precisely \(Q^{-}(\text{choired})\) - as the following set_ \[\left\{\mathbb{Q}(\sqrt{d}):d<0,\ \gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})} \right|,pN_{\mathsf{E}}\right)=1,\ \text{Hypothesis choired holds for }(\mathsf{E}_{/\mathbb{Q}},\mathbb{Q}(\sqrt{d}),p), \right\}.\] Unlike in the supersingular case, here we count imaginary quadratic fields where \(p\) is either inert or \(p\) splits. This is because, as pointed out in Section 2.4, Pollack-Weston have shown that for any triple \((\mathsf{E}_{/\mathbb{Q}},\mathbb{Q}(\sqrt{d}),p)\) with \(\mathbb{Q}(\sqrt{d})\in Q^{-}(\text{choired},p\pm)\), the associated Selmer group is \(\Lambda\)-cotorsion with \(\mu(\mathsf{E}/\mathbb{Q}(\sqrt{d})_{\text{ac}})=0\). In particular, there are no restrictions imposed on the splitting of \(p\) in the imaginary quadratic field or on the class number of the said field. We write \[Q^{-}(\text{choired},p\pm)=Q^{-}(\text{choired},p+)\sqcup Q^{-}(\text{choired},p-). \tag{4.2}\] Here, \(p+\) (resp. \(p-\)) indicates quadratic fields in \(Q^{-}(\text{choired},p\pm)\) with the additional property that \(p\) splits (resp. remains inert), cf. Definition 4.3. **Theorem 4.14**.: _Fix a pair \((\mathsf{E}_{/\mathbb{Q}},p)\) so that_ 1. \(\mathsf{E}_{/\mathbb{Q}}\) _is an elliptic curve with square-free conductor_ \(N_{\mathsf{E}}=\prod_{i=1}^{r}q_{i}\)_, and_ 2. \(p>3\) _is a prime at which_ \(\mathsf{E}\) _has good ordinary reduction,_ \(a_{p}\not\equiv 1\ (\mathrm{mod}\ p)\)_,_ \(\overline{\rho}_{\mathsf{E},p}\) _is surjective, and_ \(k<r\)_._ _Then the proportion of imaginary quadratic fields with \(\gcd\left(\left|D_{\mathbb{Q}(\sqrt{d})}\right|,pN_{\mathsf{E}}\right)=1\) and \(\operatorname{Sel}_{p^{\infty}}(\mathsf{E}/\mathbb{Q}(\sqrt{d})_{\text{ac}})\)\(\Lambda\)-cotorsion with \(\mu\)-invariant equal to zero is at least_ \[\delta\left(Q^{-}(\text{choired},p\pm)\right)=\frac{pN_{\mathsf{E}}}{2^{k+1}( p+1)\prod_{q_{i}|N_{\mathsf{E}}}(q_{i}+1)}.\] Proof.: For Hypothesis choired to be satisfied, we need * \(N_{\mathsf{E}}^{-}\) is a product of an odd number of primes _and_ * none of the primes \(q_{i}\) for \(i\leq k\) divide \(N_{\mathsf{E}}^{-}\). * \(a_{p}\not\equiv\pm 1\mod p\). However, the third condition involving the Fourier coefficients does not depend on the imaginary quadratic field \(\mathbb{Q}(\sqrt{d})\). In other words, this condition does not alter the count of \(Q^{-}(\text{choired},p\pm)\). Hence, for each subset the calculations go through verbatim from the previous section. Let \(\mathcal{N}\) be the collection of all \(\Pi^{-}\subseteq\{q_{k+1},\ldots,q_{r}\}\) for which \(\#\Pi^{-}\) is odd. Analogously to the supersingular case, to each such \(\Pi^{-}\) we associate a partition of \(\Pi\) of \(\Pi_{p}(N_{\mathsf{E}})=\{p,q_{1},\ldots,q_{r}\}\) by \[\Pi=\begin{cases}\Pi^{-}\sqcup\Pi^{+},&\text{letting $p\in\Pi^{+}$ if $p$ splits in $\mathbb{Q}(\sqrt{d})$}\\ \left(\Pi^{-}\sqcup\{p\}\right)\sqcup\Pi^{+},&\text{letting $p\not\in\Pi^{+}$ if $p$ is inert in $\mathbb{Q}(\sqrt{d})$}.\end{cases}\] The proof then proceeds analogously to the proof of Theorem 4.6, noting that adding \(p\) to the collection of prescribed inert primes doesn't change the argument. Thus, \[\delta\left(Q^{-}(\text{choired},\,p+)\right)=\delta\left(Q^{-}(\text{choired },\,p-)\right)=\frac{pN_{\mathsf{E}}}{2^{k+2}(p+1)\prod_{q_{i}|N_{\mathsf{E}}}(q _{i}+1)}.\] This completes the proof of the theorem.
2307.09659
Dynamics Around an Asteroid Modeled as a Mass Tripole
The orbital dynamics of a spacecraft orbiting around irregular small celestial bodies is a challenging problem. Difficulties to model the gravity field of these bodies arise from the poor knowledge of the exact shape as observed from the Earth. In order to understand the complex dynamical environment in the vicinity of irregular asteroids, several studies have been conducted using simplified models. In this work, we investigate the qualitative dynamics in the vicinity of an asteroid with an arched shape using a tripole model based on the existence of three mass points linked to each other by rods with given lengths and negligible masses. We applied our results to some real systems, namely, asteroids 8567, 243 Ida and 433 Eros and also Phobos, one of the natural satellites of Mars.
L. B. T. Santos, L. O. Marchi, P. A. Sousa-Silva, D. M. Sanchez, S. Aljbaae, A. F. B. A. Prado
2023-07-18T22:00:28Z
http://arxiv.org/abs/2307.09659v1
# Dynamics around an Asteroid Modeled as a Mass Tripole ###### Abstract The orbital dynamics of a spacecraft orbiting around irregular small celestial bodies is a challenging problem. Difficulties to model the gravity field of these bodies arise from the poor knowledge of the exact shape as observed from the Earth. In order to understand the complex dynamical environment in the vicinity of irregular asteroids, several studies have been conducted using simplified models. In this work, we investigate the qualitative dynamics in the vicinity of an asteroid with an arched shape using a tripole model based on the existence of three mass points linked to each other by rods with given lengths and negligible masses. We applied our results to some real systems, namely, asteroids 8567, 243 Ida and 433 Eros and also Phobos, one of the natural satellites of Mars. Methods: numerical -- Minor planets -- Asteroids: general space vehicles ## 0.1 Introduction Small-body explorations, such as asteroids and comets, have become an essential subject in deep space exploration. They involve multiple disciplines, such as science and control engineering, aerospace science and technology, celestial mechanics, astronomy among others. The combination of non-spherical gravitational attraction together with the rapid rotation of the asteroids around their axis govern the dynamics of the spacecraft near its surface. Thus, the analysis of the orbits of a spacecraft around these bodies is one of the current challenges in astrodynamics. Developing mathematical models to represent the gravitational field around irregular bodies is an important research topic in orbital dynamics. Usually, spherical harmonics expansion is used to model the Earth and other Planets, as these more massive celestial bodies (when compared to asteroids) have a shape that resembles a sphere (Elipe & Riaguas, 2003). On the other hand, when the body does not resemble a sphere, this expansion is no longer convenient and, in some cases, convergence cannot be guaranteed (Elipe & Riaguas, 2003). Generally, when the field point is located within the circumscribing sphere, the series diverge (Lan et al., 2017; Elipe & Riaguas, 2003). Furthermore, the expansion of low-order Legendre coefficients often does not provide a good approximation for the motion of a spacecraft due to the fact that higher order terms can generate divergence after several iterations (Riaguas et al., 1999; Jiang & Baoyin, 2018). The shape of a celestial body, its rotation period, and other physical characteristics can be obtained by light curve and radar analysis. From these observations, it is possible to use the solid polyhedron method to determine the dynamics around irregular bodies, including gravitational fields, stationary state solutions (equilibrium points, periodic orbits, quasiperiodic orbits, and chaotic motion), stability, bifurcation, etc (Werner, 1994; Scheeres et al., 1996; Jiang & Baoyin, 2018; Chanut et al., 2015; Jiang et al., 2014; Yu & Baoyin, 2012; Tsoulis & Petrovic, 2001). However, this approach requires large computational effort depending on the quantity of polyhedral shapes. This problem was partially solved in Chanut et al. (2015), where the authors considerably reduced the computation time (\(\sim 30\) times) applying the Mascon gravity framework, as presented in Geissler et al. (1996), using a shaped polyhedral source to model the external gravitational field of a small celestial body. For more details about this approach, we also refer the readers to Venditti. (2013) and Aljbaae et al. (2017). The gravitational potential can be obtained with high accuracy using the polyhedral model, but from this model it is difficult to understand the effect of certain parameters (mass ratio (\(\mu\)), shape, among others) on the dynamics. This happens because, in the polyhedron model, the parameters mix and produce a mixed influence on the gravitational field of irregular bodies. Therefore, to study the effect of a single parameter, it is often necessary to model irregular bodies using simplified models. By using simplified models, it is possible to perform semi analytical studies to understand which parameters affect stability, appearance of equilibrium points, bifurcations, etc. Thus, simplified models help to understand the dynamics around irregular bodies, and allow us to design orbits (Wang et al., 2017; Zeng & Liu, 2017), feedback control schemes (Yang et al., 2017), as well as the permissible hovering regions (Zeng et al., 2016). An effective way to analyze the surface of an asteroid is to body-fixed hovering in a region close to the asteroid, where the spacecraft maintains its position constant with respect to the asteroid (Wen et al., 2020). A great location for using the body-fixed hovering are the equilibrium points, due to the fact that they are locations that receive minimal disturbance. Jiang et al. (2014) investigated body-fixed hovering at equilibrium points and classified the manifolds close to these points into eight types. Body-fixed hovering can be used to obtain accurate measurements of a region on the surface of the target asteroid and to facilitate the descent and ascend maneuvers of a spacecraft whose mission is to return to Earth with samples (Broschart & Scheeres, 2005). Such maneuvers were used in the Hayabusa mission (Scheeres, 2004). Several bodies with different shapes can be described using simplified mathematical models. For example, Elipe & Lara (2003); Riaguas et al. (1999, 2001), analyze the motion of a particle under the gravitational field of a massive straight segment. A simple planar plate (Blesa. 2006), a rotating homogeneous cube (Liu et al. 2011) and a triaxial ellipsoid (Gabern et al. 2006) have also been used to model bodies with irregular shapes. Zeng et al. (2015) proposed that certain classes of elongated small bodies can be modeled by a double-particle-linkage called the dipole model. After that, Zeng et al. (2016) investigated the dynamical properties in the vicinity of an elongated body (using the dipole model) in order to analyze the influence of the force ratio (\(k\)), the mass ratio (\(\mu\)) and the oblateness (\(A_{2}\)) of the primary in the distribution of the equilibrium points in the \(xy\) plane. Through this dynamical analysis, Zeng et al. (2016) observed that the non-collinear equilibrium points exist only for \(0.37<k<2.07\), and that these equilibria do not depend on \(\mu\). In Zeng et al. (2016), the influence of parameters \(k\), \(\mu\) and \(A_{2}\) (oblateness of the second primary) on the positions of the of out-of-plane equilibrium points and on the topological structure of the zero velocity curves were analyzed. Zeng et al. (2016) noted that the oblateness of the second primary greatly influences the distribution of equilibrium points outside the plane. These works, among others, showed that using that simplified model it is possible to identify the main parameters governing the dynamics around certain asteroid systems (Barbosa Torres dos Santos et al. 2017a,b; Zeng et al. 2018). Inspired by the double-particle-linkage model, Lan et al. (2017) proposed that small arched bodies can be modeled by a triple-particle-linkage model determined by five parameters: \(M\), \(\omega\), \(l_{1}\), \(\tau\) and \(\beta\). Analyzing asteroids 433 Eros, 243 Ida, and the Martian moon M1 Phobos, they validated the so called tripole model, by verifying that the gravitational field distribution of unstable annular regions is similar to the one found with the polyhedral model. Later, Yang et al. (2018) proposed the non-axisymmetric triple particle-linkage model as a further step to improve the modeling towards a more realistic scenario. The authors analyzed the non-axisymmetric tripole model using three different elongated asteroids (243 Ida, 433 Eros, and (8567) 1996 HW1) and verified that the asymmetrical tripole model is more accurate than its predecessors, the dipole and the symmetrical tripole model. We consider different geometries for the tripole to compute the gravitational potential and we compute the positions of the equilibrium points for the different combinations of relevant parameters of the model. Additionally, we analyze the conditions for linear stability. We find that the existence of some equilibrium points depends on the azimuthal angle and that the stability conditions depend on the rotation of the asteroids around their axis (\(k\)), on the azimuthal angle (\(\Phi\)), and on the mass ratio of the system (\(\mu^{*}\)). Also, we investigate the influence of \(\Phi\) on the topological structure of the zero velocity curves. Finally, we find the relationship between the Jacobi constant and the azimuthal angle of the asteroid for all equilibrium points outside the asteroid's body. Although the works found in the literature deal with the validation of the symmetric and the asymmetric tripole model, a semi-analytical analysis of the tripole model has not yet been performed. So, The main goal of the present work is to perform a dynamical analysis around arched asteroids and investigate which parameters (\(k\), \(\mu^{*}\) and \(\Phi\), where \(\Phi\) determines the degree of arching of the asteroid) influence in the distribution of the equilibrium points, in the topological structure of the zero velocity curves as well as the stability condition of stationary solutions. The tripole model has additional degrees of freedom when compared to the dipole model. So, it is possible to identify new parameters, such as the azimuthal angle, and to investigate their influence on the dynamical properties around an arched system. With this, the results can be applied to investigate elongated natural arched bodies, such as some asteroid systems, comet nuclei and planet's moons. We note that, from a dynamical point of view it should be interesting to explore the effect of the shape on the inner equilibria also. However, since we focus on the applicability of the solutions, we restrict the investigation to the points outside the body of the asteroid. This article is organized as follows. The model and the methodology are discussed in Section 2. The results are analyzed and discussed in Section 3. In section 4, we investigate and compare the stability conditions of the model adopted in this study with real systems of small bodies. In section 5, some final considerations are made. ## 2 Mathematical Framework In this section, we describe the Restricted Four-Body Problem using the rotating mass tripole model. In our investigations, we use the rotating mass tripole model shown in Fig. 1. This model consists in three mass points, \(M_{1}\), \(M_{2}\), and \(M_{3}\), arranged inside an irregularly shaped asteroid. All the equations developed in this work refer to the asteroid-particle system (where particle is a body with negligible mass), i.e., the perturbations from other bodies are not taken into account. The rods connecting \(M_{1}\) to \(M_{3}\) and \(M_{2}\) to \(M_{3}\) have negligible mass and the same length \(L=1\), which is the canonical unit. The distance between \(M_{1}\) and \(M_{2}\) is denoted by \(l_{1}\), while the distance between \(M_{2}\) and the \(x\)-axis, which contains \(M_{3}\), is denoted by \(l_{2}\). The parameter \(\tau\) is defined as the ratio of \(l_{2}\) to \(l_{1}^{*}\), where \(l_{1}^{*}=l_{1}/2\), i.e. \(\tau=l_{2}/l_{1}^{*}\). The origin of the reference system (\(xy\)) is at the center of mass of the asteroid. The angle formed by each rod with the \(x\)-axis is called the azimuthal angle and is denoted by \(\Phi\). We assume that both rods make the same angle with the horizontal axis. The geometric configuration of the asteroid depends on this angle. The more arched the shape of the asteroid, the larger is the azimuthal angle. Note that when \(\Phi=0^{\circ}\) the length of the asteroid is maximum and equals to two canonical units. The equations that describe the motion of the particle in the \(xy\) plane around the tripole are written in the rotating frame that rotates with constant angular velocity \(\omega=1\), in canonical units. The unit of time is defined such that the period of rotation of the tripole is equal to \(2\pi\). We consider that \(M_{1}\), \(M_{2}\), and \(M_{3}\) have equal masses, i.e., \(m_{1}=m_{2}=m_{3}\). ### Equations of Motion Consider that the body with negligible mass (particle) is located at \(P(x,\)\(y)\) and its motion is governed exclusively by the gravitational forces due to the primary bodies \(M_{1}\), \(M_{2}\), and \(M_{3}\). \(M_{1}\) and \(M_{2}\) have masses \(m_{1}=m_{2}=\mu^{*}\), and \(M_{3}\) has mass \(m_{3}=1-2\mu^{*}\), where \(\mu^{*}\) is mass ratio defined as \[\mu^{*}=\frac{m_{2}}{m_{1}+m_{2}+m_{3}} \tag{1}\] The coordinates of the primaries, in canonical units, are, respectively, given by: \[x1=-cos(\Phi),\quad y_{1}=sin(\Phi)-2\mu^{*}sin(\Phi),\quad z_{1}=0 \tag{2}\] \[x2=cos(\Phi),\quad y_{2}=sin(\Phi)-2\mu^{*}sin(\Phi),\quad z_{2}=0 \tag{3}\] \[x_{3}=0,\quad y_{3}=-2\mu^{*}sin(\Phi),\quad z_{3}=0 \tag{4}\] Using the canonical units mentioned above the Hamilton function of the system is written as (Broucke 1968): \[H=\frac{(p_{x}+y)^{2}+(p_{y}+x)^{2}}{2}-\frac{x^{2}+y^{2}}{2}-k\left(\frac{ \mu^{*}}{r_{1}}+\frac{\mu^{*}}{r_{2}}+\frac{1-2\mu^{*}}{r_{3}}\right) \tag{5}\] where \[r_{1}=\sqrt{(x-x_{1})^{2}+(y-y_{1})^{2}+z^{2}}, \tag{6}\] Figure 1: Schematic representation of the asteroid modeled by a tripole. \[r_{2} =\sqrt{(x-x_{2})^{2}+(y-y_{2})^{2}+z^{2}}, \tag{7}\] \[r_{3} =\sqrt{(x-x_{3})^{2}+(y-y_{3})^{2}+z^{2}}, \tag{8}\] and \(p_{x}\) and \(p_{y}\) are the components of the angular momentum of the particle with respect to the \(x\)-axis and the \(y\)-axis, respectively. The dimensionless parameter \(k\) is the force ratio, given by the ratio between the gravitational force and the centrifugal force (Zeng et al., 2018, 2016) given by \[k=\frac{G^{*}M}{\omega^{*2}l_{1}^{*3}} \tag{9}\] The value of \(k\) depends on the angular velocity of the asteroid (\(\omega^{*}\)) in the international system of units, the total mass of the body (\(M\)) in kg and the length \(l_{1}^{*}\) is the distance between \(M_{1}\) and \(M_{2}\), in meters, and \(G^{*}\) is the universal gravitational constant in the international unit system. So \(k\) can be computed after obtaining the length of the segment \(l_{1}^{*}\). (Zeng et al., 2018; Lan et al., 2017; Zeng et al., 2016) From the Hamilton function, it is possible to obtain the equations of motion of the particle in the rotating reference system: \[\dot{x}=\frac{\partial H}{\partial p_{x}}=p_{x}+y \tag{10}\] \[\dot{y}=\frac{\partial H}{\partial p_{y}}=p_{y}-x \tag{11}\] The remaining dynamical equations are \[\dot{p}_{x}=-\frac{\partial H}{\partial x}=p_{y}-x+\Omega_{x}, \tag{12}\] \[\dot{p}_{y}=-\frac{\partial H}{\partial y}=p_{x}-y+\Omega_{y}, \tag{13}\] where \(\Omega_{x}\) and \(\Omega_{y}\) is the partial derivatives of \(\Omega\) with respect to \(x\) and \(y\), respectively, that given by \[\Omega=\frac{x^{2}+y^{2}}{2}+k\left(\frac{\mu^{*}}{r_{1}}+\frac{\mu^{*}}{r_{2 }}+\frac{1-2\mu^{*}}{r_{3}}\right), \tag{14}\] Equation 14 is a scalar function, also known as the pseudo-potential, which accounts for the acceleration experienced by the particle in a non-inertial reference system. The equations of motion in \(xy\) plane in the Lagrangian formulation are (Szebehely, 1967; Murray & Dermott, 1999; McCuskey, 1963; Scheeres, 2012): \[\ddot{x}-2\dot{y}=\Omega_{x}, \tag{15}\] \[\ddot{y}+2\dot{x}=\Omega_{y}, \tag{16}\] which have the same appearance as the equations of the Classical Restricted Three-Body Problem (CRTBP) (Moulton, 1914; Szebehely, 1967; Murray & Dermott, 1999; McCuskey, 1963). Considering the motion in the \(xy\) plane and multiplying Eq. 15 by \(2x\) and Eq. 16 by \(2y\), and adding all of them, we have that \[2\dot{x}\ddot{x}+2\dot{y}\ddot{y}=2\dot{x}\frac{\partial\Omega}{\partial y}+2 \dot{y}\frac{\partial\Omega}{\partial y} \tag{17}\] which can be rewritten as \[\frac{\mathrm{d}(\dot{x}^{2}+\dot{y}^{2})}{\mathrm{d}t}=2\frac{\partial\Omega }{\partial t} \tag{18}\] Integrating Eq. 18 with respect to time, we find that \[v^{2}=2\Omega-C^{*} \tag{19}\] where \(v\) is the velocity of the particle and \(C^{*}\) is a constant of integration. In this paper, \(C^{*}\) is called the modified Jacobi constant, where modified means that it is different from the constant studied by Jacobi for the case of the Classical Restricted Three-Body Problem. A special case occurs when \(k=1\), since the modified Jacobi constant has the same value as the Jacobi constant, corresponding to the CRTBP. Looking at Eq. 19, we note that the velocity of the particle depends only on the pseudo-potential and the integration constant \(C^{*}\). The constant \(C^{*}\) is determined numerically in terms of the initial position and velocity of the particle. ### Equilibrium Points Equilibrium solutions are points in which the particle has zero acceleration and zero velocity in the rotating frame. They are good locations in space to insert the spacecraft because they are located in regions where external perturbations are minimal, reducing the fuel consumption required for station-keeping maneuvers (Barbosa Torres dos Santos et al., 2017). The locations of the equilibrium points are explicitly defined in terms of \(\mu^{*}\) (and implicitly by \(\Phi\)). Making the right side of Eqs. 15 and 16 equal to zero, that is, \(\dot{x}=\dot{y}=0\), implies null accelerations: \[\begin{split} x-k\frac{\mu^{*}(x-x_{1})}{[(x-x_{1})^{2}+(y-y_{1} )^{2}]^{\frac{3}{2}}}-k\frac{\mu^{*}(x-x_{2})}{[(x-x_{2})^{2}+(y-y_{2})^{2}]^{ \frac{3}{2}}}\\ -k\frac{(1-2\mu^{*})(x-x_{3})}{[(x-x_{3})^{2}+(y-y_{3})^{2}]^{ \frac{3}{2}}}=0,\\ y-k\frac{\mu^{*}(y-y_{1})}{[(x-x_{1})^{2}+(y-y_{1})^{2}]^{\frac{3} {2}}}-k\frac{\mu^{*}(y-y_{2})}{[(x-x_{2})^{2}+(y-y_{2})^{2}]^{\frac{3}{2}}}\\ -k\frac{(1-2\mu^{*})(y-y_{3})}{[(x-x_{3})^{2}+(y-y_{3})^{2}]^{ \frac{3}{2}}}=0.\end{split} \tag{20}\] The solutions of this system of equations can be determined numerically using an iterative method. ### Linear Stability Analysis The linear stability analysis of the equilibrium points (\(x_{0}\),\(y_{0}\)) is performed by displacing the origin of the coordinate system to the position of the libration points so that the equations of motion are linearized around the origin. Equation 15 and 16 can be written as, respectively \[\begin{split}\ddot{\xi}-2\dot{\eta}&=\Omega_{xx}(x_{ 0},y_{0})\xi+\Omega_{xy}(x_{0},y_{0})\eta,\\ \ddot{\eta}+2\dot{\xi}&=\Omega_{xy}(x_{0},y_{0})\xi+ \Omega_{yy}(x_{0},y_{0})\eta,\end{split} \tag{21}\] where the partial derivatives in (\(x_{0}\),\(y_{0}\)) means that the value is computed at the libration point that is being investigated. \(\xi\) and \(\eta\) represent the coordinates of the particle with respect to the equilibrium point (\(x_{0}\), \(y_{0}\)), and \(\Omega_{xx}\), \(\Omega_{xy}\), \(\Omega_{xy}\), and \(\Omega_{yy}\) are the partial derivatives calculated at this point, given by \[\begin{split}\Omega_{xx}=k\frac{3(1-2\mu^{*})x^{2}}{(x^{2}+(y-y_ {3})^{2})^{5/2}}-k\frac{1-2\mu^{*}}{(x^{2}+(y-y_{3})^{2})^{3/2}}-k\frac{\mu^{* }}{((x-x_{1})^{2}+(y-y_{1})^{2})^{3/2}}\\ +k\frac{3\mu^{*}(x-x_{1})^{2}}{((x-x_{1})^{2}+(y-y_{1})^{2})^{5/2 }}-k\frac{\mu^{*}}{((x-x_{2})^{2}+(y-y_{2})^{2})^{3/2}}\\ +k\frac{3\mu^{*}(x-x_{2})^{2}}{((x-x_{2})^{2}+(y-y_{2})^{2})^{5/2 }}+1,\end{split}\] \[\begin{split}\Omega_{yy}=k\frac{3(1-2\mu^{*})(y-y_{3})^{2}}{(x^{2 }+(y-y_{3})^{2})^{5/2}}-k\frac{1-2\mu^{*}}{(x^{2}+(y-y_{3})^{2})^{3/2}}+k\frac{ 3\mu^{*}(y-y_{1})^{2}}{((x-x_{1})^{2}+(y-y_{1})^{2})^{5/2}}\\ -k\frac{\mu^{*}}{((x-x_{1})^{2}+(y-y_{1})^{2})^{3/2}}+k\frac{3\mu ^{*}(y-y_{2})^{2}}{((x-x_{2})^{2}+(y-y_{2})^{2})^{5/2}}\\ -k\frac{\mu^{*}}{((x-x_{2})^{2}+(y-y_{2})^{2})^{3/2}}+1,\end{split}\] \[\begin{split}\Omega_{xy}=\Omega_{yx}=k\frac{3(1-2\mu^{*})x(y-y_ {3})}{(x^{2}+(y-y_{3})^{2})^{5/2}}+k\frac{3\mu^{*}(x-x_{1})(y-y_{1})}{((x-x_{1} )^{2}+(y-y1)^{2})^{5/2}}\\ +k\frac{3\mu^{*}(x-x_{2})(y-y_{2})}{((x-x_{2})^{2}+(y-y_{2})^{2}) ^{5/2}}.\end{split} \tag{22}\] The nontrivial roots of Eq. 21 are obtained from the solution of the characteristic equation of order four in \(\lambda\): \[\lambda^{4}+(4-\Omega_{xx}^{0}-\Omega_{yy}^{0})\lambda^{2}+\Omega_{xx}^{0} \Omega_{yy}^{0}-(\Omega_{xy}^{0})^{2}=0. \tag{23}\] In Equation 23, \(\Omega_{xx}^{0}\), \(\Omega_{xy}^{0}\) and \(\Omega_{yy}^{0}\) refer, respectively, to \(\Omega_{xx}(x_{0},\ y_{0})\), \(\Omega_{xy}(x_{0},\ y_{0})\) and \(\Omega_{yy}(x_{0},\ y_{0})\). The equilibrium point is linearly stable if all the four roots (or eigenvalues \(\lambda\)) of Eq. 23 are purely imaginary, or complex with negative real parts (Olle et al. 2004). However, if one or more of the eigenvalues have a positive real part, the equilibrium point is classified as unstable (Moulton 1914; Szebehely 1967; Murray & Dermott 1999; McCuskey 1963). ## 3 Results In this section we will show the numerical results obtained from our numerical simulations. The goal is to get a general view of the dynamics of the problem, which will allow us to get some conclusions. ### Influence of [\(k\), \(\mu^{*}\), \(\Phi\)] on equilibrium points We start by computing the equilibrium points of the system. Figure 2(a) shows the points of mass \(M_{1}\) (green circle on the left side), \(M_{2}\) (green circle on the right side) and \(M_{3}\) (green middle circle), and six equilibrium points (red) for \(\Phi=0^{\circ}\), \(\mu^{*}=1/3\) and \(k=1\). The equilibrium points between \(M_{1}\) and \(M_{3}\) and between \(M_{2}\) and \(M_{3}\) overlap with the rod that connects the spheres (see Fig. 1). Therefore, we assume that these equilibrium points are inside the body of the asteroid. Figure 2(b) is similar to Fig. 2(a), but now \(\mu^{*}\) = 1/3 and \(k\) = 1, but \(\Phi=45^{\circ}\). In this case, there are eight equilibrium points, all of them off the \(x\)-axis. The position shift occurs because a new configuration is necessary to fulfill the equilibrium conditions as the positions of the primaries change, modifying the value of the azimuthal angle. We performed numerical investigations to understand how the coordinates of the external equilibrium points change when \(\mu^{*}\), \(k\) and \(\Phi\) are varied. To facilitate this analysis, we identified five regions, A, B, C, D, and E, as shown in Fig. 3. We note that the regions are symmetric with respect to the \(y\)-axis. Observe that regions A, B and E are symmetric with respect to the \(y\)-axis, i.e., if the equilibrium point (in regions A, B or E) has coordinates (\(x\), \(y\)), then there will be another equilibrium point in the coordinates (-\(x\), \(y\)). Due to this symmetric property of the regions, we will only analyze the situations for which \(x\) is negative. Figure 3 displays the equilibrium points when \(\mu^{*}\) = 1/3, \(k=1\) and \(\Phi\) is varying. It illustrates how the equilibrium points move as this parameter is varied. The corresponding azimuthal angles are given in the caption of the plot. One can note the "path" followed by the equilibrium points as \(\Phi\) increases. For \(\Phi\) = 90\({}^{\circ}\), the equilibrium points are equivalent to a dipole aligned at \(x\) = 0. For region A, we plot the behavior of the equilibrium points in the \(x\) and \(y\) plane as a function of \(\mu^{*}\), \(k\) and \(\Phi\), as shown in Fig. 4. Figure 4 shows how the coordinates of the equilibrium points varies with \(k\), \(\mu^{*}\) and \(\Phi\). Note that the graphs show the variation in the mass of the body \(M_{3}\), given by 1-2\(\mu^{*}\). That is, if the mass of \(M_{3}\) increases, consequently the mass of \(M_{1}\) (and \(M_{2}\)), given by \(\mu^{*}\), decreases. The color bar represents the value of the azimuthal angle. First, we investigate the solutions when we vary \(k\) and keep \(\mu^{*}\) and \(\Phi\) constant. Note from Fig. 4 that as the rotation of the asteroid decreases, that is, as \(k\) becomes larger, the equilibrium points move away from the center of mass of the system. This is because increasing \(k\) implies in decreasing the angular velocity of the asteroid around its own axis (see Eq. 9), thus making the value of the centrifugal force smaller. The condition of the existence of an equilibrium point is that the resulting force at one point in space is zero, that is, the gravitational force and the centrifugal force. The force at the center of the system is zero, that is, the gravitational force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the gravitational force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the gravitational force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the gravitational force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system. The force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and the centrifugal force at the center of the system is zero, that is, the centrifugal force and centrifugal force at the center of the system is zero, that, the centrifugal force and the centrifugal force at the center force must have the same value in the module, but in the opposite direction. So, to keep the centrifugal force at a value that counteracts the gravitational force, the distance from the center of mass to the position of the equilibrium points increases. As \(k\) increases, equilibrium points appear farther from the center of mass. Next, keeping the values of \(k\) and \(\Phi\) constant and varying \(\mu^{*}\), Fig. 4 shows that, as \(\mu^{*}\) becomes smaller, the equilibrium points on the \(x\)-axis inside region A approach the center of mass of the system. On the other hand, as 1-2\(\mu^{*}\) decrease (i.e. \(\mu^{*}\) increase), the equilibrum points move away from asteroid. This happens because, as the mass of the bodies \(M_{1}\) and \(M_{2}\) becomes smaller, the gravitational force on the asteroid edge decreases on the \(x\)-axis, making it necessary to reduce the centrifugal force of the system on this axis. Conversely, the positions of the libration points to move away from the center of mass along the \(y\)-axis as the gravitational force on this axis becomes larger due to the increase in the mass of \(M_{3}\). Finally, as we increase \(\Phi\), the equilibrium points in region A along the \(x\)-axis come nearer to the center of mass of the system, while the ones along the \(y\)-axis move away from the center of mass of the system. These equilibrium points only exist when the azimuthal angle is between \(0^{\circ}\) and \(76^{\circ}\). Beyond this value, the configuration of the tripole does not allow the existence of equilibrium points in region A. For region B, the variations of the \(x\) and \(y\) coordinates of the equilibrium Figure 3: Equilibrium points separated by regions of A to E. points as a function of \(k\), \(\mu^{*}\) and \(\Phi\) are shown in Figs. 5 (a) and (b), respectively. For a better view of the path taken by the equilibrium points when we vary the parameters \(k\), \(\mu^{*}\) and \(\Phi\), we insert a curve (black line) in the yellow region (\(\Phi\) = 65\({}^{\circ}\)). We note that, as \(k\) becomes smaller, the positions of the equilibrium points in region B shift away from the center of mass of the system. This is true for the equilibrium points on both the \(x\)-axis and the \(y\)-axis and it occurs for the same reason as for the equilibrium points in region A. The equilibrium points in region B occur for \(\Phi\)\(>\) 26\({}^{\circ}\). When the mass \(m_{3}\) increases, the equilbirium points tend to move away from primary body on Figure 4: Coordinates of the equilibrium points in region A as a function of \(k\), \(\mu^{*}\) and \(\Phi\). (_a_) Position of the equilibrium points on \(x\)-axis. (_b_) Position of equilibrium the points on \(y\)-axis. both the \(x\) and \(y\) axis. Analyzing the disposition of equilibrium points in region B as we increase \(\Phi\), we find that the equilibrium points move upwards along the \(y\)-axis and may cross to the positive semi-plane. Unlike what happens for solutions in region A, the equilibrium points in region B move away from the system's center of mass along the \(x\)-axis increases. Table 1 summarizes the direction of the displacement of the equilibrium points in region A and B with respect to the body's center of mass as \(k\), \(\mu^{*}\) and \(\Phi\) vary. The symbol \(\nearrow\) indicates that the corresponding parameter is increasing, while \(\equiv\) is used to indicate parameters that are fixed. Directional arrows denote the direction of the displacement of the equilibrium points. For Figure 5: Coordinates of the equilibrium points in region B as a function of \(k\), \(\mu^{*}\) and \(\Phi\). (_a_) Position of the equilibrium points on \(x\)-axis. (_b_) Position of equilibrium the points on \(y\)-axis. example, when we keep fixed the values of \(k\) and \(\mu^{*}\), and increase \(\Phi\), the equilibrium points of region A, \(x\) and \(y\), move to the right (approaching the system center of mass) and up (moving away from the system center of mass), respectively. TABLE 1 VARIATION TRENDS OF COORDINATES FOR THE EQUILIBRIUM POINTS OF THE A AND B REGIONS \begin{tabular}{|c|c|c|c|c|} \hline Summary, variation of & \multicolumn{2}{|c|}{Equilibrium point} & \multicolumn{2}{|c|}{Equilibrium point} \\ parameters. & motion. A region & \multicolumn{2}{|c|}{motion. B region} \\ \hline & \(x_{0}\) & \(y_{0}\) & \(x_{0}\) & \(y_{0}\) \\ \hline \(k\nearrow\), 1-2\(\mu^{*}\)\(\equiv\), \(\Phi\equiv\) & \(\leftarrow\) & \(\uparrow\) & \(\leftarrow\) & \(\downarrow\) \\ \(k\equiv\), 1-2\(\mu^{*}\)\(\nearrow\), \(\Phi\equiv\) & \(\rightarrow\) & \(\uparrow\) & \(\leftarrow\) & \(\uparrow\) \\ \(k\equiv\), 1-2\(\mu^{*}\equiv\), \(\Phi\nearrow\) & \(\rightarrow\) & \(\uparrow\) & \(\leftarrow\) & \(\uparrow\) \\ \hline \end{tabular} Next, we investigate regions C and D. In these two regions, the coordinates of the equilibrium points on the \(x\)-axis is zero for all points. Figure 6 shows the \(y\) coordinate of the equilibrium points as a function of \(k\), \(\mu^{*}\) and \(\Phi\). When \(k\) increases the centrifugal force becomes smaller, so the equilibrium points move downwards away from \(M_{3}\). As the mass of \(M_{3}\) increases, the gravitational force in the \(y\) direction becomes stronger, causing the positions of the equilibrium points to change. As \(\mu^{*}\) decreases and (1-2\(\mu^{*}\)) becomes larger, the equilibrium points in region C move in the negative direction of the \(y\)-axis. Finally, as \(\Phi\) increases, the equilibrium points move in the downwards along the \(y\)-axis. Figure 6 illustrates that the \(y\) coordinate of the equilibrium points of the C region depends on \(\Phi\), and that \(y_{C}(\Phi)\) becomes smaller as we increase the azimuthal angle. This happens because, as we increase \(\Phi\), \(M_{1}\) and \(M_{2}\) move upwards along the \(y\)-axis. Then, to keep the center of mass of the system at the origin, \(M_{3}\) must be in the semiplane with negative \(y\)-axis. Moreover, as \(\Phi\) increases, the coordinate of \(M_{3}\) becomes increasingly negative, so the equilibrium points in region C move away from \(M_{3}\) in the negative direction to maintain the balance between the gravitational and centrifugal forces. Figure 7 shows how the equilibrium points in region D depend on \(k\), \(\mu^{*}\) and \(\Phi\). As \(k\) increases, the equilibrium points move upwards away from the center of mass of the system. As the mass of \(M_{3}\) increases, the gravitational force in the \(y\) direction becomes larger, changing the positions of the equilibrium points. As (1-2\(\mu^{*}\)) becomes larger, the equilibrium points in region D move in the positive direction of the \(y\)-axis, away from the center of mass of the system. Finally, we investigated the behavior of the equilibrium points on the \(y\)-axis when we increase \(\Phi\). Initially, when we increase \(\Phi\), the equilibrium points on the \(y\)-axis approaches the center of mass of the system. This happens because, as we increase the azimuthal angle, \(M_{3}\) moves downward, consequently the gravitational force on the positive \(y\)-axis becomes weaker. In contrast, as we increase \(\Phi\), the bodies \(M_{1}\) and \(M_{2}\) move upward with respect to the \(y\)-axis. This causes the gravitational force to increase in region D, now causing the equilibrium points to move upwards. For a better understanding, we constructed a figure using \([\mu^{*},\,k]=[1/3,\,1]^{T}\), which shows the equilibrium point behavior in the D region when we vary \(\Phi\), as shown in Fig. 8. Then, as we increase the value of \(\Phi\), the equilibrium point values in region D decrease, approaching the center of mass of the system, it reaches a mini Figure 6: Behavior of the equilibrium points on \(y\)-axis of region C as a function of parameters \(k\), \(\mu^{*}\) and \(\Phi\). Figure 7: Behavior of the equilibrium points on \(y\)-axis of region D as a function of parameters \(k\), \(\mu^{*}\) and \(\Phi\). mum in \(\Phi_{D}=30.32^{\circ}\) and the \(y\) position of the D region that depends on \(\Phi\) is \(y(\Phi)_{D-min}=0.6664\) then it increase again, moving away from the center of mass of the system. Table 2 summarizes the direction in the displacement of the equilibrium points in region D relative to the asteroid's center of mass when \(k\), \(\mu^{*}\) and \(\Phi\) vary. ### Influence of azimutal angle on zero velocity curves The azimuthal angle is one of the mains parameters that govern the topological structure of the zero velocity curves around the tripole system. In this section, this effect is investigated. For the numerical simulations we keep [\(k\), \(\mu^{*}\)]\({}^{T}=\) [1, 1/3]\({}^{T}\) and we vary the angle \(\Phi\) in the interval [0, 90\({}^{\circ}\)]. Equation 19 relates the square of the velocity and the position of the infinitesimal mass body in a rotating coordinate system. Note that when the \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Summary, variation of & \multicolumn{2}{|c|}{Equilibrium point} & \multicolumn{2}{|c|}{Equilibrium point} \\ parameters. & motion. C region & \multicolumn{2}{|c|}{motion. D region} \\ \hline & \(x_{0}\) & \(y_{0}\) & \(x_{0}\) & \(y_{0}\) \\ \hline \(k\nearrow\), 1-2\(\mu^{*}\)\(\equiv\), \(\Phi\)\(\equiv\) & 0 & \(\downarrow\) & 0 & \(\uparrow\) \\ \(k\)\(\equiv\), 1-2\(\mu^{*}\)\(\nearrow\), \(\Phi\)\(\equiv\) & 0 & \(\downarrow\) & 0 & \(\uparrow\) \\ \(k\)\(\equiv\), 1-2\(\mu^{*}\)\(\equiv\), \(\Phi\)\(\nearrow\) & 0 & \(\downarrow\) & 0 & \(\downarrow\uparrow\) \\ \hline \end{tabular} \end{table} Table 2: VARIATION TRENDS OF COORDINATES FOR THE EQUILIBRIUM POINTS OF THE C AND D REGIONS Figure 8: \(y\)-coordinate of the equilibrium points in region D as a function of \(\Phi\) for [\(\mu^{*}\), \(k\)] = [1/3, 1]\({}^{T}\). integration constant \(C^{*}\) is numerically determined by the initial conditions, Equation 19 gives the speed with which the infinitesimal mass body moves. In particular, if \(v\) is assigned zero, Equation 19 defines the curves at which the velocity is zero. The equation that gives the zero velocity curves, in cartesian coordinates, is: \[x^{2}+y^{2}+\frac{2\mu^{*}}{r_{1}}+\frac{2\mu^{*}}{r_{2}}+\frac{2(1-2\mu^{*})}{r _{3}}=C^{*} \tag{24}\] where \(r_{1}\), \(r_{2}\) and \(r_{3}\) are as shown in equations 6, 7 and 8. The zero velocity curves in the \(xy\) plane for six different values of \(\Phi\) are shown in Fig. 9. Each curve in frames a) to f) of Fig. 9 corresponds to the value of the Jacobi constant for which the contacts between the ovals occur and the equilibrium points appear. The tripole is not illustrated in the figure. Figure 9 a) shows the zero velocity curves when the azimuthal angle is \(0^{\circ}\). Note that, for this azimuthal angle, \(M_{1}\), \(M_{2}\), and \(M_{3}\) are aligned on the \(x\)-axis. On the other hand, Figure 9 b) shows the zero velocity curves when the azimuthal angle is \(20^{\circ}\). For small values of \(x\) and \(y\) that satisfies Eq. 24, the first two terms are virtually irrelevant and the equation can be written as: \[-\frac{\mu^{*}}{r_{1}}-\frac{\mu^{*}}{r_{2}}-\frac{2(1-2\mu^{*})}{r_{3}}= \frac{C^{*}}{2}-\frac{(x^{2}+y^{2})}{2}=\frac{C^{*}}{2}-\epsilon. \tag{25}\] This equation gives the equipotential curves for the three centers of force \(\mu^{*}\), \(\mu^{*}\) and 1-2\(\mu^{*}\), as shown in Fig. 9 a) and b). For large values of \(C^{*}\), ovals consist of closed curves around each of the body. If we decrease \(C^{*}\), the ovals around \(M_{1}\), \(M_{2}\) and \(M_{3}\) (inner ovals) expand, and the outer contours (outer ovals) move towards the center of mass of the system. The inner ovals connect with the outer ovals, resulting in the equilibrium points in region A (black curve) and the ovals between the bodies also connect, resulting in the equilibrium points in region E (red curve). See Figures 9 a) and b). If \(C^{*}\) is further decreased, the regions where movement is allowed become larger. This happens because the oval around the masses increases and merges with the outer oval, leaving only a small confined area (regions C and D), where the movement is impossible. Note from Fig. 9 a) that, due to the symmetry of the problem, equilibrium points in regions C and D appear for the same value of \(C^{*}\) (green curve). On the other hand, when the azimuthal angle is different from \(0^{\circ}\), the equilibrium points in the C and D regions appear for different Jacobi constant values (green and blue curves, respectively) shown in Fig. 9 b). Figure 9 c) shows the zero velocity curves when the azimuthal angle is \(40^{\circ}\). The change in the topological structure of the zero velocity curves is evident as the azimuthal angle is varied. Note from Figure 9 c) that, in addition to the contact points shown in Figs. 9 a) and b), new contact points emerge (red curves), in region B. Through numerical simulations, we observe that the B regions arises when the azimuthal angle is greater than \(26^{\circ}\). When we consider the azimuthal angle of \(60^{\circ}\), \(M_{1}\), \(M_{2}\) and \(M_{3}\) form an equilateral triangle relative to the rotating reference system. Figure 9: Influence of the azimuthal angle on zero-velocity curves in the \(xy\) plane. (_a_) Zero velocity curves for a \(0^{\circ}\) azimuthal angle. (_b_) Zero velocity curves for a \(20^{\circ}\) azimuthal angle. (_c_) Zero velocity curves for a \(40^{\circ}\) azimuthal angle. (_d_) Zero velocity curves for a \(60^{\circ}\) azimuthal angle. (_e_) Zero velocity curves for a \(80^{\circ}\) azimuthal angle. (_f_) Zero velocity curves for a \(90^{\circ}\) azimuthal angle. Thus, the zero velocity curves has a symmetrical shape. When the azimuthal angle is \(60^{\circ}\), the equilibrium points in regions A and C arise for \(C_{A-C}=2.946725190\). Likewise, the (\(C_{B-D}=3.35803516\)) is required for contacts between ovals in regions B and D. If the masses of \(M_{1}\) and \(M_{2}\) are different, the symmetrical property of the equilibrium points and zero velocity curves with respect to the \(xy\) axis is not valid. Figure 9 e) shows the zero velocity curves for an azimuthal angle of \(80^{\circ}\). In Figure 9 e), we observe that regions A cease to exist, leaving only regions B, C, D and E. This means that, just as regions B depend on the azimuthal angle to emerge or disappear, so does regions A. Regions A ceases to exist for\(\Phi\)\(>76^{\circ}\). Finally, considering an azimuthal angle of \(90^{\circ}\), \(M_{1}\) and \(M_{2}\) overlap, which means that they behave as a single body with mass \(m=m_{1}+m_{2}\). For this configuration, the system is similar to the Classical Restricted Three-body Problem with a mass ratio of \(\mu^{*}=1/2\). Note from Figures 9 a) - f) that, as we increase the azimuthal angle from 0 to \(90^{\circ}\), noticeable changes in the zero velocity curves can be observed near the arched asteroid. Note that the regions that connect the ovals move along the \(xy\) plane as we vary the azimuthal angle. Some fixed points also emerge or disappear. The values of the modified Jacobi constants at the contact points in each region in Fig. 9 are shown in Fig. 10. Figures 10 a) - d) show how the values of the Jacobi constant at regions A, B, C, and D (\(C_{A}\), \(C_{B}\), \(C_{C}\), and \(C_{D}\), respectively) vary as a function of the azimuthal angle \(\Phi\). In Fig. 10 - a), we see that the values of the Jaccobi constant \(C_{A}(\Phi)\) decrease as the azimuthal angle \(\Phi\) increases. For \(C_{B}(\Phi)\), one notes that, initially the value of the Jacobi constant increases with increasing azimuthal angle and then decreases, as shown in Fig. 10 b). This behavior causes a maximum value for \(C_{B}(\Phi)\), which happens at \(C_{B}=2.989303755\), for \(\Phi=46.524234^{\circ}\). On the other hand, the values of the function \(C_{C}(\Phi)\) increase as we increase \(\Phi\). Finally, for \(C_{D}\), as we increase \(\Phi\), initially, the values of \(C_{D}\) become smaller, reaching a minimum value of \(C_{D}=2.4120014\), when the azimuthal angle is approximately \(\Phi\) = \(19.987^{\circ}\), and then it increases. ### Stability conditions Now, we focus on the analysis of the stability conditions for the equilibrium points in regions D and C, (\(L_{D}\) and \(L_{C}\)), respectively, i.e., points that have null \(x\) coordinate. We describe how the stability conditions for the equilibrium points \(L_{D}\) (and \(L_{C}\)) depend on the azimuthal angle (\(\Phi\)), the force ratio (\(k\)) and the mass ratio (\(\mu^{*}\)). Indeed, if any of these parameters are changed, the stability condition (unstable or stable) of these equilibrium points may also change. First let's look at the stability condition for region D. Figure 11 shows plots of \(\Phi\) versus \(\mu^{*}\), showing the stability transition. We see from Fig. 11 a) that, when the azimuthal angle increases and \(k=1\), the mass ratio required to maintain the equilibrium point \(L_{D}\) stable decreases. When the angle is \(0^{\circ}\), the maximum mass ratio to allow linear stability of the system studied is \(\mu^{*}=0.0742683\). If the mass ratio is greater than this value, the system is unstable for every azimuthal angle. Note that, when \(\Phi\to 90^{\circ}\), the two masses of the tripole (\(m_{1}\) and \(m_{2}\)) collapse into a mass point with the mass ratio \(2\mu^{*}\). In this case, the point \(L_{D}\) is similar to the equilibrium point \(L_{3}\) of the Classical Restricted Three-Body Problem. Therefore this equilibrium point is linearly unstable for any mass ratio, which is in agreement with the literature (Moulton 1914; Szebehely 1967; Murray & Dermott 1999; McCuskey 1963). Figures 11 b) to d) show \(\Phi\) versus \(\mu^{*}\), which illustrate the stability regions Figure 10: Jacobi constant behavior in regions A, B, C and D, respectively, as a function of azimuthal angle. (_a_) Values of the Jacobian constant (\(C_{A}\)) at the equilibrium points versus \(\Phi\). (_b_) Values of the Jacobian constant (\(C_{B}\)) at the equilibrium points versus \(\Phi\). (_c_) Values of the Jacobian constant (\(C_{C}\)) at the equilibrium points versus \(\Phi\). (_d_) Values of the Jacobian constant (\(C_{D}\)) at the equilibrium points versus \(\Phi\). when \(k>1\). We see from Fig. 11 (b) that, for \(\Phi<70^{\circ}\), the stability transition is similar to the case when \(k=1\), but the bifurcation occurs when \(\Phi\sim 70^{\circ}\). Notice in the graph that a narrow vertical strip appears, causing the \(L_{D}\) equilibrium point stable for any value of \(\mu^{*}\). As \(\Phi\) increases, the stability conditions change again, making the equilibrium point stable only for high values of \(\mu^{*}\). So, observe that, when the system has low values of \(\mu^{*}\), the equilibrium points are linearly stable for \(\Phi<76^{\circ}\). On the other hand, for a \(\Phi<70^{\circ}\), the stability condition is not stable for \(\Phi<70^{\circ}\). Figure 11: Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) for the stability condition of the equilibrium point \(L_{D}\) considering different values of \(k\). (_a_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k=1\) for the stability condition of the equilibrium point \(L_{D}\). (_b_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k=3\) for the stability condition of the equilibrium point \(L_{D}\). (_c_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k=5\) for the stability condition of the equilibrium point \(L_{D}\). (_d_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k=7\) for the stability condition of the equilibrium point \(L_{D}\). very arched asteroid (\(\Phi>76^{\circ}\)), the equilibrium point \(L_{D}\) is linearly stable when the mass ratio of the system is high. Figure 11 c) shows the stability transition curve for k = 5. We observed that when \(\Phi~{}<~{}60^{\circ}\), the stability transition curve is similar to the previous cases. We also notice that narrow vertical strip appears (around \(\Phi~{}\approx~{}65^{\circ}\)) and has a larger area with respect to the previous case. This means that we can also find stable regions when we consider high values of \(\Phi\) (\(\Phi~{}>~{}60^{\circ}\)) and \(\mu^{*}\). As we increase the value of \(\Phi\) (when \(\Phi~{}>70^{\circ}\)), the equilibrium point \(L_{D}\) becomes linearly stable only for high values of \(\mu^{*}\). For low values of \(\mu^{*}\), the equilibrium point \(L_{D}\) is stable when \(\Phi~{}<~{}70^{\circ}\). The letters S and U shown in Fig. 11 c) are abbreviations for Stable and Unstable condition, respectively. Finally, Fig. 11 d) shows the stability transition when \(k=7\). Note that, as in the previous cases, when we consider \(k=7\), narrow vertical strip appears (around \(\Phi~{}60^{\circ}\)), allowing the equilibrium point \(L_{D}\) to be linearly stable for any value of \(\mu^{*}\). If we gradually increase \(\mu\) and \(\Phi\), the stable regions remain until \(\Phi~{}=~{}89.6^{\circ}\). On the other hand, if we decrease \(\mu\) as we increase \(\Phi\) (from \(66^{\circ}\)), the stable region extends to \(\Phi~{}=~{}67^{\circ}\). Note in Figures 11 b) - d) that the area of the narrow vertical strip becomes larger as we increase the k value. This means that, the higher the value of k, the larger the region that allows linear stability of equilibrium point \(L_{D}\) to any values of \(\mu^{*}\). A similar analysis was performed for the equilibrium point \(L_{C}\) and the results are shown in Fig. 12. Unlike Figure 11 a), when \(k=1\), Figure 12 a) shows that there are two stability transition limits. The first limit (lower transition, left-hand curve) exists for small azimuthal angles, starting at \(0^{\circ}\), with a mass ratio of 0.07427949. Above \(18.351^{\circ}\), numerical evidence shows that another stability transition arises, as shown by the right-hand curve in Fig. 11 a). Figures 12 b) - d) show \(\Phi\) versus \(\mu^{*}\), which illustrates the stability regions, when \(k>1\). Figure 12 b) shows two stability transitions. Note that the first transition starts when \(\Phi=0^{\circ}\) and \(\mu^{*}\) is approximately 0.074. The second stability transition starts when \(\Phi=25^{\circ}\), when the asteroid is \(8\circ\) more arched than the previous case, so the equilibrium point \(L_{C}\) has a wider stable region compared to when \(k=1\). For \(\Phi>57.5^{\circ}\), the equilibrium point \(L_{C}\) is unstable for any mass ratio. If we further increase the value of \(k\) to \(k=5\), the stability region becomes even larger, as shown in Fig. 12 c). The first stability transition arises when \(\Phi=0^{\circ}\) and \(\mu^{*}=0.08\). In contrast, the second curve arises when \(\mu^{*}=0\) and \(\Phi=28^{\circ}\), thus limiting the region that allows the equilibrium point \(L_{C}\) to be stable. If the azimuthal angle is greater than \(68^{\circ}\), the equilibrium point \(L_{C}\) becomes unstable for any mass ratio. Finally, we made an analysis considering \(k=7\). Note from Figure 12 d) that, due to the low rotation of the asteroid, it results in a larger area on the graph that makes the \(L_{C}\) equilibrium point linearly stable. For \(k=7\), the first transition starts when \(\Phi=0^{\circ}\) and \(\mu^{*}=0.09\). In contrast, the second stability transition starts when \(\Phi=29^{\circ}\) and \(\mu^{*}=0\). This shows that, when we increase the value of \(k\) (ie, the angular velocity of the asteroid becomes slower), the two stability transition curves intersect at a larger azimuthal angle, ranging from approximately, \(\Phi\) = 35\({}^{\circ}\) when \(k\) = 1, until \(\Phi\) = 75\({}^{\circ}\) when \(k\) = 7. This shows that, as we increase the force ratio \(k\), the stability region becomes larger. ## 0.4 Application To validate the equations and results developed in this article, we compared the results obtained with four celestial bodies, (i) 243 Ida, (ii) 433 Eros, Figure 12: Values of the mass ratio (\(\mu^{*}\)) versus the elevation angle (\(\Phi\)) for the stability condition of the equilibrium point \(L_{C}\) considering different values of \(k\). (_a_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k\) = 1 for the stability condition of the equilibrium point \(L_{C}\). (_b_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k\) = 3 for the stability condition of the equilibrium point \(L_{C}\). (_c_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k\) = 5 for the stability condition of the equilibrium point \(L_{C}\). (_d_) Values of the mass ratio (\(\mu^{*}\)) versus the azimuthal angle (\(\Phi\)) when \(k\) = 7 for the stability condition of the equilibrium point \(L_{C}\). (iii) 1996(HW1) and (iv) M1 Phobos. The parameters \(k\), \(\Phi\) and \(\mu^{*}\) were taken from Lan et al. (2017) (for Ida and M1 Phobos) and Yang et al. (2018) (for Eros and 1996 HW1). The linear stability of the equilibrium points of the celestial bodies mentioned above were obtained by Wang et al. (2014) and used in this study for comparison purposes. In Wang et al. (2014), regions \(C\) and \(D\) are the equilibrium points \(E_{4}\) and \(E_{2}\), respectively. The optimized parameters of the bodies under analysis in this article are shown in Table 3, where \(\Phi\) is determined by doing \(\Phi=\arctan\left(2\sigma\right)\) in which \(\sigma\) is given by \(l_{2}/l_{1}\) and was determined in Lan et al. (2017) and Yang et al. (2018). Knowing the parameters for each celestial body, it is possible to find the stability conditions for equilibrium points \(E_{4}\) and \(E_{2}\) from Equations 22 and 23. Figure 13 shows \(\mu^{*}\) versus \(\Phi\) and illustrates the stability regions for equilibrum points \(L_{C}\) and \(L_{D}\) for asteroids 1996 HW1, 243 Ida and 433 Eros and M1 Phobos. Figure 13 a) and b) plot \(\Phi\ vs.\ \mu^{*}\) (27.43, 0.44) for the asteroid 1996 HW1. We observe that the point is outside the region that allows the stability of the equilibrium points \(E_{2}\) and \(E_{4}\), showing that these equilibrium points are unstable, a result that coincides with the results obtained by Wang et al. (2014) Figures 13 c) and d) show the stability region of the equilibrium points \(E_{2}\) and \(E_{4}\) when \(k=22\). We plotted the ordered pair (56.09, 0.396) for the M1 Phobos. Due to the characteristics (shape, density and rotation) of M1 Phobos, the equilibrium points \(E_{2}\) and \(E_{4}\) are within the stability region, making these equilibrum points linearly stable. The stability of the equilibrium points depends on the bulk density, the shapes, and the angular velocities of the asteroids. The bulk density is obtained from the composition of the asteroid, a characteristic that is hard to change. The shapes of the asteroids are shaped in the long-term in space. On the other hand, the angular velocities of asteroids are altered due to the accelerations caused by the YORP effect (Paddack, 1969). \begin{table} \begin{tabular}{c c c c} \hline Asteroid & \(k\) & \(\mu^{*}\) & \(\Phi\) \\ \hline 243 Ida & 0.402 & 0.237 & 19.94\({}^{\circ}\) \\ M1 Phobos & 22.003 & 0.396 & 56.09\({}^{\circ}\) \\ 433 Eros & 0.434 & 0.260 & 18.95\({}^{\circ}\) \\ 1996 (HW1) & 3.158 & 0.443 & 27.43\({}^{\circ}\) \\ \hline \end{tabular} \end{table} Table 3THE OPTIMAL PARAMETERS FOR THE TRIPOLE MODELS Figure 13: Values of the \(\mu^{*}\)_versus_\(\Phi\) for the stability condition of the equilibrium point \(L_{D}\) (\(E_{2}\)) and \(L_{C}\) (\(E_{4}\)) for a specific \(k\) value. (_a_) \(k\) = 3.15 for the equilibrium point \(L_{D}\) (\(E_{2}\)) of the 1996 HW1 asteroid. (_b_) \(k\) = 3.15 for the equilibrium point \(L_{C}\) (\(E_{4}\)) of the 1996 HW1 asteroid. (_c_) \(k\) = 22 for the equilibrium point \(L_{D}\) (\(E_{2}\)) of the M1 Phobos. (_d_) \(k\) = 22 for the equilibrium point \(L_{C}\) (\(E_{r}\)) of the M1 Phobos. _e_) \(k\) = 3.15 for the equilibrium point \(L_{D}\) (\(E_{2}\)) of the 243 Ida and 433 Eros asteroids. (_f_) \(k\) = 3.15 for the equilibrium point \(L_{C}\) (\(E_{4}\)) of the 243 Ida and 433 Eros asteroids. Observe that the equilibrium points \(E_{2}\) and \(E_{4}\) of M1 Phobos are close to the boundary that guarantees the condition of stability (see Fig. 13 c and d). If the angular velocity of this body increases, as predicted by the YORP effect, \(k\) will decrease, making the equilibrium point to be unstable. This result shows the importance of carrying out a generalized analysis with the aim of globally understanding the dynamic properties in the vicinity of celestial bodies. Finally, Fig. 13 e) and f) provide information regarding the stability condition for 243 Ida and 433 Eros asteroids. In Table 3 we see that \(k\) for asteroids 243 Ida and 433 Eros are very close. Because of this, we will show the results for these two asteroids on the same graph, in Figures 13 e) and f). We plotted (\(\phi\), \(\mu^{*}\)) = (18.95, 0.26) and (\(\phi\), \(\mu^{*}\)) = (19.94, 0.23) for 433 Eros and 243 Ida asteroids, respectively. We observed that the equilibrium points \(E_{2}\) and \(E_{4}\) (Fig. 13 e) and f), respectively) of asteroids 243 Ida and 433 Eros are unstable due to their physical and dynamical characteristics. These results show that our generalized analysis coincides with the results obtained for a given asteroid that can be modeled as a rotating mass tripole. ## 0.5 Conclusion Dynamic properties of the rotating mass tripole were addressed in this article. The rotating mass tripole consists of three point masses whose geometric shape depends on the shape of the asteroids under analysis. We observed that the gravitational potential depends on three free parameters, which are: the force ratio, the mass ratio and the azimuthal angle. We note that the amount of equilibrium points that arise depends on the combination of these free parameters, it can be found from five to eight equilibrium points. The tendency to vary the location of the equilibrium points according to the free parameters is determined. We also analyzed the topological structure of the zero velocity curves with respect of the azimuthal angle. We observed that the zero velocity curves around the rotating mass tripole have significant changes due to the arched shape of the asteroid. Analyzing the linearized equations, we observed that the condition of stability of the equilibrium points in region C and D depends of \(k\), \(\mu^{*}\) e \(\Phi\). For region C, we observed the appearance of bifurcations when \(k>1\). On the other hand, the stability of the equilibrium points in region D has two stability transition limits for any value of \(k\). For both regions (C and D), it was observed that, as we increase the value of \(k\), the region of stability becomes larger. Understanding the dynamics of a particle that is subject to the gravitational field of an elongated asteroid is extremely important for the exploration of these bodies. The results presented here provided a global characterization of the dynamic behavior of an infinitesimal mass body around an asteroid modeled as a rotating mass tripole. This allowed a better understanding of the main factors that influence of the topological structure of the gravitational field in the vicinity of asteroids that have an arched shape. More complex models, such as the polyhedral method, are much more accurate and are widely used in the analysis of a specific asteroid, but the present model proved to be useful in providing general information about families of asteroids similar to the tripole model. ## 0.6 Acknowledgements The authors wish to express their appreciation for the support provided by: grants 406841/2016-0, 140501/2017-7, 150678/2019-3, 422282/2018-625 9 and 301338/2016-7 from the National Council for Scientific and Technological Development (CNPq); grants 2016/24561-0, 2018/00059-9 and 2016/18418-0, from Sao Paulo Research Foundation (FAPESP); grant 88887.374148/2019-00 from the National Council for the Improvement of Higher Education (CAPES) and to the National Institute for Space Research (INPE).
2310.15867
Complex Poynting vector in gyromagnetic media and its impact on power flow in guided modes
In this paper, we show the relation between the time-varying spinning nature of the instantaneous Poynting vector and the phasor form of the complex Poynting vector in the bulk of the gyromagnetic medium. We show the presence of a transverse reactive power component in the bulk of the gyrotropic medium, even for plane wave propagation. We use a simple quantification technique of Poynting vector spin to analyze the rotation of the instantaneous Poynting vector using the complex phasor form of the time-averaged Poynting vector. For a transverse electric mode, we show the similarity between the Poynting vector spin and the photonic spin of the magnetic field. The Poynting vector spin is then used to represent the transverse power transfer across a ferrite-air interface supporting TE surface wave modes. Considering a gyromagnetic ferrite-filled rectangular waveguide following the TE mode profile, we show the correspondence between the Poynting vector spin and the overall positive and backward power flow. We analytically propose a mechanism to engineer the region of the waveguide supporting backward and forward power propagation.
Rajarshi Sen, Sarang Pendharker
2023-10-24T14:23:44Z
http://arxiv.org/abs/2310.15867v1
# Complex Poynting vector in gyromagnetic media and its impact on power flow in guided modes ###### Abstract In this paper, we show the relation between the time-varying spinning nature of the instantaneous Poynting vector and the phasor form of the complex Poynting vector in the bulk of the gyromagnetic medium. We show the presence of a transverse reactive power component in the bulk of the gyrotropic medium, even for plane wave propagation. We use a simple quantification technique of Poynting vector spin to analyze the rotation of the instantaneous Poynting vector using the complex phasor form of the time-averaged Poynting vector. For a transverse electric mode, we show the similarity between the Poynting vector spin and the photonic spin of the magnetic field. The Poynting vector spin is then used to represent the transverse power transfer across a ferrite-air interface supporting TE surface wave modes. Considering a gyromagnetic ferrite-filled rectangular waveguide following the TE mode profile, we show the correspondence between the Poynting vector spin and the overall positive and backward power flow. We analytically propose a mechanism to engineer the region of the waveguide supporting backward and forward power propagation. ## I Introduction Gyrotropic materials form an important class of materials encompassing naturally occurring and engineered materials. Gyrotropic materials can be broadly classified into gyroelectric and gyromagnetic materials. Among these two classes, gyromagnetic materials have seen widespread applications due to their availability in the microwave frequency region. Microwave ferrites have been extensively used to realize isolators[1; 2], circulators[3; 4; 5], and phase shifters[6; 7; 8]. Apart from these conventional applications, they have been recently used to realize nonreciprocity in numerous applications, such as topological insulators[9; 10], thin films[11; 12], and mode-converting waveguides[13; 14]. Despite the extensive use of gyromagnetic media in various microwave and millimeter wave applications, an understanding of important physical phenomena corresponding to such media was still missing. Researchers consider isofrequency surfaces helpful in investigating the wave propagation phenomena in complex media. These isofrequency surfaces have been used to investigate wave propagation characteristics in both natural[15; 16; 17] as well as engineered materials[18; 19; 20; 21]. We recently performed a thorough investigation of the nonreciprocal photonic spin profile in the bulk of the media using isofrequency surfaces for gyrotropic media[22]. This investigation widened our understanding of the impact of gyrotropy over manipulating the plane wave propagation in the bulk of the media and the existence of asymmetrical photonic spin profiles for guided modes. However, a comprehensive understanding of power propagation in gyrotropic media is still missing. We believe a detailed investigation of the Poynting vector investigation will help us reveal key insights into the wave propagation behavior in gyrotropic materials and provide governing principles for engineering novel applications. Poynting vector, which corresponds to the power flux density, has been used extensively in different media and structures [23; 24; 25; 26; 27; 28; 29; 30; 31; 32] to observe the propagation of power. Conventionally, the Poynting vector is represented using the instantaneous and time-averaged forms. The time-averaged Poynting vector gives us an idea about the overall power flow in the medium. The most general form of this time-averaged Poynting vector is its complex form, consisting of both real and imaginary components, where the imaginary component corresponds to the reactive power component. However, we mostly consider only the real power component because, for most applications, the investigation of real power itself is sufficient, such as reflection of power from metasurfaces[33; 34], coupling of power for guided modes[35; 36; 37], and trapping of light for zero and negative Poynting vector[38; 39]. However, neglecting the imaginary (reactive) part of the complex Poynting vector leads to an incomplete picture of the power flow in the medium. It has been shown that the complex Poynting vector, which includes both real and reactive power component plays an important role in the investigation of propagating beams[40], the near field of Hertzian dipoles[41], evanescent waves[42; 43] and optical forces over particles [44; 45; 40]. Recently, we have seen research efforts being made to link this complex time-averaged Poynting vector with the time-varying instantaneous Poynting vector. Litvin has demonstrated the relationship between complex time-averaged Poynting vector and the elliptical rotation of the instantaneous Poynting vector in the case of Gaussian and Bessel beams[46]. Kim et al. made similar observations in [47], where the longitudinal and transverse components of the time-averaged Poynting vector are used to explain the spinning nature of the instantaneous Poynting vector across an interface-supporting surface wave. This transverse reactive power is crucial as it can explain the source of backward power propagation observed in guided modes. Such investigation into the nature of the Poynting vector in gyrotropic materials is missing. In this paper, we show that gyrotropic material sup ports a complex Poynting vector in the bulk for plane wave propagation. We further analyze the complex Poynting vector with the help of isofrequency surfaces and link it with the spinning nature of the instantaneous Poynting vector for plane waves in the bulk of the medium. We quantify this rotation of the instantaneous Poynting vector by introducing a Poynting vector spin. In the special case of transverse electric mode, we show that the Poynting vector spin is directly proportional to the photonic spin of the magnetic field (\(\mathrm{Im}(\vec{H}^{*}\times\vec{H})\)). We then use this Poynting vector spin to explain the instantaneous transverse power flow across an interface between gyromagnetic ferrite and air. Further, considering a ferrite-filled rectangular waveguide, we show the correspondence between the backward propagation of power density and the opposite sense of Poynting vector spin. We provide guiding principles, using which the zero-power density point in the cross-section of the waveguide can be engineered. The results presented in this paper play a crucial role in understanding the nature of the instantaneous Poynting vector in gyrotropic media and the corresponding relation between the reactive and real components of the time-averaged Poynting vector. This paper is organized as follows. In section II, we analyze the power flow phenomenon in the bulk of the gyromagnetic medium. Considering the transverse electric mode, in section III, we quantify the temporal rotation of the instantaneous Poynting vector. In section IV, using the understanding of the temporal behavior of the Poynting vector for TE mode in gyromagnetic media, we consider two different cases of surface wave propagation. We reveal the role of transverse reactive power in supporting the unidirectional surface wave propagation in gyromagnetic ferrite. Further, in section V, we report the presence of negative power flow in a ferrite-filled waveguide, which is validated using full-wave simulations. Our analytical expressions help in engineering the Poynting vector distribution across the cross-section of the waveguide. Finally, we conclude our results in section VI. ## II Poynting vector in bulk of gyromagnetic media For a comprehensive investigation of power flow over gyrotropic material, we select YIG ferrite as our material of choice. The material specifications are given in Table 1. Using the YIG ferrite material parameters, we can find the permeability and gyrotropy terms, \(\mu^{\prime}\) and \(\kappa^{\prime}\), respectively, corresponding to the operating frequency. We can express the anisotropic permeability tensor for the \(\hat{z}\)-biased ferrite using these anisotropy terms as \[\overset{\leftrightarrow}{\mu}_{r}=\begin{bmatrix}\mu^{\prime}&-j\kappa^{ \prime}&0\\ j\kappa^{\prime}&\mu^{\prime}&0\\ 0&0&1\end{bmatrix}. \tag{1}\] Considering the uniaxial nature of gyromagnetic ferrite, we can simplify the investigation by analyzing the isofrequency surfaces corresponding to a 2D propagation scenario. However, the selected 2D plane should be parallel to the bias-axis, which in this case is the \(\hat{z}\)-axis. Here, we select the \(X-Z\) plane as the plane of wave propagation, and correspondingly, we will have wave vector component \(k_{y}=0\) and non-zero value of in-plane wave-vector components \(k_{x}\) and \(k_{z}\). For this work, we exclude all the losses for simplicity. Thus, the material parameters \(\mu^{\prime}\) and \(\kappa^{\prime}\), and the wave-vector components \(k_{x}\) and \(k_{z}\) along the isofrequency curves will be purely real quantities. Once we have computed the isofrequency contour, we can find the field vector intensities for the bulk of the medium using the method followed in [22]. Using the field intensities, we can finally compute both the instan Figure 1: (a) Graphical illustration of the temporal variation of the instantaneous Poynting vector (blue conical trace) in an elliptical manner around the time-averaged Poynting vector (red arrow). (b), (c) Isofrequency curves corresponding to the elliptical and hyperbolic regimes at frequencies 6 and 11 GHz, respectively. The isofrequency curves are color-mapped with the transverse reactive power. The arrows represent the time-averaged Poynting vector normal to the isofrequency surfaces. \begin{table} \begin{tabular}{c c} \hline \hline Parameter & Specification \\ \hline Magnetic saturation (\(4\pi M_{s}\)) & 1800 Gauss \\ Magnetic bias (\(H_{0}\)) & 3570 Oersted \\ Dielectric permeability (\(\epsilon_{f}\)) & 14 \\ \hline \hline \end{tabular} \end{table} Table 1: YIG ferrite material specifications taneous pointing vector \(\vec{P}_{ins}=\text{Re}(\vec{E})\times\text{Re}(\vec{H})\) and time-averaged Poynting vector \(\vec{P}_{avg}=0.5\vec{E}^{*}\times\vec{H}\) in its Phasor form (see Appendix A). \[\vec{P}_{ins}=H_{0}^{2}k_{0}^{2}\Bigg{(}\Bigg{(}\frac{k_{x}(\epsilon_ {f}k_{0}^{2}\kappa^{\prime 2}(k_{x}^{2}-\epsilon_{f}k_{0}^{2})^{2}+k_{z}^{2}(k_{x}^{2}+k_{z}^{2} -\epsilon_{f}k_{0}^{2}\mu^{\prime})^{2})}{2\omega\epsilon_{0}(k_{x}^{2}- \epsilon_{f}k_{0}^{2})^{2}(k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2}\mu^{ \prime})^{2}}\] \[+\frac{k_{x}(-\epsilon_{f}k_{0}^{2}\kappa^{\prime 2}(k_{x}^{2}- \epsilon_{f}k_{0}^{2})^{2}+k_{z}^{2}(k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2} \mu^{\prime})^{2})}{2\omega\epsilon_{0}(k_{x}^{2}-\epsilon_{f}k_{0}^{2})^{2}(k _{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2}\mu^{\prime})^{2}}\cos(2\omega t) \Bigg{)}\hat{x}-\frac{\kappa^{\prime}k_{x}(\epsilon_{f}k_{0}^{2}-k_{x}^{2}-k_{ z}^{2})}{2\omega\epsilon_{0}(\epsilon_{f}k_{0}^{2}-k_{x}^{2})(k_{x}^{2}+k_{z}^{2}- \epsilon_{f}k_{0}^{2}\mu^{\prime})}\sin(2\omega t)\hat{y}\] \[+\Bigg{(}\frac{k_{z}((k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2} \mu^{\prime})^{2}+\epsilon_{f}k_{0}^{2}\kappa^{\prime 2}(\epsilon_{f}k_{0}^{2}-k_{ x}^{2}))}{2\omega\epsilon_{0}(\epsilon_{f}k_{0}^{2}-k_{x}^{2})(k_{x}^{2}+k_{z}^{2} -\epsilon_{f}k_{0}^{2}\mu^{\prime})^{2}}+\frac{k_{z}((k_{x}^{2}+k_{z}^{2}- \epsilon_{f}k_{0}^{2}\mu^{\prime})^{2}-\epsilon_{f}k_{0}^{2}\kappa^{\prime 2}( \epsilon_{f}k_{0}^{2}-k_{x}^{2}))}{2\omega\epsilon_{0}(\epsilon_{f}k_{0}^{2}-k_{ x}^{2})(k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2}\mu^{\prime})^{2}}\cos(2\omega t) \Bigg{)}\hat{z}\Bigg{)} \tag{2}\] \[\vec{P}_{avg}=H_{0}^{2}k_{0}^{2}\Bigg{(}\frac{k_{x}(\epsilon_{f}k_ {0}^{2}\kappa^{\prime 2}(k_{x}^{2}-\epsilon_{f}k_{0}^{2})^{2}+k_{z}^{2}(k_{x}^{2}+k_{z}^{ 2}-\epsilon_{f}k_{0}^{2}\mu^{\prime})^{2})}{2\omega\epsilon_{0}(k_{x}^{2}- \epsilon_{f}k_{0}^{2})^{2}(k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2}\mu^{ \prime})^{2}}\hat{x}\\ +j\frac{\kappa^{\prime}k_{x}(\epsilon_{f}k_{0}^{2}-k_{x}^{2}-k_{ z}^{2})}{2\omega\epsilon_{0}(\epsilon_{f}k_{0}^{2}-k_{x}^{2})(k_{x}^{2}+k_{z}^{2}- \epsilon_{f}k_{0}^{2}\mu^{\prime})}\hat{y}+\frac{k_{z}(\epsilon_{f}k_{0}^{2} \kappa^{\prime 2}(\epsilon_{f}k_{0}^{2}-k_{x}^{2})+(k_{x}^{2}+k_{z}^{2}- \epsilon_{f}k_{0}^{2}\mu^{\prime})^{2})}{2\omega\epsilon_{0}(\epsilon_{f}k_{0 }^{2}-k_{x}^{2})(k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2}\mu^{\prime})^{2}} \hat{z}\Bigg{)} \tag{3}\] Here, \(k_{0}\) denotes the free-space propagation constant. We observe that the instantaneous Poynting vector for real power \(\vec{P}_{inst}\) in the bulk YIG ferrite contains both time-varying and time-invariant terms, which we can see in eq. (2). Interestingly, the time-invariant terms exist only along the plane of wave propagation, i.e., having \(x\)- and \(z\)-components only. On the contrary, the time-varying component of \(\vec{P}_{inst}\) exists along the \(X-Z\) plane as well as transverse \(\hat{y}\)-direction. The transverse component, however, is \(90^{\circ}\) out of phase with respect to the in-plane component, as indicated by their sine and cosine terms, respectively. This phase difference between the orthogonal components leads to an elliptical sweep of \(\vec{P}_{inst}\) with respect to time. This spatio-temporal sweep of \(\vec{P}_{inst}\) is illustrated in Fig. 1(a). For simplicity, we have considered only one direction of propagation having positive values of \(k_{x}\) and \(k_{z}\). The rotational phenomenon is depicted in this figure with the help of a dashed green curve around the elliptical trace. Also, we have taken four time-samples \(t=0\), \(t=T/8\), \(t=T/4\), and \(t=3T/8\), which correspond to the phases \(0\), \(\pi/4\), \(\pi/2\), and \(3\pi/4\), respectively. The \(\vec{P}_{inst}\) at these four time-instances are shown along the elliptical trace. At this point, we make an important observation from eq. (2) and Fig. 1(a). On time-averaging the instantaneous Poynting vector, the net real power component along the transverse direction vanishes, and only a real component remains along the plane of wave propagation. This non-zero real component of time-averaged power is represented using a red arrow in Fig. 1(a). However, on considering the complex time-averaged \(\vec{P}_{avg}\) of eq. (3), we find that there exists a purely imaginary transverse reactive component along \(\hat{y}\)-axis. The magnitude of this transverse reactive power component of \(\vec{P}_{avg}\) in eq. (3) is similar to the magnitude of the transverse component of \(\vec{P}_{inst}\) in eq. (2), and is important in defining the axis of the elliptical trace along \(\hat{y}\)-axis. Thus, we can say that, by considering both the real and reactive power components of the complex \(\vec{P}_{avg}\), the complete time-varying behavior of \(\vec{P}_{inst}\) can be understood. Considering the chosen ferrite medium, we now compute the time-averaged Poynting vector \(\vec{P}_{avg}\) for two frequencies \(f=6\) GHz and \(11\) GHz using eq. (3) while considering \(H_{0}=1\) for simplicity. These two frequencies correspond to the elliptical and hyperbolic regimes with positive and negative values of \(\mu^{\prime}\), respectively. The (\(\mu^{\prime}\), \(\kappa^{\prime}\)) values for these two frequencies are (\(1.79\), \(0.47\)), and (\(-1.38\), \(-2.62\)), respectively. In these two regimes, the values of \(\mu^{\prime}\) and \(\kappa^{\prime}\) are suitable for supporting two independent isofrequency surfaces[22]. Hence, with the help of these two cases, we can completely observe the power flow mechanism in the bulk of gyromagnetic media. The real part of \(\vec{P}_{avg}\) denoting the real power in the medium is shown using arrows over the isofrequency curves, and the imaginary component corresponding to the reactive power is represented using colormap in Fig. 1(b) and (c) for these two frequencies, respectively. We can observe the direction of the real component of \(\vec{P}_{avg}\) is always normal to the isofrequency curve. This geometrical relation of the time-average real-power propagation being normal to the isofrequency surface can be verified throughout the isofrequency surface covering all propagation directions. Further, we observe that the time-average Poynting vector for closed isofrequency curves such as ellipses diverges outward. Whereas, for open curves such as hyperbolas, the Real time-average Poynting vector tends to converge along the transverse axis of the hyperbola. This phenomenon of time-averaged Poynting vector (which only has a real component) being normal to the isofrequency surface has already been investigated for non-gyrotropic anisotropic media [48; 49; 50]. However, for gyrotropic material, the scenario is more complex with the presence of the imaginary component of \(\vec{P}_{avg}\), which accounts for the overall reactive power in the bulk of the medium. The transverse reactive power (aligned along \(\hat{y}\)-axis) for propagation along \(X-Z\) plane changes its sign for forward and backward propagation corresponding to \(+\hat{x}-\) and \(-\hat{x}-\)direction, respectively. In uniaxial media such as gyromagnetic ferrite, considering the symmetry of the isofrequency surface around the bias axis, the 2D analogous isofrequency curves portrayed the complete information. Now, since the transverse reactive power follows opposite sign along the opposite sides of the 2D isofrequency curve, the rotation of the isofrequency curve to form a complete isofrequency surface will give us an impression of the reactive power flux to encircle the bias axis of the gyromagnetic medium. This intrinsic reactive power in plane waves in the bulk of the medium results from the gyrotropy of the material. The imaginary reactive power component of eq. (3) is linearly proportional to the gyrotropy term \(\kappa^{\prime}\). Thus, we can infer that in the absence of gyrotropy, the reactive power in the bulk of the medium vanishes. In the next section, we shall quantify the rotation of the instantaneous Poynting vector \(\vec{P}_{inst}\). ## III Quantification of Poynting Vector Rotation The temporal behavior of \(\vec{P}_{inst}\) has a strong relation with the phasor form of complex time-averaged Poynting vector \(\vec{P}_{avg}\). Understanding the time-varying nature of \(\vec{P}_{inst}\) in complex media such as gyrotropic media is a challenge. To solve this challenge, we quantify the rotation of \(\vec{P}_{inst}\) using the definition of Poynting vector spin \(\vec{S}_{p}=\text{Im}(\vec{P}_{avg}^{*}\times\vec{P}_{avg})\). This method of defining the Poynting vector spin is similar to the way photonic spin is defined for the electric and magnetic fields as \(S_{e}=\text{Im}(\vec{E}^{*}\times\vec{E})\) and \(S_{h}=\text{Im}(\vec{H}^{*}\times\vec{H})\), respectively. Note that \(\vec{S}_{p}\) shall be non-zero only when \(\vec{P}_{avg}\) is complex, which signifies the quantification of the rotation of the instantaneous Poynting vector \(\vec{P}_{inst}\) using the real and reactive power components. Let us now consider a special condition of wave propagation in the bulk of YIG ferrite when the direction of propagation is along \(\pm\hat{x}\)-axis. Under this condition, one of the propagation modes corresponds to the transverse electric (TE) mode, supporting transverse electric \(E_{z}\), transverse magnetic \(H_{y}\), and longitudinal magnetic field components \(H_{x}\). In this special case of TE propagation, the \(\vec{S}_{p}\) becomes proportional to \(\vec{S}_{h}\) (see Appendix B), as electric field spin in this mode is absent. This absence of electric field spin simplifies our investigation as the computation of photonic spin corresponding to the magnetic field shall give us the Poynting vector spin \(\vec{S}_{p}\). We will now use this approach of quantification of rotation of the Poynting vector to investigate the real and reactive power propagation in ferrite-associated surface waves and waveguide modes. ## IV Poynting Vector Spin Across Ferrite-Air Interface Surface wave modes propagate along the interface between two media while being evanescent in the direction transverse to the interface. This surface wave confinement is typically realized in an interface between a dielectric and a medium supporting either negative permittivity[51; 52; 53] or permeability value[16; 54]. In [47], Kim et al. considered surface wave propagation along an interface between a dielectric and a negative permittivity medium. They showed the relation between the opposite sense of rotation of \(\vec{P}_{inst}\) and the direction of real power flow \(\text{Re}(\vec{P}_{avg})\) across the interface. However, such negative permittivity materials are reciprocal. Gyromagnetic ferrite supports negative \(\mu_{eff}=(\mu^{\prime 2}-\kappa^{\prime 2})/\mu^{\prime}\) corresponding to the TE propagation mode for a particular frequency region, which depends on the material parameters and applied bias. We can facilitate highly nonreciprocal surface wave propagation along a ferrite-air interface in this frequency range. We now consider two separate interfaces for investigating power flow for surface wave propagation: (a) negative permeability medium (NPM)-air and (b) ferrite-air interface. The medium parameters considered for the NPM-air interface are \(\epsilon_{np}=1\), \(\mu_{np}=-3\). Whereas, for the YIG ferrite, we consider material parameters given in Table 1, except the \(4\pi M_{s}\), which is taken as 3750 Gauss. The magnetic bias applied in the ferrite material is along the \(-\hat{z}\)-axis. For this material configuration of the YIG ferrite, the effective permeability in ferrite for TE mode of propagation \(\mu_{eff}\) becomes negative within a particular frequency band. We shall only consider this special frequency band in this investigation of surface wave modes along the ferrite-air interface. The schematic representation for the surface wave propagation along the NPM-air and ferrite-air interface is shown in 2(a) and (f), respectively. The \(X-Z\) plane is the plane of the interface at \(y=0\). The waves will encounter evanescence conditions along \(|y|>0\). Next, we see the dispersion relation in these two cases. First we find the field components \(E_{z}\), \(H_{x}\), and \(H_{z}\) corresponding to TE mode, The field components are assigned \(e^{\alpha_{y}y+j(k_{x}x-\omega t)}\) and \(e^{-\alpha_{y}y+j(k_{x}x-\omega t)}\) for \(y\leq 0\) and \(y\geq 0\), respectively. Here, \(k_{x}\) is the longitudinal propagation constant, \(\alpha_{yd}\) is the attenuation constant in the air, and \(\alpha_{yu}\) is the attenuation constant for the NPM and ferrite regions. Next, we find the dispersion relations by applying the boundary conditions. For the NPM-air interface, the dispersion curve is simpler, and we sion for \(k_{x}\) as \[k_{x}=\pm k_{0}\sqrt{\frac{\mu_{a}\mu_{np}(\epsilon_{np}\mu_{a}-\epsilon_{a}\mu_{ np})}{\mu_{a}^{2}-\mu_{np}^{2}}} \tag{4}\] whereas for the ferrite-air interface, we get the following characteristic equation \[H_{0}k_{0}(k_{x}(\alpha_{yu}+\alpha_{yd}\mu^{\prime})+\kappa^{\prime}(\alpha_{ yd}\alpha_{yu}-\epsilon_{f}k_{0}^{2}))=0 \tag{5}\] where \(\alpha_{yd}\) and \(\alpha_{yu}\) are functions of \(k_{x}\), \(k_{0}\) and the medium parameters (Appendix D). \(k_{0}\) is the free space wavelength. We now find the wave-vector value \(k_{x}\) for the propagation mode along the NPM-air interface and ferrite-air interface from eq. (4) and (5), respectively. The dispersion curves for these two cases are shown in Fig. 2(b) and (g), respectively. We observe that the dispersion relation for the NPM-air interface is linear and reciprocal, whereas there is significant nonreciprocity for the ferrite-air interface. Backward propagation (\(k_{x}<0\)) exists only for a small frequency region, and most of this special frequency region supports unidirectional surface wave propagation. We now select frequencies \(f_{1}=5\) GHz and \(f_{2}=15.31\) GHz to investigate further the Poynting vector corresponding to the NPM-air and ferrite-air interface, respectively. These frequencies are marked in the dispersion curves using dotted lines. The electric field component \(E_{z}\) for the NPM-air and ferrite-air interfaces are shown in Fig. 2(c) and (h), respectively. We can observe strong confinement along the interface for the surface mode in both cases. To investigate the time-varying nature of \(\vec{P}_{inst}\), we select five time-steps \(t=0\), \(0.08T\), \(0.16T\), \(0.34T\), and \(0.42T\). \(\vec{P}_{inst}\) corresponding to these time-steps are overlayed over the electric field plot. Interestingly, we observe a contrasting behavior for the sense of rotation of \(\vec{P}_{inst}\). The sense of rotation of the instantaneous Poynting vector is opposite across the NPM-air interface. Whereas, for the ferrite-air interface, we observe that \(\vec{P}_{inst}\) rotation follows the same sense. Using the expression for the field intensities, we find the analytical expressions for real and reactive components of the time-averaged Poynting vector. These real and imaginary power components for the NPN-air interface are (Appendix C) \[\text{Re}(\vec{P}_{avg})=\begin{cases}\frac{k_{0}^{2}\mu_{a}(\alpha_{yd}^{2}+ \epsilon_{a}k_{0}^{2}\mu_{a})e^{2\alpha_{yd}y}}{2\omega_{0}\epsilon_{x}\alpha_ {yd}^{2}}\hat{x}&\text{if }y\leq 0\\ \frac{k_{0}^{2}\mu_{np}(\alpha_{yd}^{2}+\epsilon_{np}k_{0}^{2}\mu_{np})e^{-2 \alpha_{yu}y}}{2\omega_{0}\epsilon_{x}\alpha_{yu}^{2}}\hat{x}&\text{if }y\geq 0 \end{cases} \tag{6}\] \[\text{Im}(\vec{P}_{avg})=\begin{cases}-\frac{k_{0}^{2}\mu_{a}e^{2\alpha_{yd}y }}{2\omega_{0}\sigma_{yd}}\hat{y}&\text{if }y\leq 0\\ \frac{k_{0}^{2}\mu_{a}e^{-2\alpha_{yu}y}}{2\omega_{0}\sigma_{0}}\hat{y}&\text{ if }y\geq 0\end{cases} \tag{7}\] and for the ferrite-air interface (Appendix D) \[\text{Re}(\vec{P}_{avg})=\begin{cases}\frac{k_{0}^{2}(\alpha_{yx}^{2}+ \epsilon_{a}k_{0}^{2})e^{2\alpha_{uy}y}}{2\omega_{0}\epsilon_{x}\alpha_{y}^{2 }}\hat{x}&\text{if }y\leq 0\\ \frac{k_{0}^{2}(\alpha_{yx}^{2}+\epsilon_{f}k_{0}^{2}\hat{y}^{\prime})(\alpha _{uy}\kappa^{\prime}+k_{x}\mu^{\prime})e^{-2\alpha_{phy}y}}{2\omega_{0}( \epsilon_{f}k_{0}^{2}\kappa^{\prime}-\alpha_{yu}k_{x})^{2}}\hat{x}\\ &\text{if }y\geq 0\end{cases} \tag{8}\] \[\text{Im}(\vec{P}_{avg})=\begin{cases}-\frac{k_{0}^{2}e^{2\alpha_{uy}y}}{2 \omega_{0}\alpha_{y}\hat{y}}&\text{if }y\leq 0\\ -\frac{k_{0}^{2}(\alpha_{yx}^{2}+k_{x}\mu^{\prime})e^{-2\alpha_{yn}y}}{2\omega _{0}(\epsilon_{f}k_{0}^{2}\kappa^{\prime}-\alpha_{yu}k_{x})^{2}}\hat{y}&\text{ if }y\geq 0\end{cases} \tag{9}\] From eq. (6) and (8), the real component of \(\vec{P}_{avg}\) is completely along \(\hat{x}\)-axis. This real component of \(\vec{P}_{avg}\) is Figure 2: Surface wave propagation along (a-e) NPM-air and (f-j) ferrite-air interfaces. (a,f) Schematic representation of the surface wave propagation, (b,g) dispersion curves, (c,h) Re(\(E_{z}\)) as colormap showing transverse electric field component for the surface wave and traces of \(\vec{P}_{inst}\), (d,i) Real power component Re(\(P_{avg}^{\rightarrow}\)), and (e,j) Poynting vector spin \(\vec{S}_{p}\) shown in Fig. 2(d) and (i) corresponding to the NPM-air and ferrite-air interfaces, respectively. Additionally, for the NPM-air interface, the analytical results are verified using CST Microwave Studio simulations. We observe that for the NPM-air interface, the real power propagates along opposite directions on either side of the interface. In contrast, the direction of real power propagation remains consistent for the ferrite-air interface. This behavior of real power flow being opposite in the two opposite sides across the NPM-air interface and in the same direction for the ferrite-air interface is in agreement with the corresponding sense of rotation of \(\vec{P}_{inst}\). This matching behavior of propagation direction of real power and rotational sense of \(\vec{P}_{inst}\) requires the computation of Poynting vector spin across the NPM-air interface (\(\vec{S}_{p,np}\)) and ferrite-air interface (\(\vec{S}_{p,f}\)). The analytical equations for \(\vec{S}_{p,np}\) and \(\vec{S}_{p,f}\) after normalizing the magnetic field spatially are \[\vec{S}_{p,np}=\begin{cases}-\frac{2\alpha_{yak}k_{x}(\alpha_{y}^{2}+\epsilon_ {a}\mu_{y}k_{0}^{2})}{\alpha_{y}^{4}+\epsilon_{a}^{2}\mu_{y}^{2}k_{0}^{4}+ \alpha_{y}^{2}(\epsilon_{a}k_{0}^{2}+2\epsilon_{a}\mu_{y}k_{0}^{2})}\hat{z}& \text{if $y\leq 0$}\\ \frac{2\alpha_{yak}k_{x}(\alpha_{y}^{2}+\epsilon_{a}\mu_{y}\mu_{y}k_{0}^{2})} {\alpha_{y}^{4}+\epsilon_{a}^{2}\mu_{y}^{2}k_{0}^{4}+\alpha_{yw}^{2}(k_{x}^{2 }+2\epsilon_{w}\mu_{np}k_{0}^{2})}\hat{z}&\text{if $y\geq 0$}\end{cases} \tag{10}\] \[\vec{S}_{p,f}=\begin{cases}-\frac{2\alpha_{yak}k_{x}(\alpha_{y}^{2}+\epsilon_ {a}k_{0}^{2})}{\alpha_{y}^{4}+\epsilon_{a}^{2}k_{0}^{4}+\alpha_{yw}^{2}( \epsilon_{a}k_{0}^{2}+\epsilon_{x}^{2})}\hat{z}&\text{if $y\leq 0$}\\ -\frac{2(\epsilon_{x}\ell_{x}k_{0}^{2}\kappa^{\prime}-\alpha_{yw}k_{x})( \alpha_{yw}^{2}+\epsilon_{x}k_{0}^{2}\mu^{\prime})}{\alpha_{yw}^{4}-2\alpha_{ yw}\ell_{y}^{2}\kappa^{\prime}k_{x}+\alpha_{yw}^{2}(k_{x}^{2}+2\epsilon_{x}k_{0}^{ 2}\mu^{\prime})+\epsilon_{y}^{2}k_{0}^{4}(\mu^{2}+\kappa^{\prime 2})}\hat{z}& \text{if $y\geq 0$}\end{cases} \tag{11}\] The subsequent plots of the Poynting vector spin \(\vec{S}_{p,np}\) and \(\vec{S}_{p,f}\) across the NPM-air and ferrite-air interfaces are shown in Fig. 2(e) and (j), respectively. As expected from the sense of rotation of \(\vec{P}_{inst}\), we observe the correspondence between the sign of \(\vec{S}_{p}\) and the direction of \(\text{Re}(\vec{P}_{avg})\). For the ferrite-air interface in the specific frequency range having negative \(\mu_{eff}\), \(\vec{S}_{p}\) maintains the same sign but a different magnitude across either side of the interface. The inequality of the magnitude of \(\vec{S}_{p}\) corresponds to the discontinuity of the real power propagation at the interface. Further, the \(\vec{S}_{p}\) will also indicate the propagation of imaginary reactive power across the interface. Previously, it was observed that the material-induced photonic spin is locked to ferrite material for bidirectional wave propagation[22]. Similarly, the Poynting vector spin also remains locked in the ferrite medium irrespective of the direction of propagation. In the next section, we consider a ferrite-filled rectangular waveguide and observe this phenomenon of opposite signs of \(\vec{S}_{p}\) for forward and backward real power propagation in the waveguide. ## V Power flow dynamics in ferrite-filled waveguide Previously, we have performed a detailed study of the role of gyrotropy in imposing a dominant photonic spin over the cross-section of a YIG ferrite-filled waveguide (section II.C of [22]). The dominant photonic spin results from the interaction of the material-induced and structure-induced photonic spins. Here, we understand the resulting effect of the gyrotropy over the power flow in the waveguide. Let us consider a rectangular waveguide of cross-section \(a=5\), \(b=3\) mm, and length \(l=30\) mm as shown in Fig. 3(a), which is filled with \(\hat{z}\)-biased YIG ferrite (specifications similar to Table 1). The operating frequency is selected as 7 GHz. The waveguide walls are assigned PEC boundary conditions to avoid losses. In this waveguide, \(\text{TE}_{10}\) is the dominant mode, and we shall select this mode for our power-flow analysis. Correspondingly, the field variations for the \(\text{TE}_{10}\) mode shall be along the broad dimension of the waveg Figure 3: (a) Illustration of a \(\hat{z}\)-biased ferrite waveguide with cross-section of dimension \(a\times b\). The homogeneous bias \(H_{0}\) is represented using red arrows. The direction of propagation is along \(\hat{x}\)-axis. (b) and (c) shows the \(\vec{S}_{p}\) and \(\text{Re}(\vec{P}_{avg})\) along the cross-section plane for \(\text{TE}_{10}\) mode. The plots include both analytical and simulation results. uide, i.e., along \(\hat{y}\)-axis. The field equations for this YIG ferrite-filled waveguide in \(\mathrm{TE}_{10}\) mode of operation can be written as \[E_{z}=\frac{A\omega\cos k_{y}y}{k_{y}}e^{jk_{x}x} \tag{12}\] \[H_{x}=\frac{jA(\kappa^{\prime}k_{x}\cos k_{y}y-\mu^{\prime}k_{y}\sin k_{y}y)}{k _{y}\mu_{0}(\kappa^{\prime 2}-\mu^{\prime 2})}e^{jk_{x}x} \tag{13}\] \[H_{y}=\frac{A(\mu^{\prime}k_{x}\cos k_{y}y-\kappa^{\prime}k_{y}\sin k_{y}y)}{k _{y}\mu_{0}(\kappa^{\prime 2}-\mu^{\prime 2})}e^{jk_{x}x} \tag{14}\] Similar to a dielectric-filled waveguide, we have \(k_{y}=\pi/a\) in this ferrite-filled waveguide. The normal component of magnetic flux density is zero on the surface of the PEC wall, i.e., \(\hat{y}\cdot\vec{B}=0\). This condition would also make the normal component of the magnetic flux density equal to zero along the waveguide wall in dielectric-filled waveguides. However, in a gyromagnetic ferrite medium, due to the off-diagonal terms of the permeability tensor, we have a non-zero normal component of magnetic field intensity along the waveguide wall at \(y=\pm a/2\). This non-zero component can be observed in eq. (14) by putting \(y=\pm a/2\). At \(y=\pm a/2\) we have \(\sin k_{y}y=\pm 1\) and \(\cos k_{y}y=0\). Hence, the non-zero \(H_{x}\) and \(H_{y}\) at the waveguide wall correspond to non-zero Poynting vector spin \(\vec{S}_{p}\) along the waveguide wall. The mathematical expression for the \(\vec{S}_{p}\) along the \(\hat{y}\)-axis of the waveguide is \[\begin{split}\vec{S}_{p}=&\frac{2A^{2}}{k_{y}^{2} \mu_{0}^{2}(\mu^{\prime 2}-\kappa^{\prime 2})^{2}}(k_{y}\mu^{\prime}\sin k_{y}y \\ &-k_{x}\kappa^{\prime}\cos k_{y}y)(k_{x}\mu^{\prime}\cos k_{y}y- k_{y}\kappa^{\prime}\sin k_{y}y)\hat{z}\end{split} \tag{15}\] Using this equation of \(\vec{S}_{p}\), we analytically compute the Poynting vector spin across the waveguide cross-section, which is further validated using CST Microwave Studio in Fig. 3(b). We observe a contrasting behavior of the Poynting vector spin in this ferrite-filled waveguide with respect to the conventional dielectric waveguide. In the latter case, an equal amount of positive and negative \(\vec{S}_{p}\) exists across the center of the cross-section of the waveguide. Whereas, in this ferrite-filled waveguide, the \(\vec{S}_{p}\) crosses the zero-line in two places. The first cross-over point is now offset from the center of the waveguide, and the second cross-over point (non-existent in the case of a dielectric-filled waveguide) exists in the other half of the waveguide. The analytical expressions for the first (\(y_{a}\)) and second cross-over (\(y_{c}\)) points along the cross-section are \[y_{a}=\frac{a}{\pi}\tan^{-1}\left(\frac{\kappa^{\prime}}{\mu^{\prime}}\sqrt{ \frac{\epsilon_{f}\mu_{eff}k_{0}^{2}a^{2}}{\pi^{2}}-1}\right) \tag{16}\] \[y_{c}=\frac{a}{\pi}\tan^{-1}\left(\frac{\mu^{\prime}}{\kappa^{\prime}}\sqrt{ \frac{\epsilon_{f}\mu_{eff}k_{0}^{2}a^{2}}{\pi^{2}}-1}\right) \tag{17}\] From eq. (16) and (17), we can understand the role of gyrotropy over this shift of the cross-over points. For a non-gyrotropic material, the gyrotropic parameter \(\kappa\) would be \(0\), which will lead to \(y_{a}=0\) and \(y_{c}=a/2\), similar to the case of a dielectric-filled waveguide. These cross-over points having \(\vec{S}_{p}=0\) exist due to the absence of transverse reactive power and longitudinal real power at \(y_{a}\) and \(y_{c}\), having \(H_{x}=0\) and \(H_{y}=0\), respectively. We are more interested in the second cross-over point \(y_{c}\) where the real power is absent, and beyond which \(\vec{S}_{p}\) makes a second transition into the negative region. This point is marked using dotted lines in Fig. 3(b). Correspondingly, we plot the real power propagation (\(\mathrm{Re}\vec{P}_{avg}\)) along the forward \(\hat{x}\)-axis of the waveguide using analytical equation and CST Microwave Studio in Fig. 3(c). The cross-over point \(y_{c}\) is marked using a dotted line. We observe that the region supporting additional negative \(\vec{S}_{p}\) beyond \(y_{c}\) contains a backward propagation of the real power. This backward power is absent for a dielectric-filled waveguide, as \(y_{c}\) itself shall lie in the outermost wall at \(a/2\). Using the expression of \(y_{c}\) in eq. (17), we have the means to engineer this cross-over point and alter the ratio of negative to positive propagation of real power. Figure 4: (a) 2D cross-section view of the waveguide depicting the arrangement of the real power and the null-power point \(y_{c}\). The variation of the null-power point \(y_{c}\) as a ratio with the half broad-side width \(a/2\), corresponding to (b) the variation of broad-side width \(a\) and (c) frequency of operation \(f\). Both analytical and simulation results are shown. In Fig. 4(a), the movement of the cross-over point \(y_{c}\) along \(\hat{y}\)-axis is graphically illustrated. The variation of the broad-width dimension \(a\) and the frequency of operation \(f\) changes \(y_{c}\). At this point, we have to note that the second cross-over point can be moved leftward maximum up to the center of the waveguide \(y_{c}=0\). At this point, there is an equal amount of forward and backward propagation of real power, and a cut-off condition is enforced in the waveguide. This gyrotropy-enforced cut-off condition was first reported in Section IV(C) of [22]. Figure 4(b) and (c) show the analytical and CST Microwave Studio plots of the ratio of \(y_{c}\) with half the broad width dimension \(a/2\) for variation of \(a\) and \(f\), respectively. We can see significant movement of \(y_{c}\) using variations of both parameters. Note that this zero cross-over point will be located on the other side of the waveguide for a backward propagating wave. The tunability of the cross-over point \(y_{c}\) will be beneficial for engineering nonreciprocal gyrotropic guiding structures. ## VI Conclusion This paper establishes the strong relationship between the phasor form of the time-averaged Poynting vector and its instantaneous form, which has an intrinsic sense of rotation in a bulk gyrotropic media. We capture the sense of rotation of the instantaneous Poynting vector using the Poynting vector spin, which can be computed using the complex time-averaged Poynting vector. Our results reveal the existence of reactive power components in the bulk of gyrotropic material for plane waves, which can be represented aptly by the Poynting vector spin. For the specific case of transverse electric mode propagation, the Poynting vector spin is shown to be identical to the photonic spin of the magnetic field component. The photonic spin of the magnetic field is induced by the gyrotropy. Further, we investigate the TE-guided modes (i) at the ferrite-air interface and (ii) in the ferrite-filled waveguide to reveal the impact of Poynting vector spin on the positive and negative power flow in these guiding structures. We show that the gyrotropy-induced asymmetry and nonreciprocity in Poynting vector spin and the resultant negative power flow in the ferrite-filled waveguide can be controlled by the waveguide and gyrotropic parameters. This work provides the analysis using which new avenues of designing gyrotropic material-based guiding-wave structures can open up. ## Appendix A Derivation of Poynting vector for bulk ferrite Using the mechanism used in [22], we find the magnetic field in the bulk of ferrite \[\vec{H}_{bf}=H_{0}(\hat{x}+((j\epsilon_{f}k_{0}^{2}\kappa^{\prime })/(k_{x}^{2}+k_{z}^{2}-\epsilon_{f}k_{0}^{2}\mu^{\prime}))\hat{y}\\ +((k_{x}k_{z})/(k_{x}^{2}-\epsilon_{f}k_{0}^{2}))\hat{z})e^{j(k_{ x}+k_{z}z-\omega t)} \tag{10}\] The corresponding electric field vector can be found using Maxwell's equation as \(\vec{E}_{bf}=-(1/(\omega\epsilon_{0}\epsilon_{f})\overleftrightarrow{k}\cdot \vec{H}_{bf})\) Now we find the time-averaged Poynting vector \(\vec{P}_{avg}\) as \[\vec{P}_{avg}=\frac{\vec{E}_{bf}^{*}\times\vec{H}_{bf}}{2} \tag{11}\] Using the instantaneous form of the magnetic field and electric field component, we can find the instantaneous Poynting vector \(\vec{P}_{inst}\) as \[\vec{P}_{inst}=\text{Re}(\vec{E}_{bf}e^{-j\omega t})\times\text{Re}(\vec{H}_{ bf}e^{-j\omega t}) \tag{12}\] The real and imaginary components of equation 11 give us the time average real and reactive power in the medium. In the instantaneous form, the time-varying nature of real power flow in the medium can be analyzed using eq. 12. ## Appendix B Poynting Vector and Photonic spin The rotation of the field intensity vector (\(\vec{H}\) or \(\vec{E}\)) can be properly measured using the definition of Photonic spin and mathematically expressed as \(\vec{S}_{h}=\Im(\vec{H}^{*}\times\vec{H})\) for magnetic field and \(\vec{S}_{e}=\Im(\vec{E}^{*}\times\vec{E})\) for the electric field. This definition of Photonic spin is analogous to computing the third Stoke's Parameter. Considering a transverse electric (TE) mode having field components \(E_{z}\), \(H_{x}\), and \(H_{y}\) propagating along \(\hat{x}\) axis, the photonic spin for the electric field is absent, and only \(\vec{S}_{h}\) remains. Further, the components of the time-averaged Poynting vector \(\vec{P}_{avg}\) for this TE mode will exist only in the \(\hat{x}\)\(\hat{y}\) axis, the latter being the reactive power direction. The time-averaged Poynting vector in phasor form can be expressed as \[\vec{P}_{avg}=\frac{1}{2}(\vec{E}^{*}\times\vec{H})=P_{x}\hat{x}+P_{y}\hat{y} \tag{13}\] Now, following an approach similar to computing the photonic spin, we can quantify the rotation of the instantaneous Poynting vector and call it Poynting vector spin (\(\vec{S}_{p}\)). As we have real and imaginary component of \(\vec{P}_{avg}\) along \(\hat{x}\) and \(\hat{y}\)-directions, respectively, \(\vec{S}_{p}\) can be defined as \[\begin{split}\vec{S}_{p}&=\Im(\vec{P}_{avg}^{*} \times\vec{P}_{avg})\\ &=\Im(P_{x}^{*}P_{y}-P_{y}^{*}P_{x})\\ &=0.253(-E_{x}^{*}H_{y}^{*}E_{z}H_{x}+E_{z}^{*}H_{x}^{*}E_{z}H_{ y})\\ &=0.25|E_{z}|^{2}\Im(H_{x}^{*}H_{y}-H_{y}^{*}H_{x})\\ &=0.25|E_{z}|^{2}\Im(\vec{H}^{*}\times\vec{H})\\ &=0.25|E_{z}|^{2}\Im_{h}\end{split} \tag{14}\] Since we only have non-zero \(H_{x}\) and \(H_{y}\) components, the photonic spin \(\vec{S}_{h}\) is along \(\hat{z}\)-axis, similarly, the Poynting vector spin \(\vec{S}_{p}\) is also along \(\hat{z}\)-axis. We have the Poynting vector spin \(\vec{S}_{p}\) related with the photonic spin \(\vec{S}_{h}\) as \(\vec{S}_{p}=0.25|E_{z}|^{2}\vec{S}_{h}\). As the photonic spin is normalized spatially, the term \(|E_{z}|\) does not hold any significant information, and we can safely relate \(\vec{S}_{p}\sim\vec{S}_{h}\). ## Appendix C Derivations for air-NPM interface To support the wave confinement in the interface \(y=0\), we define the TE mode field components as \[E_{z}=\begin{cases}\frac{jH_{0}k_{0}^{2}\mu_{\text{z}}e^{\alpha_{y}yd}}{\omega \omega_{\text{z}}\omega_{\text{z}}}&\text{if }y\leq 0\\ -\frac{jH_{0}k_{0}^{2}\mu_{\text{z}}e^{-\alpha_{y}yy}}{\omega\epsilon_{0} \alpha_{y}}&\text{if }y\geq 0\end{cases} \tag{10}\] \[H_{x}=\begin{cases}H_{0}e^{\alpha_{y}y}&\text{if }y\leq 0\\ H_{0}e^{-\alpha_{y}y}&\text{if }y\geq 0\end{cases} \tag{11}\] \[H_{y}=\begin{cases}-\frac{jH_{0}(\alpha_{y}^{2}+\epsilon_{x}k_{0}^{2}\mu_{ \text{z}})e^{\alpha_{y}y}}{\alpha_{y}k_{x}}&\text{if }y\leq 0\\ \frac{jH_{0}(\alpha_{y}^{2}+\epsilon_{y}k_{0}^{2}\mu_{\text{z}})e^{-\alpha_{y} yy}}{\alpha_{y}k_{x}}&\text{if }y\geq 0\end{cases} \tag{12}\] For simplicity, we omitted the basis function \(e^{k_{x}x-\omega t}\) from the above field equations. On application of the boundary condition of \(H_{x}\) being continuous at the interface \(y=0\), we obtain the dispersion relation as \[k_{x}=\pm k_{0}\sqrt{\frac{\mu_{\text{z}}\mu_{\text{z}}(\epsilon_{\text{z}} \mu_{\text{z}}-\epsilon_{a}\mu_{\text{z}})}{\mu_{\text{z}}^{2}-\mu_{\text{z} }^{2}}} \tag{13}\] Next, we compute the real (\(0.5\Re(\vec{E}^{*}\times\vec{H})\)) and reactive power (\(0.5\Im(\vec{E}^{*}\times\vec{H})\)) using the field equations (eq. 10-10) as \[0.5\Re(\vec{E}^{*}\times\vec{H})=\begin{cases}\frac{H_{0}^{2}k_{0}^{2}\mu_{ \text{z}}(\alpha_{y}^{2}d+\epsilon_{x}k_{0}^{2}\mu_{\text{z}})e^{2\alpha_{y}y }}{2\omega\epsilon_{0}k_{x}\alpha_{y}^{2}d}\hat{x}&\text{if }y\leq 0\\ \frac{H_{0}^{2}k_{0}^{2}\mu_{\text{z}}(\alpha_{y}^{2}+\epsilon_{x}\mu_{\text{ z}})\hat{x}(\alpha_{y}^{2}+\epsilon_{y}k_{0}^{2}\mu_{\text{z}})e^{-2\alpha_{y}yy}}{2 \omega\epsilon_{0}k_{x}\alpha_{y}}\hat{x}&\text{if }y\geq 0\end{cases} \tag{14}\] \[0.5\Im(\vec{E}^{*}\times\vec{H})=\begin{cases}-\frac{H_{0}^{2}k_{0}^{2}\mu_{ \text{z}}e^{2\alpha_{y}y}}{2\omega\epsilon_{0}\alpha_{y}d}\hat{y}&\text{if }y\leq 0\\ \frac{H_{0}^{2}k_{0}^{2}\mu_{\text{z}}e^{-2\alpha_{y}yy}}{2\omega\epsilon_{0} \alpha_{y}}\hat{y}&\text{if }y\geq 0\end{cases} \tag{15}\] We can simplify the time-averaged Poynting vector equation by substituting \(H_{0}=1\) A/m. Subsequently, we compute the Poynting vector spin \(\vec{S}_{p,f}=\Im(\vec{P}_{avg}^{*}\times\vec{P}_{avg})\) after normalizing the magnetic field vector \(\vec{H}\) at every spatial coordinate.
2307.01205
Heuristic Algorithms for RIS-assisted Wireless Networks: Exploring Heuristic-aided Machine Learning
Reconfigurable intelligent surfaces (RISs) are a promising technology to enable smart radio environments. However, integrating RISs into wireless networks also leads to substantial complexity for network management. This work investigates heuristic algorithms and applications to optimize RIS-aided wireless networks, including greedy algorithms, meta-heuristic algorithms, and matching theory. Moreover, we combine heuristic algorithms with machine learning (ML), and propose three heuristic-aided ML algorithms, namely heuristic deep reinforcement learning (DRL), heuristic-aided supervised learning, and heuristic hierarchical learning. Finally, a case study shows that heuristic DRL can achieve higher data rates and faster convergence than conventional deep Q-networks (DQN). This work provides a new perspective for optimizing RIS-aided wireless networks by taking advantage of heuristic algorithms and ML.
Hao Zhou, Melike Erol-Kantarci, Yuanwei Liu, H. Vincent Poor
2023-06-26T18:37:48Z
http://arxiv.org/abs/2307.01205v2
# Heuristic Algorithms for RIS-assisted ###### Abstract Reconfigurable intelligent surfaces (RISs) are a promising technology to enable smart radio environments. However, integrating RISs into wireless networks also leads to substantial complexity for network management. This work investigates heuristic algorithms and applications for optimizing RIS-aided wireless networks, including greedy algorithms, meta-heuristic algorithms, and matching theory. Moreover, we combine heuristic algorithms with machine learning (ML), and propose three heuristic-aided ML algorithms, namely heuristic deep reinforcement learning (DRL), heuristic-aided supervised learning, and heuristic hierarchical learning. Finally, a case study shows that heuristic DRL can achieve higher data rates and faster convergence than conventional DRL. This work aims to provide a new perspective for optimizing RIS-aided wireless networks by taking advantage of heuristic algorithms and ML. Reconfigurable intelligent surfaces, heuristic methods, machine learning, optimization. ## I Introduction Reconfigurable intelligent surfaces (RISs) have emerged as an attractive technology for envisioned 6G networks, enabling smart radio environments to improve energy efficiency, network coverage, channel capacity, and so on [1]. Many existing studies have demonstrated the capability of RISs, but incorporating RISs into existing wireless networks significantly increases the network management complexity. In particular, each small RIS element requires sophisticated phase-shift control, resulting in a very large solution space. In addition, due to their low hardware costs and energy consumption, RISs can be combined with other technologies, such as unmanned aerial vehicle (UAV) and non-orthogonal multiple access (NOMA), leading to joint optimization problems with coupled control variables. Therefore, efficient optimization techniques are crucial to realizing the full potential of RISs. Existing RIS optimization techniques can be categorized into three main approaches: convex optimization, machine learning (ML), and heuristic algorithms [2]. Convex optimization algorithms have been widely applied for optimizing RIS-aided wireless networks, e.g., using block coordinate descent for joint active and passive beamforming, and applying fractional programming to decouple the received signal strength with interference and noise [3]. However, RIS-related optimization problems are usually non-convex and highly nonlinear, which makes the application of convex optimization requires case-by-case analysis, resulting in considerable difficulties for the algorithm design. By contrast, ML algorithms have fewer requirements on problem formulations. For instance, when using a Markov decision process (MDP) for RIS passive beamforming, channel state information (CSI), RIS phase shifts, and objective functions are defined as states, actions, and rewards, respectively. Then, reinforcement learning algorithm is deployed to maximize the reward by trying different action combinations. However, ML algorithms are usually computationally demanding, which means many iterations and tedious model training. The low training efficiency may hamper the system efficiency and result in slow convergence [4]. Different from convex optimization and ML algorithms, heuristic algorithms obtain low-complexity solutions efficiently, trading optimality and accuracy for speed. There are multiple advantages of applying heuristic algorithms to RIS-aided wireless networks. Firstly, heuristic rules can be easily applied to various scenarios with few requirements on problem formulations. The high generalization capability indicates that heuristic algorithms can adapt well to different RIS applications, such as UAV-RIS, NOMA-RIS, and so on [5]. Moreover, heuristic algorithms are usually computationally efficient, which is crucial to handling real-time network dynamics. Additionally, heuristic algorithms are particularly useful for handling nondeterministic polynomial (NP)-hard problems, which exist in discrete RIS phase-shift design and resource allocation problems. Despite the advantages, heuristic rules produce only locally optimal solutions, e.g., particle swarm optimization (PSO) usually produces locally optimal results. Therefore, sub-optimality is considered to be the main issue with heuristic algorithms. However, these heuristic rules can be combined with other techniques, such as ML, offering a new opportunity for developing low-complexity and efficient algorithms. Convex optimization and ML approaches have been investigated by a few studies for optimizing RIS-aided networks. For example, Liu _et al._ overviewed joint beamforming and resource management problems [1], Pan _et al._ summarized optimization techniques on signal processing for RIS-aided wireless systems [3], and Faisal _et al._ focused on ML techniques for RIS-related applications [6]. In these studies, heuristic algorithms are usually considered as baselines or supplements. However, due to the low-complexity feature, heuristic algorithms may provide a new perspective for optimizing RIS-aided wireless networks. Moreover, heuristic algorithms can be combined with other techniques for joint optimization, enabling higher flexibility for RIS control. Our former work [2] summarized model-based, heuristic, and ML approaches for optimizing RIS-aided wireless networks, but combining heuristic algorithms with ML was not investigated, which provides the main motivation of this work. Unlike existing studies, this work systematically summarizes heuristic algorithms and their applications for optimizing RIS-aided wireless communications, and the main contributions are two-fold. First, we present comprehensive analyses of applying heuristic algorithms to RIS-aided wireless networks, including greedy algorithms, meta-heuristic algorithms, and matching theory. Then, we combine heuristic algorithms with ML approaches, and propose three novel heuristic-aided ML techniques: 1) combining greedy algorithms with deep reinforcement learning (DRL) to accelerate DRL model training; 2) using heuristic algorithms to produce fine-grained datasets for supervised learning; and 3) deploying heuristic algorithms in hierarchical learning for fast decision-making. Finally, a case study demonstrates that heuristic ML can achieve faster convergence and better network performance than conventional ML. ## II Heuristic Algorithms for Optimizing RIS-aided Wireless Networks This section introduces heuristic algorithms and applications for optimizing RIS-aided wireless networks, including greedy algorithms, meta-heuristic algorithms, and matching theory. ### _Greedy Algorithms_ Greedy algorithms have been widely used as heuristic optimization approaches in many problems, and Fig. 1 shows the application of greedy algorithms in RIS-related optimization problems. Phase-shift optimization is the core of RIS control, and one of the main challenges is that the phases of all RIS elements are usually jointly optimized to achieve global optimality. Instead of complicated joint optimization, greedy algorithms are applied as low-complexity alternative approaches. As shown in Fig. 1(a), it decides the optimal phase shift of one element at each time by observing the improvement on the objective function, while the phase shifts of other elements are fixed [7]. Such element-by-element greedy control is more efficient than convex optimization or ML algorithms with a linear time complexity. Let us take Fig. 1(b) as another example, which applies a greedy policy for RIS element on/off control. Specifically, it applies an element-wise greedy policy and decides the on/off status of one element at each step. Such sequential decision-making leads to a lower time complexity than optimizing all elements simultaneously. The last application in Fig. 1(c) is to apply the greedy local search for joint active and passive beamforming. Given an initial solution, the local search will find an optimal neighbouring solution that is better than the current solution, and the search is repeated solution-by-solution until it cannot find a better neighbouring solution. Despite the low complexity, Fig. 1(a) and (c) also reveal that greedy algorithms usually obtain locally optimal results. In Fig. 1(a), the greedy decisions (indicated by blue lines) are likely to be different from the globally optimal solution (shown by red lines). Similarly, since the search direction is dominated by the quality of neighbouring solutions, greedy local search in Fig. 1(c) may select a wrong searching direction and miss the global optimality. ### _Meta-heuristic Algorithms_ Meta-heuristic algorithms are iterative procedures that utilize specific heuristic strategies to guide the search process. As illustrated in Fig. 1(d), meta-heuristic algorithms can be classified in various ways, including population-based, trajectory-based, nature-inspired, evolutionary-based, swarm intelligence-based, and so on [8]. For example, genetic algorithm and PSO are well-known evolutionary-based and swarm intelligence-based meta-heuristic algorithms, respectively. One of the main advantages of meta-heuristic algorithms is the low-design difficulty. Fig. 1(e) includes an example of using PSO for RIS phase-shift control. The RIS phase shift is considered as the particle's position, and the optimization objective is defined as the fitness function. In PSO, the movement of each particle is guided by its own best-known position and the entire swarm's best-known position. Then, these particles will constantly update their positions until the best solution converges, achieving a near-optimal RIS phase-shift solution [9]. Similarly, genetic algorithm applies evolutionary strategies to select elite individual solutions, and produces new solutions by crossover and mutation, guaranteeing that each generation of solutions will be improved until convergence. Moreover, the trajectory-based Tabu search is used for joint active and passive beamforming in Fig. 1(f). Compared with the greedy search in Fig. 1(c), the main difference is that Tabu search can dynamically adjust the search direction by using the Tabu list, which is a set of rules and banned solutions to filter neighbouring solutions. Due to the Tabu rules, Tabu search can explore the solution space more efficiently than greedy search, which is also the main advantage of meta-heuristic algorithms over greedy algorithms. However, meta-heuristic algorithms can be sensitive to key parameter settings, e.g., mutation probability in genetic algorithm and inertia weight in PSO, which have a considerable impact on algorithm performance. ### _Matching Theory_ Matching theory is particularly useful for resource allocation and association problems, including sub-channel allocation, base station (BS)-RIS-user association, device-to-device (D2D)-user pairing in RIS-aided D2D networks, UAV association in RIS-UAV networks, and so on [10]. These problems are usually NP-hard and cannot be efficiently solved by conventional optimization algorithms. We consider matching theory as a heuristic algorithm because it applies heuristic rules for optimization, which is to find stable matching pairs and then exchange them. Matching theory converts the resource allocation into a matching problem, which includes two sets of players with utility functions. The core idea of matching theory is to switch matching pairs formed by two sets of players and maximize the total utility function. Consider one set of players are users and another set of players are RISs, and then matching theory is to manipulate the RIS-user associations to maximize the sum-rate or energy efficiency. Specifically, it searches for two exchangeable matching pairs that can improve their utility function by switching the associations, while other players' utility is unaffected. In addition, the association in wireless networks may affect the interference level and network delay. Therefore, the utility function of one player depends not only on its own preference, but also on the matching associations of other users. A similar framework may be applied to RIS-aided D2D and UAV networks as in Fig. 1(i), significantly reducing the optimization complexity. However, note that matching theory is an iterative approach to search for matching pairs, and the time cost may exponentially increase when more players are involved. ## III Heuristic-aided Machine Learning for Optimizing RIS-aided Wireless Networks Section II reveals that heuristic algorithms have lower complexity, but the obtained results may be locally optimal. Meanwhile, ML algorithms are the most state-of-the-art techniques for optimizing wireless networks, but tedious model training and slow convergence are well-known issues. There Fig. 1: Meta-heuristic algorithms for optimizing RIS-aided wireless networks. fore, this section investigates the combination of heuristic algorithms and ML techniques, combining each technique's advantages to obtain low-complexity and high-quality solutions. In the following, we propose three heuristic-aided ML algorithms, namely heuristic DRL, heuristic-aided supervised learning, and heuristic hierarchical learning. In addition, we analyze heuristic and heuristic-aided ML algorithms in terms of features, advantages, disadvantages, difficulties, and RIS optimization applications. ### _Heuristic Deep Reinforcement Learning_ One of the main difficulties of RIS phase-shift optimization is the large number of RIS elements, leading to a huge action space for reinforcement learning algorithms. Subsequently, a large action space will degrade the exploration efficiency and slow the convergence. To this end, we integrate heuristic algorithms with DRL to improve the exploration efficiency of DRL. In particular, as illustrated in Fig. 2, we first apply greedy algorithms for fast RIS phase-shift selection, which applies an element-by-element greedy policy for phase-shift selection as in Fig. 1(a). Then, we use the initial results of the greedy algorithm to find the actions that are most frequently visited in each RIS element, which becomes a reduced action space for this element. The reduced phase-shift space becomes a new action space for DRL, which considerably improves the exploration efficiency. Meanwhile, reducing the action space can mitigate the burden of neural network training, since there are fewer state-action values that need to be predicted. Meanwhile, heuristic algorithms can also be used for the action selection of DRL. Given current states, meta-heuristic algorithms can be used to search for local optimal actions to maximize the instant reward of the current episode. Compared Fig. 3: Heuristic-aided supervised learning for RIS phase-shift optimization. Fig. 2: Heuristic DRL for RIS phase-shift optimization. with random exploration such as \(\epsilon\)-greedy policy, heuristic exploration can better adapt to a large action space of RIS phase shifts. It is expected that heuristic DRL will save the model training efforts and accelerate the convergence of DRL. Such fast training is critical to capturing RIS-aided network dynamics and making real-time responses. However, note that heuristic rules may harm the optimality of DRL, since the potential optimal action may be eliminated by heuristic decisions. ### _Heuristic-aided Supervised Learning_ Supervised learning is another important approach for optimizing RIS-aided wireless networks. As shown in Fig. 3, it considers various inputs, e.g., CSI, pilot signals, and user positions, and then predicts RIS phase shifts or data rates. Supervised learning maps the input to the desired output, but it relies on fine-grained labelled datasets for model training. The accessibility of high-quality datasets may prevent the application of supervised learning, especially for complicated and highly-dynamic wireless environments. Many existing studies have used simulators to produce datasets by exhaustive search, but building specific scenarios in the simulator indicates extra complexity [11]. Another approach is applying convex optimization algorithms to produce data [12]. It generates high-quality datasets, but designing convex optimization algorithms requires dedicated analyses. Therefore, we propose to apply heuristic algorithms to overcome such dataset availability issues in supervised learning. Specifically, we deploy heuristic algorithms for dataset generation, which will be further used to train the supervised learning models. Due to the high generalization capability, heuristic algorithms can adapt fast to target scenarios and produce fine-grained datasets. Compared with the simulator-based method, the heuristic approach has higher flexibility and dataset quality. Meanwhile, it has lower complexity than using convex optimization to produce datasets. Given the dataset, various supervised learning models are selected, such as feedforward neural networks, recurrent neural networks, or convolutional neural networks. Finally, the trained neural networks are used for RIS phase-shift or data rate prediction. ### _Heuristic Hierarchical Learning_ RIS phase-shift optimization is usually a short-term instant decision, which requires fast responses to network dynamics. Meanwhile, other network policies may focus on long-term network performance. For example, compared with the instant RIS phase-shift optimization, the BS-RIS-user association is usually a long-term decision, or BS sleep control is also a long-term policy that depends on average traffic demand [13]. Given such short-term and long-term decisions, coordinating these network functions with different timescales is critical, which enables more flexible and intelligent network management. In addition, such hierarchical intelligence and decision-making may be further generalized to other scenarios such as open radio access network (O-RAN) [14]. Hence, Fig. 4 shows a hierarchical learning framework for the management of RIS-aided wireless networks. The lower layer indicates a sub-controller for instant decision-making in Fig. 4: Hierarchical framework with heuristic algorithms. each \(\Delta t\), and various heuristic algorithms may be applied, e.g., PSO for RIS beamforming and matching theory for resource allocation. These heuristic algorithms are deployed to produce short-term instant decisions due to their low complexity and high generalization capability. Then, the achieved network performance is sent to the ML-enabled meta-controller. Based on the average performance from \(t\) to \(t+N\Delta t\), the meta-controller applies ML algorithms to decide policies for the next \(N\Delta t\), e.g., BS sleep/active status, expected average power consumption level, and required quality of service level. ML algorithms can produce more stable results than greedy algorithms, which fits well with the requirement of long-term network management. Meanwhile, it can adjust the policies dynamically based on the feedback of the sub-controller, and the decision interval \(N\Delta t\) provides plenty of time for ML model training and tuning. With the hierarchical learning architecture, short-term and long-term decisions are jointly optimized, benefiting the overall coordination of RIS-aided wireless networks. Additionally, note that the sub-controller can also apply convex optimization or ML approaches without loss of generality. However, here we focus on heuristic algorithms due to their low-complexity features. ### _Comparison and Analyses_ Table I summarizes heuristic algorithms and heuristic-aided ML techniques, including features, advantages, disadvantages, difficulties and RIS optimization applications. Greedy algorithms make locally optimal choices at the current step of solving the problem, disregarding the effect on the following steps. Therefore, it achieves lower complexity but may produce poor results, since local optimality in each step cannot guarantee global optimality. Greedy algorithms are widely used for RIS phase-shift design and network management as low-complexity alternatives, e.g., element-by-element RIS phase-shift optimization and on/off control. Compared with greedy algorithms, meta-heuristic algorithms include more advanced policies to search for near-optimal solutions. The main advantage is the low design complexity, which indicates that control variables, constraints, and objectives can be easily transformed into the meta-heuristic context. Heuristic rules, such as Tabu list in Tabu search and evolutionary selection in genetic algorithm, can guarantee the quality of obtained solutions. However, some key parameters, e.g., mutation probability in genetic algorithm and acceptance probability in simulated annealing, will greatly affect the algorithm performance, which should be carefully selected. Matching theory specializes in solving resource allocation and association problems. These NP-hard allocation problems are transformed into matching pairs, and then improve the utility function by switching pairs. Matching theory is easy to implement with stable output, but it relies on extensive searching to improve the matching associations between players. Hence, matching theory may be time-consuming when involving more players, e.g., BS-RIS-user association problem with a considerable number of users. For heuristic-aided ML algorithms, heuristic DRL is expected to achieve faster convergence than conventional DRL by reducing the action space and heuristic exploration. However, one disadvantage is that the potential optimality may be degraded, which means that heuristic DRL trades optimality for speed and efficiency. Subsequently, how to reduce the action space efficiently without degrading the optimality is critical for heuristic DRL. Compared with conventional supervised learning, the main difference of the proposed heuristic supervised learning is to utilize heuristic approaches to generate high-quality datasets. It indicates that fine-grained datasets can be easily produced for specific scenarios, lowering the difficulty of supervised learning deployment. Finally, hierarchical learning provides a framework for joint network management. It coordinates short-term and long-term network control, enabling a more flexible management scheme. But the stability between the meta-controller and sub-controller should be carefully maintained. ## IV Case Study on RIS Phase-shift Control This section presents a case study on RIS phase-shift optimization with one multi-antenna BS and 5 single-antenna users. We assume that the direct link between BS and users is blocked by obstacles, and the BS-RIS-user channel follows the Rician fading. The BS-RIS distance is 50 m, and the RIS-user distance is randomly distributed between 50 to 60 m. We consider 10 RIS elements with 2 bits as a group to ease the computational complexity, which means each group makes the same phase-shift decision [15]. The simulations involve 4 algorithms, namely heuristic DRL, conventional DRL, exhaustive search, and random phase shift. Heuristic DRL has been introduced in Fig. 2, which applies greedy element-by-element phase-shift optimization to reduce the action space. Exhaustive search is the optimal baseline, while random phase shift means selecting phase shifts randomly. The simulation is implemented in MATLAB to obtain the average performance of 10 runs, and the results are shown in Fig. 5. Fig. 5(a) shows the sum-rate of all users under various algorithms and RIS numbers. When the number of RIS elements is 20 or 30, the action space is still small, and both heuristic and Fig. 5: Simulation results comparison conventional DRL can explore different phase-shift combinations and then find the optimal action. Therefore, heuristic and conventional DRL achieve comparable performance as the exhaustive search method, and the random strategy shows the worst performance. However, when the number of RIS elements increases to 40, the action space will exponentially expand. Then, the low sampling efficiency of conventional DRL has difficulty in exploring the whole action space, leading to degraded performance. In contrast, heuristic DRL applies greedy algorithms to reduce the action space, and then the compressed action space is explored efficiently. As a result, heuristic DRL still maintains a comparable sum-rate as the exhaustive search method. Moreover, Fig. 5(b) illustrates the convergence performance of heuristic DRL under various numbers of RIS elements. It shows that the average reward increases with iterations and fast convergence. In addition, Fig. 5(b) compares the convergence of heuristic DRL and conventional DRL, demonstrating that heuristic DRL obtains higher average reward and faster convergence than conventional DRL. Fig. 5(c) presents the runtime complexity of heuristic and conventional DRL. Note that the simulation time may be affected by the hardware settings of simulation platforms, and our simulation platform is MATLAB with Intel core i7-7770 CPU with 16 G memory size. Fig. 5(c) reveals a clear trend that heuristic DRL can save up to 60% lower simulation time than conventional DRL. The superiority can still be explained by the reduced action space, indicating that neural networks in heuristic DRL have less training and prediction burden. Finally, Fig.5(d) shows the performance trade-off of the proposed heuristic DRL algorithm. Specifically, it demonstrates that the percentage of phase-shift space reduction may affect the algorithm performance. The optimal space reduction percentage is around 0.7, which is used for obtaining simulation results in 5(a) and 5(b). In summary, the simulation results in Fig. 5 demonstrate that integrating heuristic algorithms with ML algorithms can achieve higher data rates and faster convergence. These combinations can make full use of each technique's advantage, providing a new perspective for optimizing RIS-aided wireless networks. ## V Conclusion It is anticipated that RIS technology may be a key enabler for 6G networks, and our work has investigated heuristic algorithms and their applications for RIS-aided wireless networks, including greedy algorithms, meta-heuristic algorithms, and matching theory. In addition, we have integrated heuristic algorithms with ML techniques, namely heuristic-aided ML. The case study shows that heuristic-aided ML can significantly improve the convergence of DRL and achieve higher data rates. In the future, we plan to study the performance of heuristic-aided hierarchical learning, and further integrate convex optimization, heuristic algorithms, and ML techniques for joint network management. ## Acknowledgment This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Canadian Collaborative Research and Training Experience Program (CREATE) under Grant 497981, the Canada Research Chairs Program, and the U.S National Science Foundation under Grant CNS-2128448.
2308.00946
Teaching Smaller Language Models To Generalise To Unseen Compositional Questions
We equip a smaller Language Model to generalise to answering challenging compositional questions that have not been seen in training. To do so we propose a combination of multitask supervised pretraining on up to 93 tasks designed to instill diverse reasoning abilities, and a dense retrieval system that aims to retrieve a set of evidential paragraph fragments. Recent progress in question-answering has been achieved either through prompting methods against very large pretrained Language Models in zero or few-shot fashion, or by fine-tuning smaller models, sometimes in conjunction with information retrieval. We focus on the less explored question of the extent to which zero-shot generalisation can be enabled in smaller models with retrieval against a corpus within which sufficient information to answer a particular question may not exist. We establish strong baselines in this setting for diverse evaluation datasets (StrategyQA, CommonsenseQA, IIRC, DROP, Musique and ARC-DA), and show that performance can be significantly improved by adding retrieval-augmented training datasets which are designed to expose our models to a variety of heuristic reasoning strategies such as weighing partial evidence or ignoring an irrelevant context.
Tim Hartill, Neset Tan, Michael Witbrock, Patricia J. Riddle
2023-08-02T05:00:12Z
http://arxiv.org/abs/2308.00946v2
# Teaching Smaller Language Models To Generalise ###### Abstract We equip a smaller Language Model to generalise to answering challenging compositional questions that have not been seen in training. To do so we propose a combination of multi-task supervised pretraining on up to 93 tasks designed to instill diverse reasoning abilities, and a dense retrieval system that aims to retrieve a set of evidential paragraph fragments. Recent progress in question-answering has been achieved either through prompting methods against very large pretrained Language Models in zero or few-shot fashion, or by fine-tuning smaller models, sometimes in conjunction with information retrieval. We focus on the less explored question of the extent to which zero-shot generalisation can be enabled in smaller models with retrieval against a corpus within which sufficient information to answer a particular question may not exist. We establish strong baselines in this setting for diverse evaluation datasets (StrategyQA, CommonsenseQA, IIRC, DROP, Musique and ARC-DA), and show that performance can be significantly improved by adding retrieval-augmented training datasets which are designed to expose our models to a variety of heuristic reasoning strategies such as weighing partial evidence or ignoring an irrelevant context. ## 1 Introduction We are inspired by recent progress with pretrained large Language Models (LLM), which when prompted with task demonstrations (Brown et al., 2020), instructions (Sanh et al., 2021; Wei et al., 2021; Ouyang et al., 2022) or reasoning chains (Wei et al., 2022), show an ability to answer questions unlikely to have been encountered during training. However a diversity of potential applications exist in the broad domain of reasoning systems and considerations such as latency, cost, energy efficiency, physical compute size and internet connectivity requirements are relevant in determining the most appropriate approach for a given situation. Rather than encoding all knowledge in the parameters of a LLM, an alternative approach has been to transform the original question-answering problem into a reading comprehension (RC) problem by retrieving relevant information for answering a particular query from an external corpus, and training a smaller1 model (QA Model) to reason over the concatenation of the query and retrieved information to derive an answer e.g. Chen et al. (2017). In this paper2 we extend retrieval methods as described in section 1.1 in conjunction with a supervised multitask pretraining regime for the QA Model involving 79 tasks for our baseline and 93 tasks for the improved model. Footnote 1: We define smaller language models as generative Transformer models with 400 million to 1 billion parameters, i.e those that are large enough to be effective reasoners whilst being able to perform training and inference with reasonable latency, cost and energy efficiency. Footnote 2: Code, models and datasets will be released at [https://github.com/timhartill/unseen_questions](https://github.com/timhartill/unseen_questions) The viability of this approach outside of fine-tuned settings is currently subject to limitations, both in the retrieval component, as discussed below, and with respect to the inabilities of smaller language models to perform the reasoning function as well as larger models. We aim to quantify performance limitations and evaluate mitigations for some of them. There are at least two significant challenges in retrieval to be overcome. Firstly, no matter how large the corpus is, there will always be missing information, particularly so in our setting where neither datasets nor corpus have been normalised such that sufficient information is in the corpus to make each question answerable through deductively valid means. Secondly, as long as humans ask questions with ambiguous references e.g. "Who is the spouse of the Green performer?" (Trivedi et al., 2022), retrieval will necessarily be imperfect even where sufficient knowledge exists in the corpus and the retrieval method is otherwise perfect. We evaluate a method for addressing these issues. Specifically, we measure the effect of adding datasets to our QA Model training regime that are designed to impart heuristic strategies for reasoning to a plausible rather than an entailed answer. We construct these datasets by building contexts for training questions using our retrieval system against a fixed corpus of English Wikipedia paragraphs. The resulting samples ("retrieval-augmented training datasets", abbreviated to _RATD_) are included in training our QA Model irrespective of whether they contain partial, full, or no evidence. Our approach carries the advantage that a diversity of reasoning strategies may be imparted. Such strategies include ignoring an irrelevant context completely or weighing partially evidential facts; e.g. reasoning toward answering "Do teenagers always rebel against their parents?" (Talmor et al., 2021) can be aided by the retrieval of knowledge that "Adolescents who have a good relationship with their parents are less likely to engage in various risk behaviours", even though there is no entailment implied. Generally our method is most applicable to question-answering tasks where the desired answer is short i.e. from a word to a short sentence, and the question itself does not come already supplied with a fully evidential context. We also assume that it is possible to retrieve sufficient information from our corpus so as to make a question answerable within a modest sequence length (we limit ourselves to a 512 token maximum) e.g. we are unlikely to be able to answer a question such as "How many songs have a person's name in the title?" even through retrieving every instance is theoretically possible. We focus our study on a set of unseen evaluation datasets that meet the following criteria: (1) Datasets collectively involve diverse textual and numerical reasoning strategies. (2) Questions are generally readily answerable by humans with access to a web browser and without specialised knowledge. (3) Questions tend to be compositional. (4) Relevant comparison with prior work exists. In regards to defining compositionality, others e.g. (Dankers et al., 2022; Hupkes et al., 2020) have noted challenges in singularly defining compositionality as it relates to NLP; for our purposes we pragmatically define a question as compositional if it is unlikely to be answerable by our QA Model with a memorised answer from a similar training example, and requires reasoning over a context by utilising at least one reasoning operation (e.g. conjunction) involving more than one textual fact, and/or at least one numerical operation involving more than one number. This criteria leads us to select six evaluation datasets: StrategyQA (Geva et al., 2021) contains commonsense samples requiring diverse multi-hop reasoning strategies. On average samples require content from 2.33 separate paragraphs to answer when considering retrieval from Wikipedia. Musique (Trivedi et al., 2022) is a multi-hop dataset focused on factual questions that require retrieval of content from two to four paragraphs. IIRC (Ferguson et al., 2020) contains questions where an initial paragraph is given and answers depend upon reasoning over this plus one to over four additional paragraphs that must be retrieved. ARC-DA (Bhakthavatsalam et al., 2021) is a question-only subset of ARC (Clark et al., 2018). The Worldtree database provides explanatory fact sets for ARC samples which average six facts per sample (Xie et al., 2020). DROP (Dua et al., 2019) is a RC dataset wherein answering each question requires numerical or temporal reasoning over a provided context to reach an often abstractive answer e.g. "How many field goals were scored in the first quarter?...The Lions scored first...with a 23-yard field goal...The Bucaneres tied it up with a 38-yard field goal...then took the lead...The Lions responded with a 28-yard field goal.." The answer is 3 which isn't explicit in the context. CommonsenseQA (Talmor et al., 2019) contains samples that are often unlikely to be answerable by finding a singular fact e.g. "I'm crossing the river, my feet are wet but my body is dry, where am I? (A) waterfall (B) bridge (C) valley (D) bank (E) island" is answered by considering knowledge related to each option. These datasets are discussed more fully in section 3.1. In addition to the possibility of answer leakage from directly memorised samples, it has been shown that models are able to utilise more subtle cues such as the writing style of a particular annotator who contributed to both train and test splits for better results than are achievable where the test split is truly independent of the training split (Geva et al., 2019). To minimise such issues as well as to facilitate comparison in a similar setting as other zero/few shot studies which have varying definitions of "seen-ness", we simply define an unseen question as one from an evaluation dataset that is disjoint from our training datasets. Against this definition, two of our evaluation datasets, ARC-DA and Musique, are "partially seen" as discussed further below. In summary the major contributions of this paper are: (A) We offer what is to our knowledge the most comprehensive set of baselines evaluating smaller Language Model zero-shot reasoning abilities published to date. (B) We show that augmenting the training regime with _RATD_ datasets significantly improves performance from the baselines. (C) We demonstrate that training for numerical literacy and unanswerability is brittle in the unseen setting in the absence of sufficiently similarly formatted training examples. (D) We propose effective extensions to the retrieval approach as described below. ### System Components and Related Work **Retrieval**. Chen et al. (2017) first used sparse retrieval, namely TF-IDF (Sparck Jones, 1972), against Wikipedia in the context of open domain question-answering. In dense retrieval, query and corpus documents are embedded into the same vector space and retrieval is typically performed through maximum inner product search (MIPS) over the resulting dense vectors. Several such approaches e.g. Karpukhin et al. (2020) focus on retrieving the single most relevant document sufficient for answering a single-hop query. Xiong et al. (2021) introduce multi-hop dense retrieval (MDR), to retrieve _multiple_ documents necessary to answer a Figure 1: Major system components: The Iterator (green boxes) and QA Model. An initial query for hop \(t\)=0 is input into the Retriever. The Reranker scores each of the retrieved \(k\) paragraphs and constituent sentences. Top-\(x\) sentences (Evidence Set\({}_{\leq t}\)) are selected from top-ranked sentences from the Reranker and from the prior hop Evidence Set\({}_{<t}\). The query \(+\) Evidence Set\({}_{\leq t}\) are input into the Evidence Set Scorer which computes an overall Evidence Set Relevance Score \(e\) and individual sentence relevance scores. Paragraphs associated with the top five sentences of Evidence Set\({}_{\leq t}\) are appended to the query and the process repeats t\({}_{\max}\) times. Finally, paragraph fragments recovered from the Evidence Set for hop t=\(\arg\max(e)\) are concatenated with the original query and input into the QA Model for answer generation. complex multi-hop question. They focus on the two-hop situation where a maximum of two documents are sufficient to answer a question. In this situation training samples are input to a shared question and document encoder as: (1) Input \(\langle q_{i}\rangle\) with an objective of minimizing distance to the vector representing \(d_{i,0}\) (hereafter denoted \(\langle q_{i}\rangle\to d_{i,0}\)), where \(q_{i}\) and \(d_{i,\text{t}}\) are the _i-th_ question and the _t-th_ supporting document of \(q_{i}\) to retrieve respectively. (2) Input \(\langle q_{i},d_{i,0}\rangle\to d_{i,1}\). We extend the MDR training regime and loss computation to enable retrieval of an arbitrary maximum number of documents i.e. \(\langle q_{i},d_{i,0},...,d_{i,\text{t}}\rangle\to d_{i,\text{t+1}}\). Wang et al. (2018) introduced the concept of a Reranker that refines retrieved results. IRRR (Qi et al., 2021) combined sparse retrieval and reranking into an iterative single model that can also answer multi-hop questions that have extractive answers. Baleen (Khattab et al., 2021), is also iterative but uses a dense retrieval system based upon encoding a dense vector per input token. Their two-stage condenser system comprises a Reranker that scores the relevance of each sentence for each retrieved document followed by an additional module that scores relevance of each sentence from the top-scoring sentences selected over multiple documents from the first stage. It then generates a compressed context of relevant sentences, to be utilised by a separate QA Model. We take inspiration from Baleen's two-stage approach but other than using our own retriever, we differ most notably in that we introduce an Evidence Set Score into the second stage with the goal of quantifying the sufficiency of the entire set of selected sentences for answering a query, in addition to scoring the relevance of individual sentences. Sparse retrieval offers the advantage that it can perform well in zero-shot settings where lexical overlap is sufficient to identify relevant documents. Several studies evaluate methods that improve the performance of dense retrieval models in zero-shot settings. A number of these use diverse unsupervised techniques involving creating queries and positive passages from unlabelled text e.g. (Lee et al., 2019; Ram et al., 2022; Izacard et al., 2022). In a different approach, Chen et al. (2021) trained a dense retriever to imitate a lexical-based model with good results. Thakur et al. (2021) created the BEIR benchmark to further the study of retrieval in the zero-shot setting and a number of more recent papers report results against this benchmark. We are unable to do so as some of our retriever training datasets are BEIR components, however we note as a future direction that our retriever training might benefit further from applying techniques that have been effective on BEIR. **Multitask Pretraining**. Raffel et al. (2020) showed that when trained using self-supervised pretraining followed by supervised multitask training, a single text-to-text Transformer model without task-specific architectural modification was capable of performing well on all the diverse tasks it had been trained upon. Since then, a number of studies have shown the efficacy of supervised multitask training in facilitating generalisation in question-answering tasks (Khashabi et al., 2020; Sanh et al., 2021; Wei et al., 2021; Khashabi et al., 2022). Orthogonal to our approach, many studies e.g. Sanh et al. (2021); Wei et al. (2021); Ouyang et al. (2022) make use of instruction-based tuning to facilitate generalisation. In order to focus on evaluation of differing training data regimes, we make use of a similar fixed prompting format to Khashabi et al. (2020, 2022) and utilise many of their converted QA datasets. Perhaps most similar to our work, Wang et al. (2022) combines multitask training over multi-choice datasets with external retrieval which they use to augment the training set. However their implementation diverges from ours in that they use sparse retrieval and then a fusion-based method similar to Izacard & Grave (2021) wherein multiple retrieved document vectors are used with gated cross-attention to focus on salient information. Their evaluation datasets are disjoint with ours and don't cover broader reasoning skills like numeracy, so comparison must be left to future work. Longpre et al. (2021) created a synthetic dataset by substituting entity names in existing dataset contexts and updating corresponding labels to produce new unfactual but logically consistent samples. They show that training on the new dataset plus the original causes their model to rely on reasoning over the context more, and less on knowledge encoded in parameters. Recently, Li et al. (2022) extended this approach to a fine-tuning framework for LLMs wherein the model is trained on relevant, irrelevant, and counterfactual but logically consistent contexts. Their approach differs from ours in that our _RATD_ datasets are constructed so as to encourage reasoning to a plausible conclusion whereas theirs are constructed with logical entailment in mind i.e. to ignore contexts where deductively valid reasoning is not possible in favor of knowledge stored in the LLM parameters. **Numerical Literacy**. Yoran et al. (2022), Pi et al. (2022) and Geva et al. (2020) all develop numeracy-focused pretraining datasets that we adapt and utilise. Generally these approaches have concentrated on finetuned settings and to our knowledge we are the first to study their performance against a diversity of unseen evaluation datasets. Recently Trivedi et al. (2022) released numeracy-focused pre-training datasets constructed using QDMR (Wolfson et al., 2020) decompositions. These were released too late for us to include in our pretraining regime but we report comparisons in Table 2. ## 2 Method We develop and train the Retrieval, Reranking, Evidence Set Scoring (collectively the "Iterator"), and QA model components separately as visualised in Figure 1. Comparisons with retrieval systems in our setting are limited since gold paragraph annotation does not exist. Moreover, excepting Khashabi et al. (2020, 2022) papers tend not to report zero-shot results for smaller language models such as the BART (Lewis et al., 2020) QA model we use. Therefore we initially evaluate the performance of components on in-domain settings with comparisons to strong prior work, and report results in this section. In subsequent sections we move to the major focus of our study, namely to evaluate our method of adding _RATD_ datasets to improve reasoning in the setting where questions are unseen, sufficient evidence to deductively answer a query may not be retrievable, and the model is too small to effectively answer open domain questions without a context to reason over. ### Retrieval For the retrieval component of the Iterator, as discussed above we extend MDR from a two hop maximum to enable training on samples with an arbitrary maximum number of hops (\(t_{\text{max}}\)). Training is over a mixture of datasets with questions involving one to four hops to answer; HotpotQA (Yang et al., 2018), Hover (Jiang et al., 2020), Natural Questions (Kwiatkowski et al., 2019), and Musique (Trivedi et al., 2022). Hence in practice we set \(t_{\text{max}}=4\). Multi-hop questions contain multiple possible reasoning paths through the labelled gold paragraphs, some of which the encoder is able to learn to generalise from ("learnable") and some not (Xiong et al., 2021). For example, given a set of supporting documents for a 4-hop \(q_{\text{i}}\) as \(\{d_{i,0},d_{i,1},d_{i,2},d_{i,3}\}\), semantic overlaps between \(q_{\text{i}}\) and the documents might enable learnable reasoning paths of \(\langle q_{\text{i}},d_{\text{i}},0_{\text{i}},1,d_{\text{i}},2,d_{i,3}\rangle\) or \(\langle q_{\text{i}},d_{\text{i}},1,d_{\text{i}},0_{\text{i}},d_{\text{i}},3, d_{i,2}\rangle\) but not \(\langle q_{\text{i}},d_{\text{i}},2,d_{\text{i}},0,d_{\text{i}},1,d_{\text{i}},3\rangle\) or others. Our training regime samples a learnable reasoning path and builds training samples for subsets; e.g. from \(\langle q_{\text{i}},d_{\text{i}},1,d_{\text{i}},0,d_{\text{i}},3,d_{\text{i }},2\rangle\) we would build four single-hop samples \(\langle q_{\text{i}}\rangle\to d_{\text{i},1}\), \(\langle q_{\text{i}},d_{\text{i},1}\rangle\to d_{\text{i},0}\), \(\langle q_{\text{i}},d_{\text{i}},1,d_{\text{i}},0\rangle\to d_{\text{i},3}\) and \(\langle q_{\text{i}},d_{\text{i}},1,d_{\text{i}},0,d_{\text{i}},3\rangle\to d _{\text{i},2}\). We based document sequencing for learnable reasoning paths for Musique on the decomposed reasoning steps provided with that dataset. For HotpotQA and Hover we used the ordering that was used in Xiong et al. (2021) and Khattab et al. (2021) respectively, while Natural Questions is treated as single-hop. For each training sample, positive documents from other training examples in the batch are used as negatives, to which are added two adversarially negative paragraphs specific to that question. Where adversarial negative documents were not otherwise available we created them from our Wikipedia corpus by taking the first paragraph of directly hyperlinked documents from each gold paragraph. Specifically, we used this strategy to create negative documents for Hover as well as to create additional negatives for Musique. We used adversarial negatives for HotpotQA and Natural Questions supplied from Xiong et al. (2021) and Karpukhin et al. (2020) respectively. Our objective function is similar to others e.g. (Xiong et al., 2021; Karpukhin et al., 2020). For hop \(t\) of the \(i-th\) training sample it models the probability of each next document given a query as: \[P(dvec_{\text{i,t+1}}|qvec_{\text{i,t}})=\frac{exp(dvec_{\text{i,t+1}}\cdot qvec _{\text{i,t}})}{\sum_{dvec\in D_{\text{i}}}exp(dvec\cdot qvec_{\text{i,t}})}\] Where \(qvec_{\text{i,t}}=enc(\langle q_{\text{i}},d_{\text{i,0}},...,d_{\text{i,t}}\rangle)\), \(dvec_{\text{i,t+1}}=enc(\langle d_{\text{i,t+1}}\rangle)\), \(enc\) is the shared encoder, \(qvec_{\text{i,t}}\) is the encoded query vector, \(dvec_{\text{i,t+1}}\) is the encoded next document vector, \(D_{\text{i}}\) is the set of positive and negative document vectors for \(q_{\text{i}}\) and \(\cdot\) denotes the inner product operation. ### Reranking and Evidence Set Scoring To refine retrieved documents we implement a two-stage system comprising Reranker and Evidence Set Scoring models. Both models were trained using a mixture of datasets that come with sentence-level annotations, namely HotpotQA, Hover and FEVER (Thorne et al., 2018). Training samples for the Reranker are built from learnable reasoning paths. For two-hop paths, samples are randomly built to one or two hops i.e. \(\langle q_{i},d_{i,0}\rangle\) to score \(d_{i,0}\) relevance, or \(\langle q_{i},d_{i,0},d_{i,1}\rangle\) to score \(d_{i,1}\). To remediate imbalance in hop distribution three and four hop samples are always built to the respective maximum hop count. Each query is paired with both a positive paragraph to be scored, and a substituted negative paragraph. The sampling function implements a form of shared normalization (Clark and Gardner, 2018) such that pairs are positioned in the same training batch. In the Reranker, a paragraph relevance score (\(p\)) in addition to individual sentence relevance scores (\(s_{\text{p}}\)) are learned. The objective function for each is binary cross-entropy with the overall loss being an unweighted summation. Intuitively, a high-scoring sentence in a relevant paragraph is more likely to be evidential than a high scoring sentence in an irrelevant paragraph. We manually observed that \(p\) is often more accurate than \(s_{\text{p}}\) and hence experimented with tuning a weight, \(w\), in a sentence scoring function \(s=wp+(1-w)s_{\text{p}}\). For in-domain datasets such as HotpotQA we found non-zero values of \(w\) that improved both sentence and paragraph recall by over 2%, and F1 score by over 6%, confirming our observation. However the optimal value of \(w\) varied between 0.0 and 0.9 over in-domain datasets and tuning \(w\) for any of our unseen datasets using their gold annotations would compromise our experimental setup. Hence we simply score each sentence in our main experiments as \(s=0.5p+0.5s_{\text{p}}\). For the second stage Evidence Set Scorer, at each hop \(t\) the Evidence Set\({}_{\leq\text{t}}\) is selected from top-ranked sentences from the Reranker and from the prior Evidence Set\({}_{<\text{t}}\), if any. The query and Evidence Set\({}_{\leq\text{t}}\) are input into the Evidence Set Scorer which scores evidence set relevance (\(e\)), and sentence relevance (\(s_{\text{e}}\)) in the context of the evidence set. The sentences for the \(t+1\) evidence set are selected by ranking according to \(0.5p+0.5s_{\text{e}}\) and then taking a maximum of five sentences that score over a threshold. The 0.5 coefficients were chosen after a similar evaluation as was done for the Reranker scoring function described above. We observed instances where the evidence set weakened as well as where it strengthened with additional hops, so we then take the evidence set from hop \(t=\arg\max(e)\) rather than assuming that \(t_{\max}\) always selects the best. We observed that a high-scoring sentence is sometimes contextualized by adjacent sentences and collectively they create a stronger rationale. Hence final context for each query, both for _RATD_ dataset creation and for creating context for unseen evaluation samples, is created by recovering a paragraph fragment for each selected sentence by prepending/appending the preceding and subsequent sentence from the associated full paragraph where these exist, and then concatenating the document title with the resulting fragment. Ordering of paragraph fragments is by \(0.5p+0.5s_{\max}\) where \(s_{\max}\) is the maximum Evidence Set Scorer sentence relevance score per paragraph. Using these paragraph fragments it is possible to fit contexts of approximately 6-7 paragraph fragments within a 512-token maximum sequence length. In the case of datasets such as IIRC (Ferguson et al., 2020) that provide an initial paragraph in addition to the question, the initial paragraph is prepended to the context. The Evidence Set Scoring model is trained with Evidence Sets built as combinations of positive and negative sentences, including replacing positive sentences with negative sentences from positive paragraphs and negative sentences from negative paragraphs. Each question is paired with both a fully evidential set of sentences and a partially evidential (or non-evidential) set of sentences sampled such that pairs are in the same training batch. The objective functions for both \(e\) and \(s_{\text{e}}\) are binary cross-entropy and as with the Reranker the final loss is an unweighted summation. The label for \(e\) is 1.0 if a subset of the Evidence Set is fully evidential, 0.0 otherwise. Further details of the Iterator components are in Appendix D. ### Iterator In-domain Evaluation We initially evaluate performance of the Iterator in an in-domain setting using the Hover development set against the HotpotQA Wikipedia Abstracts Corpus (Yang et al., 2018), since Hover contains samples with up to four hops and it is possible to compare against the published Baleen performance. Here the number of paragraphs retrieved on each hop (\(k\)) is 25. Results (Table 1) indicate that our Iterator is competitive with Baleen in this setting with our two-hop performance better but their four-hop performance dominating. A reason we are stronger overall than Baleen on EM while the reverse is true for F1 is due to our choice of ranking function - Baleen ranks sentences entirely using \(s_{\text{e}}\) whereas we utilise a linear combination of our Reranker paragraph score \(p\) and \(s_{\text{e}}\). Unsurprisingly our retriever performance is progressively better than MDR as the number of hops increases. Our main experiments below use a corpus consisting of English Wikipedia paragraphs from the August 1 2020 dump. Details are in Appendix C. ### QA Models A number of studies have shown the efficacy of supervised multitask training in facilitating generalisation in question-answering tasks (Khashabi et al., 2020; Sanh et al., 2021; Wei et al., 2021; Khashabi et al., 2022). We adopt this approach for training our QA models. To facilitate numerical computation we adapt the QA model tokenizer for digit tokenisation (Wallace et al., 2019; Geva et al., 2020) in all experiments. Noting that some of the numerical pretraining tasks take much longer to train to a reasonable degree of proficiency than our textual question-answering tasks, we continue training our QA models from their original pretraining checkpoint with two additional stages of multitask pretraining. #### 2.4.1 Stage 1 Pretraining In Stage 1 we train using tasks that are aimed at imparting by abstraction a diversity of foundational reasoning skills, with a bias towards simple numerical literacy. Specifically we utilise existing tasks from Yoran et al. (2022), Pi et al. (2022) and Geva et al. (2020) as well as some we create ourselves. Stage 1 training is on a total of 33 tasks. One of these is a version of the original self-supervised masked language modelling task which is sampled with probability \(\lambda=0.35\). The remaining tasks are sampled using an error-based sampling regime (Gottumukkala et al., 2020) whereby tasks with low accuracy in the previous validation step are oversampled in the subsequent training steps and vice-versa. #### 2.4.2 Stage 2 Pretraining In Stage 2, we add five open domain (i.e. question-only) question-answering tasks to the above foundational Stage 1 tasks (for 38 tasks in total, denoted _Group 1_). We add the open domain tasks with the primary aim of teaching the model about the expected form of answer for a given question type e.g. yes or no for "Could an Aardvark use a knife and fork?" noting that it has been shown that smaller models cannot learn such open domain tasks well (Lewis et al., 2021). To avoid the possibility of catastrophic forgetting, we continue to train on _Group 1_ in conjunction with a new set of tasks, _Group 2_, which is sampled with \begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model / \# of Hops-\(>\)**} & \multicolumn{3}{c|}{**Sentence EM**} & \multicolumn{3}{c}{**Sentence F1**} \\ & **2** & **3** & **4** & **All** & **2** & **3** & **4** & **All** \\ \hline Baleen 4-hop + FLIPR retriever & 47.3 & 37.7 & **33.3** & 39.2 & 81.2 & **82.5** & **80.0** & **81.5** \\ Iterator + MDR retriever & 64.6 & 39.3 & 14.8 & 40.1 & 81.7 & 72.1 & 59.0 & 71.4 \\ Iterator + our retriever & **66.7** & **45.4** & 27.5 & **46.8** & **82.5** & 75.7 & 68.7 & 75.8 \\ \hline \hline \end{tabular} \end{table} Table 1: In-domain Retrieval and Reranking Evaluation on Hover development set with \(k=25\). Baleen is finetuned on Hover, MDR is trained on HotpotQA, and our retriever is trained on a mixture of HotpotQA, Hover, Musique and Natural Questions. Group 2, described further below, contains tasks aimed at teaching more question-answering specific reasoning skills, with a bias towards RC datasets. Our purpose in having two groups is to enable us to implement differing sampling strategies within a single training regime. For _Group 1_ we utilise uniform sampling over all tasks and for _Group 2_ we use error-based sampling. This combination represents our solution to the issue noted in Yoran et al. (2022), namely that excessive oversampling will occur for tasks that the model cannot learn well. In addition we find uniform sampling useful for regulating the sampling of the tasks that the model has already learned in Stage 1. #### 2.4.3 _Base_ and _Base+Ratd_ Models We now discuss two resulting models, both continuing training from the best Stage 1 checkpoint and using the same _Group 1_ tasks but different in _Group 2_ tasks. The first, our _Base_ model, uses 41 tasks in _Group 2_ for an overall total of 79 tasks (38 _Group 1_ + 41 _Group 2_). _Group 2_ consists of a diverse range of question-answering datasets. Of note, to facilitate an ability to identify relevant information and perform deductively valid reasoning, for HotpotQA, Hover, FEVER, Musique, Natural Questions, CREAK (Onoe et al., 2021) and TriviaQA (Joshi et al., 2017), we construct fully evidential contexts with many irrelevant distractors using a combination of gold and distractor paragraph fragments such that we are as close to our maximum sequence length of 512 tokens as possible without truncating sentences. Since some evaluation samples have a label of "unanswerable", we also create versions of HotpotQA, Hover, FEVER and Musique by similar construction to the fully evidential samples but with key gold sentences or paragraphs removed. These are assigned an "unanswerable" label. For our second model, _Group 2_ consists of the 41 tasks in the above _Base_ Group 2 plus an additional 14 _RATD_ datasets for a total of 55 tasks. Our resulting _Base+RATD_ model is thus trained on a total of 93 tasks (38 _Group 1_ + 55 _Group 2_). As described above, the _RATD_ dataset contexts are constructed using our Iterator against the full Wikipedia corpus. Recalling that none of our original datasets are normalised against the version of Wikipedia we use, the resulting contexts are noisy, often containing partial or no relevant evidence and many distractors. We hypothesise that the utility of these is to impart a variety of heuristic strategies using a context form similar to that which our downstream unseen evaluation datasets will have. Thus our _Base+RATD_ model may be equipped for reasoning to a plausible answer from partial information as well as the deductively valid answer derivable for the majority of datasets used to train the _Base_ model. Details of all datasets utilised in QA Model training are in Appendix E. #### 2.4.4 **QA** Model In-domain Evaluation For comparison with related approaches, we fine-tune our models on DROP (Dua et al., 2019) and separately on IIRC\({}_{\text{G}}\) and IIRC\({}_{\text{R}}\)(Ferguson et al., 2020). IIRC\({}_{\text{G}}\) is an oracle setting, with context consisting of gold sentences and surrounding text. IIRC\({}_{\text{R}}\) has a retrieved context using respective retrieval methods from each \begin{table} \begin{tabular}{l c c c} \hline **Pretraining Regime** & **DROP** & **IIRC\({}_{\text{G}}\)** & **IIRC\({}_{\text{R}}\)** \\ \hline POET-SQL (BART)\({}^{a}\) & 82.2 & & \\ PREasM (T5-large)\({}^{b}\) & 72.3 & 75.0 & 45.1 \\ PREasM w/digit tok. (T5-large)\({}^{c}\) & 80.0 & 73.3 & 40.9 \\ PREasM + Teabreca (T5-large)\({}^{d}\) & 83.2 & 77.9 & 47.6 \\ Teabreca (T5-3B)\({}^{d}\) & **86.7** & 79.5 & 51.0 \\ Ours: _Base_ (BART) & 79.2 & **80.2** & **53.6** \\ Ours: _Base+RATD_ (BART) & 79.6 & 80.1 & 52.8 \\ \hline \end{tabular} \end{table} Table 2: Comparison of our QA Model performance to related pretraining methods in _finetuned_ setting on DROP dev set and IIRC test set (F1). \({}^{a}\) Pi et al. (2022); \({}^{b}\) Yoran et al. (2022) trained without digit tokenisation; \({}^{c}\) from Trivedi et al. (2022b) wherein PREasM is retrained with digit tokenisation; \({}^{d}\) Trivedi et al. (2022b). study. As shown in Table 2 we are competitive with other approaches in this in-domain setting: We are slightly behind on DROP compared to POET (Pi et al., 2022) and Teabreac (Trivedi et al., 2022), however we are state of the art on IIRC\({}_{\text{G}}\) and IIRC\({}_{\text{R}}\). ## 3 Experiments Our experiments are aimed at answering three main research questions: **R1.** What is the impact of adding _RATD_ datasets to the QA Model _Base_ training regime? **R2.** How effective is pretraining for numerical literacy in the unseen setting for smaller language models? **R3.** What is the performance differential between our QA model with differing evaluation dataset context configurations and high-performing comparisons in a similar unseen setting? For each evaluation dataset, where possible we report our results against other zero/few-shot work. If known, we also report the current state of the art. As applicable for each dataset we report results without retrieval, with our retrieval (denoted Dataset\({}_{\text{R}}\)), and with gold context (denoted Dataset\({}_{\text{G}}\) or similar). To facilitate comparison against prior work on DROP (Dua et al., 2019) and IIRC (Ferguson et al., 2020) we use the numeracy-focused F1 calculation introduced in Dua et al. (2019) whereby if the gold label is a number, the predicted answer must contain that number irrespective of other token overlap. For consistency we retain this method for reporting F1 for other datasets noting this is equivalent to standard F1 where the gold answer is not a number and disadvantageous to our results where the gold answer is a number. For datasets with binary labels we adopt the calculation used in Khashabi et al. (2020) where to count as a match the predicted answer must appear in the gold label and the opposing answer must not. For multi-choice evaluation, we take the option with the highest overlap with the predicted answer and then score as exact match. Where comparing performance of our _Base_ against _Base+RATD_ models we use the paired bootstrap test (Efron and Tibshirani, 1993) to test for statistical significance (p \(<\) 0.05). We report results against the dataset splits commonly reported by relevant prior work. ### Unseen Evaluation Datasets **StrategyQA**(Geva et al., 2021) contains binary-labeled commonsense samples requiring a diversity of multi-hop reasoning strategies. The form of questions is generally implicit, meaning they do not leak information as to how they could be decomposed (e.g. "Did Aristotle use a laptop?" versus "Was Aristotle alive at the time that laptops were invented?"). Many samples involve reasoning to a plausible rather than an entailed conclusion even where gold paragraphs are provided (Liang et al., 2022) e.g. "Is greed the most prevalent of the Seven Deadly Sins?". To facilitate comparison with other zero-shot approaches we use the full training set for evaluation as per BIG-bench (Srivastava et al., 2022) (denoted SQA for question-only and SQA\({}_{\text{R}}\) for question plus our retrieval). We also report results with two forms of gold context; using the provided summary notes which have a short paragraph, rationale-like form (SQA\({}_{\text{GF}}\)), and using the full paragraphs from each of three annotators (SQA\({}_{\text{GP}}\)) - for brevity we report the mean score over the three gold paragraph sets. **CommonsenseQA**(Talmor et al., 2019) (CSQA) is a 5-way multi-choice dataset of commonsense questions derived from Conceptnet (Speer et al., 2017). Many of the questions involve commonsense knowledge that is unlikely to be retrievable from a generic corpus ("Where on a river can you hold a cup upright to catch water on a sunny day"). However retrieving specific related examples such as "At the river, I filled my cup at a waterfall" may sometimes be possible (Piktus et al., 2021). CSQA augmented with our retrieval is denoted CSQA\({}_{\text{R}}\). **DROP**(Dua et al., 2019) is a RC dataset wherein answering each question requires simple numerical or temporal reasoning. Questions only make sense with the provided gold paragraph so we do not perform retrieval. Answers may be numbers, dates or text spans. **IIRC**(Ferguson et al., 2020) contains questions where an initial paragraph is given and answers depend upon this plus additional paragraphs that must be retrieved. Each sample is provided with links to all supporting documents, and prior work leverages these to restrict the number of documents to be retrieved from. We instead use our Iterator to augment samples from the full Wikipedia corpus using the concatenation of question and initial paragraph as the query, without reference to the given links (IIRC\({}_{\text{R}}\)). We also report comparison against an oracle context (IIRC\({}_{\text{G}}\)). Answers may be numbers, binary, text spans or labeled unanswerable. For IIRC\({}_{\text{G}}\) unanswerable samples, we construct contexts using the initial paragraph fragment plus 1-2 random distractor paragraphs. **ARC-DA**(Bhakthavatsalam et al., 2021) is a question-only subset of ARC (Clark et al., 2018) where questions have been re-worded to make sense in an open domain context. The original multichoice versions of ARC are part of our training regime, hence compositionality is doubtful and samples are only partially unseen in the sense that the question format is different (and we use the test split). Nonetheless we report results in the interests of exploring diversity. We experiment with Iterator-augmented (ARCDAR\({}_{\text{R}}\)) versions as well as with a gold context that we construct from Worldtree (Xie et al., 2020) (ARCDAR\({}_{\text{G}}\)). **Musique**(Trivedi et al., 2022) is a multihop dataset constructed by combining single-hop questions from existing datasets including SQuAD (Rajpurkar et al., 2016) which is also part of our training regime. Moreover we utilise the training split of Musique in both our retriever and QA model training. However the provided development split has been constructed such that for all samples no single hop question, answer, or associated paragraph is common to the corresponding element of any training sample. Therefore we construct a new development set from the training set and experiment with the original Musique development split as "partially seen", this time where the form of questions is "seen" but the exact questions are not. Prior work generally uses specialised retrieval for Musique where selection is from the set of gold and distractor paragraphs provided for each sample. We experiment with our retrieval (Musique\({}_{\text{R}}\)), and with a gold context constructed from gold paragraphs (Musique\({}_{\text{G}}\)). ### Models The Retriever component of the Iterator is built upon RoBERTa-base (Liu et al., 2019) and both the Reranker and Evidence Set Scorer use ELECTRA-large (Clark et al., 2020). Unless noted otherwise, all results are reported against the same two final QA Models which are based on BART (Lewis et al., 2020). All models use the the Huggingface (Wolf et al., 2020) implementations. ### Experimental Results #### 3.3.1 StrategyQA and CommonsenseQA _Base+RATD_ significantly outperforms _Base_ on StrategyQA (Table 3). On SQA\({}_{\text{R}}\) (which uses our retrieved contexts) our much smaller _Base+RATD_ model slightly exceeds performance of the two 11 billion parameter models and is comparable with OPT 175B (Zhang et al., 2022). Our _Base_ model fails to improve with SQA\({}_{\text{GP}}\) (which has contexts of gold paragraphs) versus the question-only SQA version. The improvement on SQA\({}_{\text{GP}}\) with the addition of _RATD_ draws attention to the fact that outside of our _RATD_ datasets the majority of our multihop training samples are aimed at imparting deductively valid forms of reasoning which, as noted above, are often inapplicable for SQA\({}_{\text{GP}}\). As described in section 3.1, the contexts of SQA\({}_{\text{GF}}\) are of a condensed, rationale-like form, distinct from the standard verbose paragraph form of SQA\({}_{\text{GP}}\). Model performance on SQA\({}_{\text{GF}}\) hugely outperforms our other configurations. This shows that with a context of a form the model has learned to reason with, it is possible to solve challenging implicit questions. As to where our models may have learned to reason with this short context form we note that some of the training datasets contain similar short form contexts e.g. BoolQ (Clark et al., 2019), which like StrategyQA has binary labels. Our _Base_ model has 84.9% development set accuracy on BoolQ. As Table 4 shows, augmenting CommonsenseQA samples with retrieval (CSQA\({}_{\text{R}}\)) yields mixed results. Others e.g. Piktus et al. (2021) have observed that the best zero/few shot performance on this type of dataset has been achieved with much larger models rather than external retrieval and our analysis bears this out. The addition of extra reasoning strategies via the _RATD_ datasets is more successful; as with StrategyQA, performance on CommonsenseQA is improved with the _Base+RATD_ model. #### 3.3.2 DROP and IIRC As with PaLM, our _Base_ and _Base+RATD_ models are trained using digit tokenization. On DROP both our models outperform all models not trained using this method including GPT3 175B and InstructGPT 175B (Ouyang et al., 2022) (Table 5). Performance of our models approaches that of PaLM 8B and PaLM 540B in the zero shot setting but both are superior to ours with a 5-shot prompt. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Params** & **Base** & **Base+RATD** \\ \hline Random & & & 20.0 & 20.0 \\ Prior work (finetuned)\({}^{a}\) & 418M & 91.2 & \\ PaLM - 0/5 shot\({}^{b}\) & 540B & 69.2/81.5 & \\ GPT3 - 0/few shot\({}^{c}\) & 175B & 81.5/85.0 & \\ UnifiedQA v1\({}^{d}\) & 11B & 76.2 & \\ PaLM - 0/5 shot & 8B & 66.0/77.6 & \\ GPT3 - 0/few shot & 760M & 61.8/62.7 & \\ UnifiedQA v1 & 770M & 60.9 & \\ Ours: CSQA & 440M & 61.1 & **64.0** \\ Ours: CSQA\({}_{\text{R}}\) (Our retrieval) & 440M & 62.4 & **63.6** \\ \hline \hline \end{tabular} \end{table} Table 4: CommonsenseQA development set performance comparison (Accuracy). CommonsenseQA contains multi-choice commonsense questions. Bold figures denote the better of our two models. _Base+RATD_ improvement is statistically significant for CSQA but not for CSQA\({}_{\text{R}}\) (adding retrieved context improves _Base_ but not _Base+RATD_). \({}^{a}\) Xu et al. (2021); \({}^{b}\) Chowdhery et al. (2022); \({}^{c}\) Brown et al. (2020); \({}^{d}\) Khashabi et al. (2020) \begin{table} \begin{tabular}{l r r r} \hline \hline **Model** & **All Ans. Types** & **Numeric Ans. Only** \\ \hline Two Stage: +DT +NumLit & **40.0** & **25.4** \\ One Stage: +DT +NumLit & 38.2 & 22.9 \\ Two Stage: -DT +NumLit & 34.7 & 16.6 \\ One Stage: +DT -NumLit & 29.0 & 11.2 \\ \hline \hline \end{tabular} \end{table} Table 6: DROP development set (F1). Ablative results on our QA models trained using _Base+RATD_ datasets trained in one or two stages, with/without digit tokenization (+/-DT), and with/without numerical literacy training datasets (+/-NumLit). Note that the -NumLit setting is only relevant for single-stage training. \begin{table} \begin{tabular}{l r r} \hline \hline **Model** & **Params** & **Base** & **Base+RATD** \\ \hline Prior work: IIRC\({}_{\text{R}}\)\({}^{a}\) & 123M & 51.6 \\ Ours: Finetuned IIRC\({}_{\text{R}}\) (Our retrieval)\({}^{b}\) & 440M & 53.6 \\ Ours: IIRC\({}_{\text{R}}\) (Our retrieval) & 440M & 23.8 \\ \hline Ours: Finetuned IIRC\({}_{\text{G}}\) (Gold context)\({}^{b}\) & 440M & 80.2 \\ Ours: IIRC\({}_{\text{G}}\) (Gold context) & 440M & **59.6** \\ \hline \hline \end{tabular} \end{table} Table 7: IIRC test set evaluation (F1). IIRC tests diverse reasoning requiring retrieval. Both _Base_ to _Base+RATD_ comparisons are statistically significant. \({}^{a}\) Ferguson et al. (2022) use a finetuned QA model and specialised retrieval with corpus restricted to documents linked from each initial paragraph. \({}^{b}\) To the best of our knowledge our _Base_ model finetuned on IIRC\({}_{\text{R}}\) and separately on IIRC\({}_{\text{G}}\) are both SOTA at the time of writing so we report these given unavailability of unseen comparisons. \begin{table} \begin{tabular}{l r r} \hline \hline **Model** & **Params** & **Base** & **Base+RATD** \\ \hline PaLM - 0/5 shot\({}^{a}\) & 540B & 43.7/70.8 \\ GPT3 - 0/few shot\({}^{b}\) & 175B & 23.6/36.5 \\ InstructGPT PPO+ptx - 0/few shot\({}^{c}\) & 175B & 15.2/33.3 \\ UnifiedQA v1\({}^{d}\) & 11B & 32.5 \\ PaLM - 0/5 shot & 8B & 45.1/69.4 \\ UnifiedQA v1 & 770M & 24.6 \\ GPT3 - 0/few shot & 760M & 14.4/24.0 \\ Ours & 440M & **40.7** & 40.0 \\ \hline \hline \end{tabular} \end{table} Table 5: DROP development set performance comparison (F1). DROP primarily tests numeracy in reading comprehension. Reduced performance on _Base+RATD_ versus _Base_ is statistically significant. \({}^{a}\)Chowdhery et al. (2022); \({}^{b}\)Brown et al. (2020); \({}^{c}\)Ouyang et al. (2022); \({}^{d}\) Khashabi et al. (2020) or more frequently such relevant information as to make it _appear_ answerable. Similar to the numerical computation issue, adding sufficiently similar training examples via finetuning enables the model to distinguish unanswerable samples. Appendix G illustrates failure cases for numeric and unanswerable types. #### 3.3.3 ARC-DA and Musique Table 9 shows model performance on our "partially seen" datasets, ARC-DA and Musique. On ARC-DA, adding _RATD_ datasets significantly improves results in both retrieved and gold settings. By contrast, Musique performance significantly degrades with _Base+RATD_. Noting that Musique is the only evaluation dataset for which we create _RATD_ datasets, we hypothesise that in the case of highly similar training examples to particular evaluation samples, the model prediction is the memorised answer of a similar training example. We confirm this by examining the predicted answers of the 1,670 Musique evaluation samples that scored 0 F1 against _Base+RATD_. Of these the predicted answers of 716 samples are an exact match to a Musique training sample gold answer (e.g. "Who is the spouse of the Green performer?" is incorrectly answered as "anna gordy gave" because this is the label to a number of training questions of "Who is the \begin{table} \begin{tabular}{l l r r} \hline \hline **Dataset** & **Ans. Type** & **Base+RATD** & **Finetuned** \\ \hline DROP & Span (2962) & 67.4 & 82.3 \\ & Multi-span (567) & 42.0 & 72.2 \\ & Num (5850) & 25.4 & 79.0 \\ & Date (157) & 62.4 & 74.0 \\ \cline{2-3} & All (9536) & 40.0 & 79.6 \\ \hline IIRC\({}_{\text{G}}\) & Span (544) & 59.8 & 74.3 \\ & Binary (66) & 57.1 & 64.7 \\ & Num (277) & 2.9 & 67.4 \\ & No answer (414) & 92.8 & 98.8 \\ \cline{2-3} & All (1301) & 58.1 & 80.1 \\ \hline IIRC\({}_{\text{R}}\) & Span (544) & 48.9 & 44.8 \\ & Binary (66) & 68.2 & 57.6 \\ & Num (277) & 3.8 & 41.5 \\ & No answer (414) & 2.6 & 69.9 \\ \cline{2-3} & All (1301) & 25.5 & 52.8 \\ \hline \hline \end{tabular} \end{table} Table 8: Breakdown by answer type on DROP development set and IIRC test set (F1). Sample counts are in brackets. Finetuned models are trained from the _Base+RATD_ checkpoint. \begin{table} \begin{tabular}{l r r r} \hline \hline **Model** & **Params** & **Base** & **Base+RATD** \\ \hline UnifiedQA+ARCDA/MC with IR\({}^{a}\) & 11B & 61.4 & \\ Ours: ARCDA (Our retrieval) & 440M & 28.8 & **31.6** \\ Ours: ARCDA\({}_{\text{G}}\) (Gold context) & 440M & 56.8 & **59.1** \\ \hline Musique - EX(SA)\({}^{b}\) & 102M & 49.8 & \\ Ours: Musique (Our retrieval) & 440M & 24.3 & 22.2 (28.2) \\ Ours: Musique\({}_{\text{G}}\) (Gold context) & 440M & 60.8 & 43.8 (62.4) \\ \hline \hline \end{tabular} \end{table} Table 9: ARC-DA (test accuracy) and Musique (development F1) comparisons. ARC-DA is science question answering and Musique involves multi-hop question answering. All _Base_ to _Base+RATD_ differences are statistically significant. Musique performance degradation in _Base+RATD_ is caused by _adding_ Musique _RATD_ in training; results for an ablative model trained with all datasets _except_ for Musique _RATD_ is shown in brackets. \({}^{a}\) Bhakthavatsalam et al. (2021): Training includes ARC-DA. \({}^{b}\) Trivedi et al. (2022a): EX(SA) uses specialised retrieval from each Musique sample’s gold and distractor paragraphs. spouse of..." form). An ablative experiment, wherein we trained a version of _Base+RATD_ without the Musique _RATD_ datasets, results in improved performance versus _Base_ and the original _Base+RATD_ on Musique (Table 9) without material impact to other evaluation dataset results. The Musique training split has 19,938 samples but only 2,057 unique labels, and questions with the same answer tend to be of similar form, such as the above "Who is the spouse of..." example. Therefore we consider the question of whether the poor performance of _Base+RATD_ here is a general weakness of our method or whether it is specific to the particular bias of Musique. We trained another _Base+RATD_ model, this time with the Musique _RATD_ training dataset substituted with a filtered variation that only contains samples with unique labels. Similar to the above Musique _RATD_ ablation, this version also improves against the original _Base+RATD_ (+3.0 F1 for Musique and +10.6 F1 for Musique) without impact to other results. Hence, assuming appropriate consideration of existing dataset bias when selecting _RATD_ training samples, we affirm the robustness of our method. ## 4 Conclusion We have argued that an ability to reason over imperfect and incomplete information is a critical skill with which question-answering models must be endowed. To facilitate such ability we create _RATD_ datasets that are designed to impart heuristic reasoning strategies with context of a form similar to that which retrieved contexts for downstream tasks will have. We show that training on _RATD_ datasets improves performance on all unseen evaluation datasets with retrieved contexts. This sometimes comes at a small cost in situations where questions come with gold contexts that are in a form that our model is already good at utilizing (SQAG, DROP, and IIRCG) although we suggest that in practice such gold contexts are the less common case. **(R1)** We also show that even with our large and diverse pre-training regime, questions involving numerical computation and those labelled unanswerable remain sensitive to the similarity of training samples. **(R2)** Our results demonstrate that generic retrieval without normalisation can outperform specialised methods (e.g. we are state of the art on fine-tuned IIRCR) and that our overall method can yield performance on par or better than that of much larger models without fine-tuning (e.g. SQAG, DROP). **(R3)** #### Broader Impact Statement In common with the well known issues with larger models, our system is capable of generating hallucinated, false and/or potentially offensive answers. We consider its usage at this stage of development to be most appropriate in research environments. Conversely, latency, physical compute size, cost and energy efficiency are important considerations where smaller models offer material benefits. As noted we suggest that a diversity of applications exist in the broad domain of reasoning systems and that due weight be assigned to all factors in determining the most appropriate approach for a given situation. #### Acknowledgments We are grateful to the authors of Pi et al. (2022) for providing us with their POET-SQL dataset and to Omar Khattab for providing us with Hover paragraph sequencing data.
2307.09881
$π$-solitons on a ring of waveguides
We study the existence and stability of $\pi$-solitons on a ring of periodically oscillating waveguides. The array is arranged into Su-Schrieffer-Heeger structure placed on a ring, with additional spacing between two ends of the array. Due to longitudinal oscillations of waveguides, this Floquet structure spends half of the longitudinal period in topological phase, while on the other half it is nontopological. Nevertheless, waveguide oscillations lead to the emergence of anomalous topological $\pi$-modes at both ends of the structure that strongly couple in our ring geometry, leading to the formation of previously unexplored in-phase and out-of-phase $\pi$-modes. We study topological solitons bifurcating from such linear $\pi$-modes and demonstrate how their properties and stability depend on the size of the ring and on spacing between two ends of the array.
Sergey K. Ivanov, Yaroslav V. Kartashov
2023-07-19T10:24:47Z
http://arxiv.org/abs/2307.09881v1
# \(\pi\)-solitons on a ring of waveguides ###### Abstract We study the existence and stability of \(\pi\)-solitons on a ring of periodically oscillating waveguides. The array is arranged into Su-Schrieffer-Heeger structure placed on a ring, with additional spacing between two ends of the array. Due to longitudinal oscillations of waveguides, this Floquet structure spends half of the longitudinal period in topological phase, while on the other half it is nontopological. Nevertheless, waveguide oscillations lead to the emergence of anomalous topological \(\pi\)-modes at both ends of the structure that strongly couple in our ring geometry, leading to the formation of previously unexplored in-phase and out-of-phase \(\pi\)-modes. We study topological solitons bifurcating from such linear \(\pi\)-modes and demonstrate how their properties and stability depend on the size of the ring and on spacing between two ends of the array. + Footnote †: Corresponding author The past decade has witnessed growing interest to photonic topological insulators representing extension of topological materials originally discovered in solid-state physics to optical realm offering rich prospects for implementation of topologically protected states in practical switching and routing devices, and lasers. Various proposals for observing photonic topological insulators have been made [1, 2]. Among them is the realization of photonic Floquet topological insulator in a waveguiding system [3]. The Floquet protocol, resulting from periodic modulation of waveguiding system in the direction of light propagation [4, 5, 6], has been used to demonstrate anomalous topological phases [7, 8], unpaired Dirac cones [9], tunneling inhibition [10] and coupling of edge states [11, 12], etc. Remarkably, such modulations can also create anomalous Floquet topological \(\pi\)-modes, whose localization strongly depends on parameters of modulation [13, 14, 15, 16, 17]. One of the simplest systems allowing observation of these localized periodically oscillating anomalous Floquet states is the Su-Schrieffer-Heeger (SSH) array with dynamically modulated intra- and inter-cell coupling strengths [18]. Experimentally topological \(\pi\)-modes were observed only in linear regime in plasmonic and acoustic systems [19, 20, 21, 22], as well as in waveguide arrays [23]. On the other hand, photonic systems may possess strong nonlinear response that in combination with nontrivial topology of the material leads to many intriguing phenomena [24, 25]. Among them is the formation of topological edge solitons, existing in diverse forms and geometries. Such states have been predicted and observed in helical waveguide arrays [26, 27, 29, 28, 30, 31, 32], in topological systems admitting linearly coupled co-propagating edge states [33, 35, 34], and in second-order insulators [36, 37], to name just a few systems. Straight SSH arrays and their generalizations also allow formation of topological edge solitons [38, 39, 40, 41]. At the same time, the impact of nonlinearity on anomalous Floquet systems remains practically unexplored. It is only very recently that \(\pi\)-mode solitons have been predicted in the simplest modulated line and square SSH waveguide arrays [42]. The goal of the present Letter is to introduce anomalous \(\pi\)-modes in new configuration, where modulated SSH-like waveguide array is placed on a ring, such that coupling of states at the opposite ends of the same structure becomes possible. We show that localized anomalous \(\pi\)-modes emerging at both ends of ring SSH array due to periodic oscillations of waveguides couple to form more complex in-phase and out-of-phase oscillating states with quasi-propagation constants in topological gap. In focusing medium such states give rise to in-phase and out-of-phase \(\pi\)-solitons that can be simultaneously stable. We study how the properties of such solitons depend on the size of the ring and on spacing between two ends of the SSH ring array. We consider paraxial propagation of light in a modulated waveguide array with focusing nonlinearity, which can be described by the dimensionless nonlinear Schrodinger equation for the field amplitude \(\psi\) \[i\frac{\partial\psi}{\partial z}=-\frac{1}{2}\nabla^{2}\psi-\mathcal{R}(x,y,z) \psi-|\psi|^{2}\psi. \tag{1}\] Here \(\nabla=(\partial/\partial x,\partial/\partial y)\), the \(x\) and \(y\) coordinates are normalized to the characteristic transverse scale \(r_{0}=10\)\(\mu\)m, the propagation distance \(z\) is normalized to the diffraction length \(kr_{0}^{2}\), \(k=2\pi n_{0}/\lambda\), \(\lambda=800\) nm, and \(n_{0}\approx 1.45\) is the unperturbed refractive index of the material. Dimensionless intensity \(|\psi|^{2}=1\) corresponds to peak intensity of \(I=n_{0}|\psi|^{2}/k^{2}r_{0}^{2}n_{2}\approx 8\cdot 10^{15}\,\mathrm{W/m^{2}}\), where the effective nonlinear refractive index is \(n_{2}\sim 1.4\cdot 10^{-20}\,\mathrm{m^{2}/W}\). The function \(\mathcal{R}(x,y,z)=p\sum_{m=0}^{N-1}[Q_{m1}(x,y,z)+Q_{m2}(x,y,z)]\) describes \(2N\) single-mode Gaussian waveguides placed on a ring of radius \(\rho=(Ns+Nd+\delta)/2\pi\), where \(Q_{mi}=e^{-[(x-\rho\cos\phi_{mi}(z))^{2}+(y-\rho\sin\phi_{mi}(z))^{2}]/\sigma^ {2}}\), \(i=1,2\), \(\phi_{m1}(z)=[(2m+1)s/2+md+\delta/2-r\sin(oz)]/\rho\) and \(\phi_{m2}(z)=[(2m+1)s/2+md+\delta/2-r\sin(oz)]/\rho\). The \(\pi\)-solitons are characterized by the \(\pi\)-solitons [38, 39, 40, 41]. The \(\pi\)-solitons are characterized by the \(\pi\)-solitons [38, 40, 41]. [\((2m+1)s/2+(m+1)d+\delta/2+r\sin(oz)\)]/\(\rho\), \(s\) and \(d\) are the inter-cell and intra-cell arc spacings, respectively, \(\delta\) is the additional spacing between two ends of the array, \(\omega=2\pi/Z\), \(Z\) is the longitudinal modulation period, and \(r\) is the amplitude of waveguide oscillations. Each guide has a width of \(\sigma=0.5\). The depth of the waveguides is given by \(p=k^{2}r_{0}^{2}\delta n/n_{0}\), where \(\delta n\) is the refractive index contrast. Here we set \(p=5\) which corresponds to \(\delta n\approx 5.6\cdot 10^{-4}\). Schematic illustrations of static (\(r=0\)) and modulated (\(r\neq 0\)) ring SSH arrays are presented in Fig. 1(a) and 2(a), respectively. By neglecting the nonlinear term in Eq. (1) and introducing the ansatz \(\psi=u(x,y,z)\exp(ibz)\), where \(b\) represents a quasi-propagation constant and \(u(x,y,z)=u(x,y,z+Z)\) is a complex field of \(z\)-periodic Floquet mode, we obtain the problem \(bu=(\partial^{2}u/\partial x^{2}+\partial^{2}u/\partial y^{2})/2+Ru+i\partial u /\partial z\) that can be solved numerically using propagation and projection method [9, 30]. First, it is instructive to consider spectrum of static ring array [Fig. 1(a)] with straight waveguides (in this case \(r=0\) and \(b\) is a standard propagation constant), presented in Figs. 1(b) and 1(c) for \(N=10\) dimers. For this array, when \(s\) becomes smaller than \(d\) [see Fig. 1(e)] the inter-cell coupling exceeds the intra-cell one and array enters topological phase, when two topological edge states emerge between two bulk bands. These states, depicted by red and blue lines in Fig. 1(b), represent in-phase and out-of-phase combinations of localized modes emerging at the opposite ends of the structure [see modes 1 and 2 in Figs. 1(f) and (g)]. Their localization becomes more pronounced when \(s\) decreases, while the intra-cell spacing \(d=2.9\) and \(\delta=3\) remain fixed. The width of the topological gap, where edge states appear, expands as \(s\) decreases. When \(s>d\) [see Fig. 1(d)], the array is in trivial phase, and no edge states can form within trivial gap in the spectrum. In Fig. 1(c) we show how linear spectrum depends on the additional spacing \(\delta\) for \(s=1.9\) and \(d=2.9\). Notice that topological modes appear only for \(\delta>0\), and the difference of their propagation constants decreases with increase of \(\delta\), since coupling between two ends of array becomes weaker. Next we consider periodically modulated ring array illustrated in Fig. 2(a). This array is in the "instantaneous" topological phase on half of the \(Z\) period, but on the other half it is in the trivial phase, where edge states would not exist in static structure. Nevertheless, for a certain range of amplitudes of waveguide oscillations \(r\), the modulated array supports topological \(Z\)-periodic anomalous \(\pi\)-modes emerging at two "coupled" ends of the array [see red and blue lines in Floquet spectra of such structure in Figs. 2(b) and (c)]. It can be seen that \(\pi\)-modes (red and blue lines) emanate at \(r\approx 0.2\) from the point of overlap of the bulk band (black lines) with its own Floquet replica arising due Figure 1: (a) Example of the SSH ring array with straight waveguides. (b) Spectrum of modes of the array with \(N=10\) dimers versus inter-cell spacing \(s\) for fixed \(d=2.9\) and \(\delta=3\). Vertical dashed line indicates the transition from trivial to topological phase. (c) Eigenvalues of modes supported by the topological array as functions of spacing \(\delta\) for \(s=1.9\), \(d=2.9\). Array profiles for \(s=2.9\), \(d=1.9\) (d) and \(s=1.9\), \(d=2.9\) (e). Examples of the modes corresponding to the red (f) and blue (g) dots in panels (b), (c). Here and below \(p=5\) and \(\sigma=0.5\). Figure 2: (a) Example of the modulated SSH ring array. Floquet spectra of the modulated array with \(\delta=3\) (b) and \(\delta=1\) (c) for various \(r\) values. Examples of \(\pi\)-modes corresponding to the red dot in panel (c) at different distances \(z\) within one longitudinal period for the array with \(N=10\) (\(d\))-(f) and for the array with \(N=4\) at \(z=0\) (g). Here and below \(s=d=2.4\) and \(Z=17\). to periodicity of the spectrum in the vertical direction. Their localization increases with \(r\). Importantly, \(\pi\)-modes corresponding to red and blue lines are Floquet generalizations of in-phase and out-of-phase edge states in ring configuration. Their quasi-propagation constants merge when spacing \(\delta\) increases and two ends of array stop interacting, while when \(\delta\to 0\) they separate more and more and tend to merge with bulk bands. Anomalous \(\pi\)-modes undergo significant shape variations during propagation, while remaining exactly \(Z\)-periodic. This behavior is evident in Figs. 2(d)-(f) for \(N=10\), where we display \(\mathrm{Re}[\psi]\) distributions in such modes at selected distances for \(r=0.5\). Although the intensity distributions are identical at distances \(z=0,Z/2,Z\), the profiles at \(z=Z/4\) and \(z=3Z/4\) differ, highlighting the fact that the full period corresponds to \(Z\). We also show the example of \(\pi\)-mode at \(z=0\) for smaller ring with only \(N=4\) dimers in Fig. 2(g). We now turn to \(\pi\)-solitons, which bifurcate from linear anomalous \(\pi\)-modes in this ring configuration. Fig. 3 illustrates the dependence of power \(P=\iint|\psi|^{2}dxdy\) of \(\pi\)-soliton, as well as of its averaged peak amplitude \(\overline{a}=Z^{-1}\int a\,d\,z\), where \(a=\max|\psi|\), on quasi-propagation constant \(b\) for \(r=0.5\). Here, red and blue lines correspond to in-phase and out-of-phase \(\pi\)-solitons that can co-exist in topological gap. The power of such states \(P\) increases almost linearly away from bifurcation point from corresponding linear modes (shown by vertical gray lines). Nonlinearity allows to adjust the localization of \(\pi\)-solitons by changing location of \(b\) within the gap. At sufficiently high powers, when \(b\) approaches the bulk band (shaded regions), the solitons interact with bulk modes and develop long tails in the entire ring. In Fig. 3, solid branches correspond to stable \(\pi\)-solitons, while dashed branches correspond to unstable ones. Stability was analyzed by modeling the propagation of perturbed solitons \(\psi|_{z=0}=u\left(x,y,z=0\right)\left(1+\delta_{\mathrm{re}}+i\delta_{\mathrm{ im}}\right)\), where \(\delta_{\mathrm{re}}+i\delta_{\mathrm{im}}\) is a small complex noise with amplitude uniformly distributed within the segment \([-0.05,+0.05]\). At \(N=10\) and \(\delta=3\) [Fig. 3(a)], solitons become unstable when their power exceeds \(P\approx 0.03\) for in-phase states (red line) and \(P\approx 0.25\) for out-of-phase states (blue line). Interestingly, for smaller spacing \(\delta=1\) [Fig. 3(b)] stability regions expand, so that in-phase solitons become stable up to \(P\approx 0.23\), while out-of-phase ones up to \(P\approx 0.37\). An example of stable propagation of the out-of-phase \(\pi\)-soliton with peak intensity \(\sim 5\cdot 10^{14}\,\mathrm{W/m^{2}}\) is shown in Fig. 4, while Fig. 5 shows typical instability development for in-phase \(\pi\)-soliton. In the former case, the soliton shows persistent oscillations over hundreds of periods periodically recovering its shape despite considerable transformations that it experiences inside each period \(Z\) (see field modulus distributions in Fig. 4). In contrast, the unstable \(\pi\)-soliton radiates and eventually decays, expanding over the entire ring (see Fig. 5). By decreasing the size of the array to \(N=4\) dimers, one can dramatically expand the domains of stability for \(\pi\)-solitons, as depicted in Fig. 3(c). This illustrates a direct influence of the size of the ring on stability of such states. The propagation dynamics of stable soliton with peak intensity \(\sim 1.3\cdot 10^{15}\,\mathrm{W/m^{2}}\) for this small-size ring array is shown in Fig. 6. We also investigated the influence of the oscillation amplitude \(r\) on the stability of \(\pi\)-solitons. Our observations indicate that the stability intervals progressively expand as Figure 4: Stable propagation of the out-of-phase \(\pi\)-soliton. Top row shows peak amplitude \(a\) vs normalized distance \(z/Z\), while middle and bottom rows show field modulus distributions at different \(z\) corresponding to the dots in the top panel. Soliton corresponds to the blue dot in Fig. 3(b). Figure 3: Averaged peak amplitude \(\overline{a}\) and power \(P\) versus quasi-propagation constant \(b\) for \(\pi\)-soliton families at \(r=0.5\) in the arrays with \(N=10\), \(\delta=3\) (a) and \(\delta=1\) (b) and in the array with \(N=4\), \(\delta=1\) (c). Dashed lines represent unstable solitons, solid lines represent stable ones, and the shaded regions show bulk bands. the value of \(r\) increases. However, it is important to note that this trend holds true only up to an upper limit of \(\sim 0.7\) for \(r\). Beyond this threshold, the introduction of radiation becomes a significant factor, affecting the stability of the solitons. Summarizing, we have shown that ring SSH arrays composed from oscillating waveguides can support two different types of stable \(\pi\)-solitons with different phase structure, emerging due to coupling of two ends of the same array in ring geometry. Stability of such states strongly depends on the size of ring array and on spacing between two ends of the structure. These results show that anomalous Floquet systems may be used as a platform for observation of many intriguing nonlinear phenomena. ## Funding ICFO was supported by Agencia Estatal de Investigacion (CEX2019-000910-S, PGC2018-097035-B-I00); Departament de Recerca i Universitats de la Generalitat de Catalunya (2021 SGR 01448); CERCA; Fundacio Cellex; and Fundacio MirPuig. Y.V.K.'s academic research was supported by Russian Science Foundation (grant 21-12-00096) and research project FFUU-2021-0003 of the Institute of Spectroscopy of the Russian Academy of Sciences. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data availability All data underlying the results presented in this paper may be obtained from the authors upon reasonable request.
2307.07674
An Empirical Study of the Effectiveness of Using a Replay Buffer on Mode Discovery in GFlowNets
Reinforcement Learning (RL) algorithms aim to learn an optimal policy by iteratively sampling actions to learn how to maximize the total expected return, $R(x)$. GFlowNets are a special class of algorithms designed to generate diverse candidates, $x$, from a discrete set, by learning a policy that approximates the proportional sampling of $R(x)$. GFlowNets exhibit improved mode discovery compared to conventional RL algorithms, which is very useful for applications such as drug discovery and combinatorial search. However, since GFlowNets are a relatively recent class of algorithms, many techniques which are useful in RL have not yet been associated with them. In this paper, we study the utilization of a replay buffer for GFlowNets. We explore empirically various replay buffer sampling techniques and assess the impact on the speed of mode discovery and the quality of the modes discovered. Our experimental results in the Hypergrid toy domain and a molecule synthesis environment demonstrate significant improvements in mode discovery when training with a replay buffer, compared to training only with trajectories generated on-policy.
Nikhil Vemgal, Elaine Lau, Doina Precup
2023-07-15T01:17:14Z
http://arxiv.org/abs/2307.07674v2
# An Empirical Study of the Effectiveness of Using a Replay Buffer on Mode Discovery in GFlowNets ###### Abstract Reinforcement Learning (RL) algorithms aim to learn an optimal policy by iteratively sampling actions to learn how to maximize the total expected return, \(R(x)\). GFlowNets are a special class of algorithms designed to generate diverse candidates, \(x\), from a discrete set, by learning a policy that approximates the proportional sampling of \(R(x)\). GFlowNets exhibit improved mode discovery compared to conventional RL algorithms, which is very useful for applications such as drug discovery and combinatorial search. However, since GFlowNets are a relatively recent class of algorithms, many techniques which are useful in RL have not yet been associated with them. In this paper, we study the utilization of a replay buffer for GFlowNets. We explore empirically various replay buffer sampling techniques and assess the impact on the speed of mode discovery and the quality of the modes discovered. Our experimental results in the Hypergrid toy domain and a molecule synthesis environment demonstrate significant improvements in mode discovery when training with a replay buffer, compared to training only with trajectories generated on-policy. Machine Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, Reinforcement Learning, ## 2 Preliminaries Let \(G=(\mathcal{S},\mathcal{A})\) be a directed acyclic graph (DAG) (Bengio et al., 2021; 2), where \(\mathcal{S}\) is the set of states (vertices) and \(\mathcal{A}\) is the set of actions (edges). In GFlowNets, the learner constructs \(G\) using a series of actions (edges) starting from an initial state, \(s_{0}\in\mathcal{S}\) until a terminal (sink) node, \(s_{n}\in\mathcal{S}\) is reached. A _complete trajectory_(Malkin et al., 2022), \(\tau\), is a sequence of transitions from \(s_{0}\) to a terminal state: \(\tau=(s_{0}\to s_{1}\rightarrow\cdots\to s_{n})\), where \((s_{i}\to s_{i+1})\in\mathcal{A}\,,\forall i\). A _trajectory flow_\(F:\mathcal{T}\mapsto\mathbb{R}^{+}\) is any non-negative function defined on the set of complete trajectories \(\mathcal{T}\) to \(\mathbb{R}^{+}\). The trajectory flow can be interpreted as the total amount of unnormalized probability flowing through a state. More formally, for any state \(s\), the state flow is defined as \(F(s)=\sum_{\tau\in\mathcal{T}:s\in\tau}F(\tau)\), and for any edge (\(s\to s^{\prime}\)), the edge flow is defined as \[F(s\to s^{\prime})=\sum_{\tau\in\mathcal{T}:(s\to s^{\prime}\in\tau)}F(\tau) \tag{1}\] The _terminal flow_ is defined as the flow associated with the final transition \((s_{i}\to s_{n})\), \(F(s_{i}\to s_{n})\). The intention is to make the total flow at state \(s_{n}\) approximately equal to the reward \(R(s_{n})\). The _forward transition probability_, \(P_{F}\) for each step of the trajectory is defined as: \[P_{F}=\frac{F(s\to s^{\prime})}{F(s)} \tag{2}\] and the probability of visiting a terminal state is: \[P_{F}(s)=\frac{\sum_{\tau\in\mathcal{T}:s\in\tau}F(\tau)}{Z} \tag{3}\] where \(Z\) is the total flow, \(Z=\sum_{\tau\in\mathcal{T}}F(\tau)\). **Flow Matching Objective**(Bengio et al., 2021): The _flow matching criterion_ states that the sum of inflow from all the parents of a node should be equal to the total outflow to all the children of that node: \[\mathcal{L}_{FM}(s;\theta)=\left(\log\frac{\sum_{s^{\prime}\in\text{Parent}(s )}F_{\theta}(s^{\prime}\to s)}{\sum_{s^{\prime\prime}\in\text{Child}(s)}F_{ \theta}(s\to s^{\prime\prime})}\right)^{2}. \tag{4}\] (Bengio et al., 2021) showed that these constraints can be converted into a temporal-difference (TD)-like objective (Sutton, 1988) which is then optimized with respect to the parameters of a function approximator, like a neural network. GFlowNets approximate the _edge flow_\(F_{\theta}:\mathcal{A}\rightarrow\mathbb{R}^{+}\) with learnable parameters \(\theta\), such that the terminal flow is roughly equal to the reward function \(R(x)\). Trajectories for training \(\theta\) are sampled from an exploratory policy \(\tilde{\pi}\) with full support, learned by minimizing the flow-matching objective (4). **Trajectory Balance Objective**(Malkin et al., 2022): The flow-matching objective can suffer from inefficient credit assignment. To overcome this, an alternative was proposed by Malkin et al., which leads to faster convergence. The trajectory balance objective is defined as: \[\mathcal{L}_{TB}(\tau;\theta)=\left(\log\frac{Z_{\theta}\prod_{s\to s ^{\prime}\in\tau}P_{F_{\theta}}(s^{\prime}|s)}{R(x)}\right)^{2} \tag{5}\] ## 3 Experience Replay Experience replay has emerged as a very useful RL technique which can improve learning efficiency and stability (Lin, 1992). The traditional approach involves storing past experiences encountered by the agent in a buffer and replaying them, by randomly sampling batches of experiences during the training process. The randomization allows the agent to explore diverse transitions, leading to better exploration and improved learning. Furthermore, if experience replay is done at the level of state-action transitions, rather than full trajectories, it breaks the temporal correlations between transitions, which can have a stabilizing effect when the RL agent is using non-linear function approximation. Mnih et al. demonstrated the effectiveness of experience replay in Deep Q-Networks (DQNs), achieving state-of-the-art performance on a wide range of Atari 2600 games. Schaul et al. proposed a technique that enhances the replay buffer, by assigning priorities to the experiences stored therein. The idea is to prioritize and sample experiences based on the potential that they will induce learning. Prioritized Experience Replay (PER) assigns higher priority to experiences that have a larger TD-error magnitude, indicating that more would be learned from replaying this experience. This approach helps the agent learn from the most informative and challenging experiences. ## 4 Experiments Our goal is to investigate the impact of different experience replay techniques on the training process of GFlowNets. Specifically, we compare three approaches: (i) training only with samples from the current online policy; (ii) training with an experience replay buffer that contains both samples from the current policy and from past policies, and where random sampling is used to select batches; and (iii) R-PRS (Reward Prioritized Replay Sampling), a technique inspired by prioritized experience replay. In R-PRS, we store and sample trajectories with the highest reward in the replay buffer. During the sampling process, the learner prioritizes the buffer according to this reward (instead of the TD-error like in PER). The underlying hypothesis is that by prioritizing and learning from the most promising trajectories, the agent can effectively explore the state space and improve learning performance. This idea is very similar in spirit to the initial work on replay by Lin (1992). ### Hypergrid We first analyze the effect of using experience replay with GFlowNets in Hypergrid, a toy domain presented by Bengio et al., which allows easy control of the number of interesting modes of a distribution and of the ease with which these modes can be discovered. The environment is an \(n\)-dimensional hypercube grid of side length \(H\), where the states are the cells of the hypercube. The agent always starts at coordinate \(x=(0,0,\dots)\), and the allowed actions \(a_{i}\) increase the coordinate \(i\), up to \(H\), upon which the episode terminates. A _stop_ action can also terminate the episode. There are many sequences of actions that lead to the same goal state, making this MDP a DAG. We use the codebase and architecture developed by (Bengio et al., 2021) as a foundation. For the GFlowNet model, we use an MLP as the reward approximator, with two hidden layers, each with \(256\) hidden units. We train all the models with the _Flow Matching_ objective. We set the learning rate to \(0.001\) and use the Adam optimizer (Kingma and Ba, 2014). All the experiments are run on 5 independent seeds and the mean and standard error are reported in the plots. The reward for terminating the episode at coordinate \(x\) is given by \(R(x)>0\). We experimented with the reward function \(R(x)=R_{0}+R_{1}\prod_{i}\mathbb{I}(0.25<|x_{i}/H-0.5|)+R_{2}\prod_{i}\mathbb{ I}(0.3<|x_{i}/H-0.5|<0.4)\) with \(0<R_{0}\ll R_{1}<R_{2}\). We set \(R_{1}=0.5\) and \(R_{2}=2\). We vary the value of \(R_{0}\) by setting it closer to 0, to make the problem artificially harder, by creating a region of state space which is less desirable to explore. The reward distribution for a 2D Hypergrid with \(H=8\) is shown in Figure 1. We present the results of an experiment with \(R_{0}=10^{-3}\), one of the more difficult settings. For each batch, the agent draws an equal number of trajectories from both the online policy and the replay buffer (\(16\) trajectories each). Figure 2 shows the evolution of the number of modes discovered as a function of training samples. R-PRS discovers all the modes relatively quickly compared to the random sampling and no replay buffer settings. Figure 9 shows that the R-PRS technique exhibits faster convergence towards the true reward distribution, compared to the other methods. Similar results in a relatively easier setting, \(R_{0}=10^{-2}\), are shown in Appendix A.1. This suggests that GFlowNets are more capable of exploration when the learner is repeatedly exposed to more promising samples (samples with high rewards). To further evaluate the impact of the experience replay sample size, we plot mode discovery as a function of the number of trajectories sampled from the replay buffer for use with R-PRS. The agent samples \(16\) trajectories from the online policy. We vary the number of trajectories from older policies from \(4\) to \(16\). Figure 3 shows that increasing the number of older trajectories sampled from the replay buffer helps the learner to discover modes more quickly. We can observe similar kinds of results in a relatively easier setting, \(R_{0}=10^{-2}\) as shown in Appendix A.1. To analyze whether the improvement in performance is due to the increased sample size from the experience replay buffer, we plot the modes discovered as a function of increasing batch size. When using no replay buffer, we varied the batch size from \(16\) to \(32\) and included R-PRS for comparison. In Figure 4, we observe that solely increasing the batch size (number of samples) negatively affects performance, thereby confirming that drawing high-reward samples from the replay buffer yields better results, compared to simply drawing additional samples from the online policy. Similar results can be observed in a relatively simpler setting, with \(R_{0}=10^{-2}\), as shown in Appendix A.1. As shown in Appendix A.2, similar results can be observed when mode discovery is plotted as a function of both the size of the experience replay and the sample size of the replay buffer. ### Molecule synthesis We carry out further analysis in a large-scale, a molecular synthesis environment, where the objective is to generate small molecules that have low binding affinity to a predefined target. In this environment, the reward function is the binding affinity of a candidate molecule to the target protein. The objective is to generate a diverse set of molecules that exhibit high reward. The environment has approximately \(10^{16}\) states, and the number of available actions ranges from 100 to 2000, depending on the agent's current state. Inspired by the work of Bengio et al. and following the framework proposed by Jin et al., we adopt a method for molecule generation that utilizes a predefined vocabulary of building blocks. The process involves constructing graphs through iterative addition. Each action corresponds to selecting a specific block to attach and determining the attachment Figure 1: Hypergrid domain - Reward distribution for a 2-dimensional Hypergrid with \(H=8\). point. This construction process gives rise to a directed acyclic graph (DAG), as multiple action sequences can lead to the same resulting graph. The details about the reward signal in this environment are shown in Appendix A.3. We tested the impact of the replay buffer on this large-scale environment by experimenting with all three training techniques introduced earlier. Figure 5 shows the number of modes discovered by each of these techniques. We identify all the candidates with rewards of more than \(0.9\) as modes. It is clear that R-PRS performs significantly better in terms of mode discovery. The average reward during the training is also better for R-PRS, as shown in Appendix A.3. Concurrent work by Shen et al. proposes prioritized replay training with high-reward samples as well. The authors claim that the performance of GFlowNets improved with the inclusion of experience replay. Figure 5 shows an interesting insight: the performance of the model without the replay buffer and the performance of the model with random sampling from the replay buffer is almost identical. This further ascertains that increasing the access to promising trajectories during training is what improves the performance, not just the use of the replay buffer. ## 5 Discussion In this paper, we conducted an empirical study of the effect of using an experience replay buffer containing past experience in GFlowNets training. Our empirical results show that using a prioritized replay which encourages the use of high-reward trajectories provides a performance boost in terms of mode discoverability as well as training speed. This, in turn, led to an increase in the diversity of candidate solutions without compromising on training convergence. We have also shown that increasing the size of the experience replay and of the replay buffer sample during training has a positive impact on the performance. While our experimentation was limited to a couple of variants of experience replay, additional variations may further improve learning performance. Investigating other methods for improving learning speed and stability from the RL literature may also bring GFlowNet performance improvements. ## Acknowledgements We gratefully acknowledge the generous funding support received for this project. We would like to express our sincere gratitude to the Fonds Recherche Quebec for their FACS-Acquity grant and to the National Research Council of Canada. Their financial contributions have played a vital role in making this research possible. Figure 4: Hypergrid domain - States visited vs. percentage of modes discovered during training in a 4-dimensional hypergrid (max = 16 modes) with \(H=8\) for different batch sizes, with \(R_{0}=10^{-3}\) (mean and standard error over 5 runs). Figure 5: Molecule synthesis environment - Number of iterations vs. the number of modes discovered with a reward at least \(r>0.9\) during training for R-PRS, Random sampling from replay buffer, and no replay buffer (mean and standard error over 3 runs). Figure 3: Hypergrid domain - States visited vs. percentage of modes discovered during training in a 4-dimensional hypergrid (max = 16 modes) with \(H=8\) for different batch sizes, with \(R_{0}=10^{-3}\) (mean and standard error over 5 runs). Figure 2: Hypergrid domain - States visited vs. percentage of modes discovered during training in a 4-dimensional hypergrid (max = 16 modes) with \(H=8\) for all three training regimes, with \(R_{0}=10^{-3}\) (mean and standard error over 5 runs).
2306.10021
Promises and Perils of Mining Software Package Ecosystem Data
The use of third-party packages is becoming increasingly popular and has led to the emergence of large software package ecosystems with a maze of inter-dependencies. Since the reliance on these ecosystems enables developers to reduce development effort and increase productivity, it has attracted the interest of researchers: understanding the infrastructure and dynamics of package ecosystems has given rise to approaches for better code reuse, automated updates, and the avoidance of vulnerabilities, to name a few examples. But the reality of these ecosystems also poses challenges to software engineering researchers, such as: How do we obtain the complete network of dependencies along with the corresponding versioning information? What are the boundaries of these package ecosystems? How do we consistently detect dependencies that are declared but not used? How do we consistently identify developers within a package ecosystem? How much of the ecosystem do we need to understand to analyse a single component? How well do our approaches generalise across different programming languages and package ecosystems? In this chapter, we review promises and perils of mining the rich data related to software package ecosystems available to software engineering researchers.
Raula Gaikovina Kula, Katsuro Inoue, Christoph Treude
2023-05-29T03:09:48Z
http://arxiv.org/abs/2306.10021v1
# Promises and Perils of Mining ###### Abstract The use of third-party packages is becoming increasingly popular and has led to the emergence of large software package ecosystems with a maze of inter-dependencies. Since the reliance on these ecosystems enables developers to reduce development effort and increase productivity, it has attracted the interest of researchers: understanding the infrastructure and dynamics of package ecosystems has given rise to approaches for better code reuse, automated updates, and the avoidance of vulnerabilities, to name a few examples. But the reality of these ecosystems also poses challenges to software engineering researchers, such as: How do we obtain the complete network of dependencies along with the corresponding versioning information? What are the boundaries of these package ecosystems? How do we consistently detect dependencies that are declared but not used? How do we consistently identify developers within a package ecosystem? How much of the ecosystem do we need to understand to analyse a single component? How well do our approaches generalise across different programming languages and package ecosystems? In this chapter, we review promises and perils of mining the rich data related to software package ecosystems available to software engineering researchers. ## 1 Introduction Third-party libraries are a great way for developers to incorporate code without having to write their own for every functionality required. By using these libraries, developers can save time and energy while still getting the functions they need. Using third-party libraries is becoming increasingly popular and has led to the emergence of large software package ecosystems such as npm. While these ecosystems offer many benefits, they also come with risks, such as software vulnerability attacks [5]. Large software package ecosystems are a treasure trove for researchers who can investigate a wide range of questions. For example, by studying activity in large ecosystems, researchers can identify which libraries are the most popular and learn what characteristics make them successful [16; 8]. Additionally, research on large ecosystems can help developers understand how to protect their code from malicious actors who may attempt to exploit vulnerabilities or insert malware into popular libraries. Studying large software package ecosystems can help us better understand the dynamics of open source development in general. Open source development is a complex process that involves many different stakeholders working together (or sometimes competing) to create valuable code that anyone can use or improve upon. By understanding how these interactions play out in different types of ecosystem structures - including those with many small projects versus few very large ones - we can develop insights that might be applicable more broadly across other types of collaborative systems. In this chapter, we identify and discuss promises and perils during the mining process, ranging from planning what information to mine from the ecosystem to analysing and visualising the mined data. Therefore, the chapter is broken down into these logical processes of mining ecosystem data: 1) Planning what Information to Mine, 2) Defining Components and their Dependencies, 3) Defining Boundaries and Completeness, and 4) Analysing and Visualising the Data. This chapter is intended for researchers and practitioners who are interested in exploring and exploiting software package ecosystem information from a diverse range of sources that are publicly available. We also highlight the pitfalls to consider during the mining process, particularly when these pitfalls could lead to a misinterpretation of the analysis and results. The chapter is written in a manner that encourages newcomers who have little or no experience or who are interested in utilising ecosystem data across different disciplines outside of software engineering. Our goal is to get new researchers quickly accustomed to gathering ecosystem information for their research. ### A Component-based Software Ecosystem Defined as a component-based software ecosystem, we suggest using the term'software package ecosystem' as a suitable term for the symbiotic relationships among third-party library components (as software projects or repositories), as these libraries and their dependent clients coexist on the same technological platform, therefore sharing the same environment and other internal and external factors (e.g., security threats, sharing contributions, etc.). Please refer to the Introduction chapter for an in-depth definition of the different types of software ecosystems. We present our interpretation of the software package ecosystem in Kula et al. [17], where we formally define a package ecosystem using a Software Universe Graph (SUG). This is modelled as a structured abstraction of the evolution of software systems and their library dependencies over time. #### 2.2.1 Component-based Representation as a Software Universe Graph First introduced by Kula et al. [17], the _Software Universe Graph_ (SUG) is a structural abstraction of the software ecosystem of third-party libraries. Figure 1 provides an illustration of the different relationships within the graph. Let \(G=(N,E)\) represent a graph \(G\). \(N\) is a set of nodes, each node representing a software unit. We define a software unit as a version instance of any software program. The authors then present the _use_ and _update_ relationships that exist in the ecosystem. Hence, the edges \(E\) are composed of \(E_{use}\) and \(E_{update}\). \(E_{use}\) is a set of _use-relations_ and \(E_{update}\) is a set of _update-relations_. **Definition 1.1**: An edge \(u\to v\in E_{use}\) means that \(u\) uses \(v\). The defined functions of \(E_{use}\) are: \[Use(u)\equiv\{v|u\to v\} \tag{1.1}\] \[UsedBy(u)\equiv\{v|v\to u\} \tag{1.2}\] Use-relations can be extracted from either the source code or configuration files. As shown in Figure 1, node \(a1\) uses node \(x1\). In addition, node \(x1\) is used by nodes \(a1\), \(q1\), and \(q2\). Parallel edges for node pairs are not allowed. Figure 1: Conceptual example of the Software Universe Graph, depicting the use and update relationships between different software units. **Definition 1.2**: We represent an update relation from node \(a\) to \(b\) using \(a\Rightarrow b\), which means that the newer update \(b\) was released from node \(a\) and is defined as: \[a\Rightarrow b\in E_{update} \tag{1.3}\] Update relations refer to when a successive release of a software unit is made available. Figure 1.1 shows that node \(q1\) is first updated to node \(q2\). Later, node \(q2\) is updated to the latest node \(q3\). Hence, \(q1\Rightarrow q2\Rightarrow q3\). Note that an update should not be confused with forking. We distinguish a fork as a separate software unit. Each node in the SUG should be denoted by three attributes: <name,release,time>. For a node \(u\), we define: * **u.name** Name is the string representation identifier of a software unit. We introduce the name axiom: For nodes \(u\) and \(v\), if \(u\Rightarrow v\), then \(u.name=v.name\) holds. * **u.release**. Release refers to the specific assigned change reference for a software unit. For nodes \(u\) and \(v\), if \(u\Rightarrow v\) then \(v\) is the immediate successor of \(u\). Note that the versioning pattern may vary from project to project. * **u.time**. Time refers to the time stamp at which node \(u\) was released. For nodes \(u\) and \(v\) of \(u\Rightarrow v\), \(u.time<v.time\). **Definition 1.3**: The SUG has temporal properties. This describes the simultaneity or ordering in reference to time. Let SUG \(G=(N,E)\) be at time \(t\). At time \(t^{\prime}>t\), we observe an extension of \(G\), such that: \[G^{\prime}=(N\cup\Delta N,E\cup\Delta E) \tag{1.4}\] where \(\Delta E\cap(N\times N)=\emptyset\) Figure 1.2 illustrates the temporal properties of the SUG. Here, it is observed that \(G^{\prime}\) is composed of \(G\) augmented with newly added node \(a3\) and its corresponding Figure 1.2: Temporal property of the SUG \(a3\to x2\) and \(a2\Rightarrow a3\) relations. A SUG grows monotonically over time with only additions. Here, we consider that modification or deletion changes on the SUG do not occur. **Definition 4**: A timed SUG specifies the state of the SUG at any point in time. So for an SUG \(G=(N,E)\), we represent a timed SUG \(G_{t}\) at time \(t\) as a sub-graph of \(G\). Formally, \[G_{t}\equiv(N_{t},E_{t}) \tag{5}\] where \(N_{t}=\{u|u\in N,u.time\leq t\}\) and \(E_{t}=\{e|e\in E\wedge e\in N_{t}\}\) ### Data Sources Researchers can use various datasets to model the ecosystem using the SUG model of usage and update relationships. The most obvious data source that has revolutionised data mining in the software engineering domain is the GitHub platform. Established in 2008, and then purchased by Microsoft in 2020, GitHub is home to various popular Open Source Software. GitHub is built on the git version control system and is useful for storing all changes made to a repository. In the case of the SUG, a GitHub repository can represent one software unit, whose depend relations can be extracted via a configuration file (such as the package.json file for JavaScript projects). The repository should also contain the release information that holds the update relations. Due to its large size, researchers and the GitHub team have made available datasets for researchers to mine, for example through the GitHub API/Graph QL.1 This is the backend Application Programming Interface (API) that can be used to query large amounts of data on GitHub. Most researchers use the API to download and mine information from the GitHub platform. It is important to note that while GitHub introduced a new feature of Dependency Graphs to map the depend relationship,2 most older projects do not have this feature. In this case, the researcher would need to manually extract and query the configuration files for dependency information. Footnote 1: [https://docs.github.com/en/graphql](https://docs.github.com/en/graphql) Footnote 2: [https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph](https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph) We refer to the first chapter for additional information on data sources for mining software ecosystems. ### Promises and Perils Using the SUG model of depend and use relations and the available datasets, we can now present our promises and perils of mining ecosystem information. #### Planning What Information to Mine **Promise 1.**_Researchers can access and link heterogeneous data related to software package ecosystems, e.g., package registries and bug trackers._ When planning what information to mine from the ecosystem, researchers do not need to limit themselves to the usage and update relationship information. Platforms that host software repositories include other software management systems such as bug trackers. For example, GitHub allows researchers to manage GitHub Pull Requests, Issues, and Discussions not only for one project, but for multiple projects. GitHub provides three management systems that are related to a software repository: * The GitHub Discussions forum is a collaborative communication forum for the community around an open source or internal project. Community members can ask and answer questions, share updates, have open-ended conversations, and follow along on decisions affecting the community's way of working. Footnote 3: [https://docs.github.com/en/discussions](https://docs.github.com/en/discussions) * Pull Requests allow other developers from an ecosystem to make a contribution to a software repository. Pull requests also allow maintainers to discuss and review potential changes with collaborators and add follow-up commits before changes are merged into the software. Footnote 4: [https://docs.github.com/en/pull-requests](https://docs.github.com/en/pull-requests) * Issues are used to track ideas, feedback, tasks, or bugs for work on GitHub. Footnote 5: [https://docs.github.com/en/issues](https://docs.github.com/en/issues) These three systems are examples of how developers contribute to both their own and other projects. Hence, to incorporate this information, we can extend the SUG model, creating a model that includes a contribution relationship [32]. **Definition 1.5**: A Dependency-Contribution graph incorporates contributions by developers whose libraries are involved in dependency relationships. In this work [32], the authors explore the congruence between dependency updates and developer contributions, based on the original concept of social-technical congruence [4] where developers contribution patterns are congruent with their coordination needs. Hence, the goal is to identify contributions that are congruent to dependency updates. As shown in Figure 1.3 the authors extend from the typical SUG graph model where \(lib_{i}\) depends (use) on \(lib_{k}\) and \(lib_{j}\), while \(lib_{j}\) also depends on \(lib_{k}\), to the example shown in Figure 1.4. Different to the SUG, the graph captures developers and their contributions (i.e., the square as \(dev_{x}\) and \(dev_{y}\) represent two different developers making a contribution). Here contributions are defined as \(c\) (Pull Request or Issue) that were submitted to both a library and the client that depends on that library. Hence, the graph can show contributions that are congruent to dependency changes for a software unit. This is just one example of the type of research that is enabled by access to heterogeneous data related to software package ecosystems. **Peril 1**.: _Developers might use different identifiers when contributing to different parts of a software package ecosystem, e.g., when contributing to different libraries._ When modelling using such graphs, there is a threat that contributors may use multiple identifiers (i.e., \(c_{x}\) and \(c_{y}\) are the same contributor). This is a well-known research problem, and there has been work to merge these accounts, such as [34]. GitHub has introduced mechanisms such as two-factor authentication6 to counteract the issue of multiple identifiers. This is since developers might be less likely to switch accounts if it requires cumbersome authentication. Footnote 6: [https://docs.github.com/en/authentication/securing-your-account-with-two-factor-authentication-2fa/configuring-two-factor-authentication](https://docs.github.com/en/authentication/securing-your-account-with-two-factor-authentication-2fa/configuring-two-factor-authentication) Figure 1.4: Example Dependency-Contribution graph showing relationships between contributions and dependencies Figure 1.3: Example dependency graph for a given time period **Peril 2**.: _Developers' contributions to software package ecosystems might be interspersed with bot contributions, e.g., automated dependency updates._ The rise of automation and artificial intelligence has led to much work on the integration of automated scheduling (i.e., bots) into software development workflows [29; 11; 33; 10; 27] to name a few. These bots are designed to perform specific tasks within a software package ecosystem. For example, a bot may be programmed to automatically update dependencies, test code changes, or deploy software to production. As an example, the Google APIs repo-automation-bots project lists bots for automated labelling of issues and pull requests, automated approval of pull requests, and triggering releases.7 Bots perform common maintenance tasks in many software projects and are now commonplace [2; 30; 9; 14]. Especially with bots such as dependabot (automated pull requests to update configurations to reduce the risk of vulnerability threats),8 more and more automation has caused a lot of noise in the contributions between projects. There are also bots for communication and documentation [30; 19; 18]. Footnote 7: [https://github.com/googleapis/repo-automation-bots](https://github.com/googleapis/repo-automation-bots) Footnote 8: [https://github.com/dependabot](https://github.com/dependabot) To be able to draw accurate conclusions about what humans are doing in software package ecosystems, researchers should consider distinguishing between bot and human contributions. It is also important to differentiate this from other contributions [21]. The research community has responded well, with a wide range of techniques and tools to mitigate this peril [13; 12]. **Peril 3**.: _Not all developer activities in software package ecosystems are accessible to the public, e.g., library use in proprietary settings._ Not all developer activities in software package ecosystems are accessible to the public, e.g., when the boundary between open source and industry is blurred [28], which presents a challenge for researchers who aim to study the development process. This is particularly true in proprietary settings where software development is performed behind closed doors or is open source for a limited time period, thus resulting in the artefacts not permanently being publicly available. This can make it difficult to understand the broader ecosystem in which a software project is developed. Proprietary settings may lead to non-standardisation in software development practises. Different software projects may use different management systems and tools, making it difficult to accurately compare and analyse software development activities across various projects. For example, some projects may use communication, documentation, and other management tools not captured on the same platform [23]. For example, some projects might use Bugzilla instead of issues and pull requests for their bug and code review systems, while others may use Discord, Slack channels, or email threads for their communication needs. This lack of standardisation in software development practises presents a challenge for researchers who study the software package ecosystem and understand the development process. To address this issue, researchers should strive to collect data from a diverse set of projects to gain a comprehensive understanding of the software package ecosystem. In addition, researchers may need to adjust their methodologies or data collection techniques to accommodate the different tools and practises used by different software projects. #### Defining Components and their Dependencies **Promise 2**.: _Researchers can access a software package ecosystem's dependency network through package managers and registries, e.g., npm lists the dependencies and dependents for over a million libraries._ With the rise of curated datasets like libraries.io, researchers can now recover and model dependency relations between software units using pre-extracted datasets. Table 1 shows examples of popular package managers mined from the libraries.io dataset in 2020. **Peril 4**.: _Different software package ecosystems define the concept of "dependency" differently, e.g., by allowing or not allowing different versions of a library on the same dependency tree._ Different software package ecosystems have varying definitions of what constitutes a dependency. For example, some ecosystems may allow multiple versions of a library to exist on the same dependency tree, while others may restrict developers to a single version of a library [15]. These restrictions are often based on the pro \begin{table} \begin{tabular}{l l l l l} \hline \hline Package & Programming & Tiobe & Environment & Dependency & Package \\ Ecosystem & Language & Rank & Tree & Archive link \\ \hline PyPI & Python & 2 Python & Flat & pypi.org \\ Maven & Java & 3 JVM & Flat & Maven.org \\ Bower & JavaScript & 7 Node.js & Flat & bower.io \\ Meteor & JavaScript & 7 Node.js & Nested & atmospherejs.com \\ npm & JavaScript & 7 Node.js & Nested (v2) & npmjs.com \\ Packagist & PHP & 8 PHP & Flat & packagist.org \\ Puppet & Ruby & 13 Ruby MRI & Flat & forge.puppet.com \\ RubyGems & Ruby & 13 Ruby MRI & Flat & rubygems.org \\ CRAN & R & 14 RStudio & Flat & cran.r-project.org \\ CPAN & Perl & 15 Perl & Flat & metacpan.org \\ GO & Golang & 20 Go & Flat & pkg.go.dev \\ NutGet & C\#, VB & 5, 6.NET & Flat & nuget.org \\ Anaconda & Python, R, C\# 2, 14, 5 Anaconda & Flat & anaconda.org \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of 13 package managers from libraries.io as ranked by TIOBE in 2020 gramming language being used, as different languages have different approaches to managing dependencies. It is important to consider the restrictions on dependency relationships when studying software package ecosystems, as they can have a major impact on the development process. For example, the ability to use multiple versions of a library on the same dependency tree can greatly simplify the process of updating dependencies and can make it easier to resolve conflicts between libraries. One way to visualise the impact of these restrictions is to compare the difference between a nested dependency tree and a directed dependency tree, as shown in Figure 1.5.9 This distinction is important because it highlights the different ways that a software unit can depend on different versions of the same library. In this example, npm v3 creates the dependency tree based on the installation order, therefore flattening unnecessary nested dependencies (i.e., B v1.0 in cyan). This reduces the complexity of a nested tree by resolving some of the transitive dependencies (nested dependencies). Footnote 9: Taken from [https://npm.github.io/how-npm-works-docs/npm3/how-npm3-works.html](https://npm.github.io/how-npm-works-docs/npm3/how-npm3-works.html) **Peril 5**.: _Developers might declare a dependency to other parts of a software package ecosystem but not use it, e.g., because they failed to update its removal._ It is common for developers to declare dependencies on other parts of the software package ecosystem but not always use them. This can happen for various reasons, such as forgetting to remove the dependency after it is no longer needed. This can pose a challenge for researchers who are trying to extract dependencies from package managers, like those in configuration files, as there may be inconsistencies between the listed dependencies and what is actually being compiled and used by the code. This can lead to a biased understanding of the software package ecosystem and the relationships between software components. To address this issue, there have been numerous efforts to track the actual library dependencies compiled and executed in software systems. These efforts aim to provide a more accurate understanding of the dependencies and the relationships Figure 1.5: Difference between flat and nested dependencies between software components. For example, research has been conducted on the use of dynamic analysis to track compiled dependencies in real time and on the development of tools to automatically detect and track executed dependencies [35; 26; 5]. #### Defining Boundaries and Completeness **Promise 3.**_Researchers can use the boundaries of software package ecosystems to study communities of developers, e.g., developers contributing to and/or benefiting from the npm ecosystem._ Following Promise 2, the emergence of package managers has also led to studies that approximate software communities. Using the libraries.io dataset, researchers were able to study projects that host libraries that use package managers. Researchers have used this dataset to compare different library ecosystems [16; 8; 7]. **Peril 6.**_Package managers do not always represent software package ecosystems, their communities, or their sub-communities, e.g., in cases where multiple package managers exist._ Package managers are a fundamental aspect of software package ecosystems, but do not always fully represent the complex relationships and interactions that occur within a community of developers and users, as shown in Table 1. In some cases, multiple package managers exist for the same programming language, creating a complex landscape of software libraries and dependencies that are not always easily understood. For instance, Bower and Meteor manage npm libraries, which can lead to confusion and overlap in the management of dependencies. Similarly, Java, Scala, Android, and other Java-based open source communities all use the Maven package manager, but each of these communities has its own unique set of libraries, dependencies, and development practises. Researchers should be aware of the limitations of package managers when studying software package ecosystems, and consider the broader context and relationships that exist within these communities. **Peril 7.**_Lack of activity in parts of a software package ecosystem does not necessarily indicate project failure, e.g., when highly depended-upon libraries are feature-complete._ It is important to note that lack of activity in a part of a software package ecosystem does not always mean project failure [6]. In some cases, highly relied-upon libraries that have reached feature-completeness may see little activity, but continue to be used by the software community. However, it is still important to consider the long-term sustainability of these libraries, especially given the rate at which technology and software development pracises change. This has become a topic of interest in recent years, and researchers have explored best pracises for sustaining open source projects and ensuring their continued success [1; 31]. Understanding the factors that contribute to project sustainability is important to ensure the longevity and continued growth of software package ecosystems. **Peril 8**.: _Sampling from a software package ecosystem is challenging since sub-setting might alter the dependency network, e.g., by breaking dependency chains._ Sampling from a package ecosystem is not straightforward, as the sample composition can be significantly affected due to missing dependency links between libraries. For instance, a subset of the ecosystem might alter the dependencies between libraries, leading to the breakdown of the dependency chains. This could lead to an incomplete picture of the software package ecosystem, leading to incorrect conclusions from a study. To minimise this risk, researchers should carefully consider the boundaries of their study and choose the appropriate sampling method based on the research questions and goals. For example, researchers could focus on popular, highly dependent, or risk-vulnerable aspects of the ecosystem as a starting point. For some ecosystems, the number of downloads, GitHub stars, and watchers are other aspects for the researcher to utilise. **Peril 9**.: _Sampling from a software package ecosystem is challenging since the dependency network changes over time, e.g., when dependencies are added, removed, upgraded, or downgraded._ The dynamic nature of package ecosystems and the constant changes to their dependencies can impact the generalisability of the results. Therefore, it is important to also consider the time granularity of the analysis. For example, if the goal is to understand the evolution of dependencies over time, a finer time granularity may be necessary to capture the smaller changes and trends. However, if the goal is to understand the overall structure and relationships within the ecosystem, a coarser time granularity may be sufficient. Based on recent studies [32; 31; 22; 3; 24], a three-month window seems appropriate for some studies. Another level of granularity to consider is the size of the component. For instance, there are cases where a single package may contain more than one repository, especially for large library frameworks. The granularity also depends on the nature of the ecosystem itself. For instance, researchers should understand whether the ecosystem comprises library packages (e.g., PyPI), plugins (e.g., Eclipse), or is a library distribution (e.g., Android). ### Analysing and Visualising the Data **Peril 10**.: _Analysing and visualising entire software package ecosystems is challenging due to their size, e.g., in terms of nodes and edges in the network._ The size of software package ecosystems implies large data sets, which can be overwhelming for tools and algorithms to analyse and display. Therefore, it may be necessary to make choices about the granularity of the data included in the analysis and visualisation. Another alternative is to focus on the most critical parts of the software package ecosystem, such as the high-level structure, highly dependent packages, or parts of the system that pose a risk to security and reliability. The key is to strike a balance between detail and simplicity, providing a meaningful representation of the ecosystem while being able to handle the complexity of its size. ### Application: When to Apply Which Peril We include a disclaimer stating that not all perils are applicable to every mining situation. To demonstrate the practical application of our perils and their mitigation, we present two case studies that involve mining the software package ecosystem. Each case study has a distinct research objective and focusses on a specific dataset to be mined. #### Two Case Studies Table 2 presents the two case studies we have selected for this analysis. The _first case_ involves mining for contributions congruent to dependency updates [32]. In this work, the authors mine GitHub repositories for Pull Requests and Issues that were submitted and merged congruent to dependency updates within the npm ecosystem. The _second case_ involves mining communication data for the Eclipse ecosystem [25]. Although the second case does not mine for dependency relations (i.e., use relations), we show that these perils still apply when mining for other relationships in an ecosystem. Moreover, the second case studies the Eclipse ecosystem, which is a different dataset compared to the more popular GitHub dataset. #### Applying Perils and their Mitigation Strategies Table 3 provides a summary of the perils that can be applied to each of the case studies. We will now go into the details of mitigation strategies based on these perils. For better organisation and understanding, we have grouped the perils according to the four logical processes for mining. **Information to Mine**. The first set of mitigation strategies, which addresses perils 1-3, focusses on planning which information to mine. There are two primary strategies that researchers can employ: 1. Researchers should use research tools and techniques to remove noise and other biases in the dataset, such as bot detection and the handling of multiple identities. This strategy was implemented in both case studies, as contributions and discussions often have the potential to involve bots or developers with multiple identities. 2. Depending on the research goals, researchers should recognise that not all contributions are equal and filter the dataset accordingly. We applied these two strategies to both cases. In the first case, the goal was to capture all congruent contributions, so we filtered out contributions made to libraries without dependencies. Since all npm packages are listed in the registry, Peril 3 (private activities) did not apply. In the second case, we addressed Peril 1 by conducting a qualitative analysis to ensure that the member identities were not duplicated, as Eclipse developers were known to change identities. To mitigate Peril 2, we removed bot responses. For the second case, since all forum data is made public, Peril 3 did not apply. **Defining Dependencies**. The second set of perils (Perils 4-5) is related to dependency relationships between software units, and only the first case study is applicable. To address these perils, researchers should adopt the following strategy: 1. Researchers should not rely solely on listed dependencies in configuration files (e.g., pom.xml, package.json, etc.) as a measure of dependency between two components. Instead, code-centric approaches should be used to validate which libraries are actually depended upon. For example, in the first case, in addition to mining the configuration information, the authors also analysed the similarity of the source code contributions to address Peril 4. Regarding Peril 5, since the study's objective was to investigate changes to the configuration files, the risk of the update not being executed was deemed less important. It is important to note that the second case study did not include dependency analysis and, therefore, these perils did not apply. **Defining Boundaries**. The third set of perils (Perils 6-9) is related to the definition of boundaries and completeness and is relevant for both case studies. To mitigate these perils, we recommend the following strategies: \begin{table} \begin{tabular}{l l c} \hline \hline **Case Study** & **Research Objective** & **Datasets** \\ \hline Wattanakriengkrai _et al._[32] & Explore code contributions between library and client (i.e, use-relations) & libraries.io \\ & & GitHub API \\ Nugroho _et al._[25] & Explore discussion contributions between contributors (i.e., contributions) & Eclipse API \\ \hline \hline \end{tabular} \end{table} Table 1.2. Description of the research objectives and datasets for the case studies Figure 6: Visualisation examples for the two case studies 1. Researchers should recognise that a dormant project does not necessarily mean that it is inactive. Instead, studies can use alternative heuristics, such as the number of dependents and dependencies, as better indicators of a project's importance in the ecosystem. 2. Researchers should not rely solely on the programming language to define sub-communities. Using a common package manager for the programming language is a more effective rule of thumb for distinguishing boundaries. 3. Researchers should avoid random sampling. Instead, sampling should be tailored to the research goals by considering factors such as an appropriate time window or focussing on specific attributes of components (e.g., most dependents, most popular, most contributors). Peril 6 did not apply to any of the case studies. Particularly for the first case, since the goal was to explore the npm package ecosystem, we assumed that the boundaries were clearly defined by the npm registry. Similarly, the second case study used the \begin{table} \begin{tabular}{l r r} \hline \hline **Perils** & case 1 & case 2 \\ & npm & Eclipse \\ \hline **P1**: Developers might use different identifiers when contributing to different parts of a software package ecosystem, e.g., when contributing to different libraries. & \(\boxplus\) & \(\boxplus\) \\ **P2**: Developers’ contributions to software package ecosystems might be interspersed with bot contributions, e.g., automated dependency updates. & \(\boxplus\) & \(\boxplus\) \\ **P3**: Not all developer activities in software package ecosystems are accessible to the public, e.g., library use in proprietary settings. & \(\boxplus\) & \(\boxplus\) \\ **P4**: Different software package ecosystems define the concept of “ dependency” differently, e.g., by allowing or not allowing different versions of a library on the same dependency tree. & \(\boxplus\) & \(\boxplus\) \\ **P5**: Developers might declare a dependency to other parts of a software package ecosystem but not use it, e.g., because they failed to update its removal. & \(\boxplus\) & \(\boxplus\) \\ **P6**: Package managers do not always represent software package ecosystems, their communities, or their sub-communities, e.g., in cases where multiple package managers exist. & \(\boxplus\) & \(\boxplus\) \\ **P7**: Lack of activity in parts of a software package ecosystem does not necessarily indicate project failure, e.g., when highly depended-upon libraries are feature-complete. & \(\boxplus\) & \(\boxplus\) \\ **P8**: Sampling from a software package ecosystem is challenging since sub-setting might alter the dependency network, e.g., by breaking dependency chains. & \(\boxplus\) & \(\boxplus\) \\ **P9**: Sampling from a software package ecosystem is challenging since the dependency network changes over time, e.g., when dependencies are added, removed, upgraded, or downgraded. & \(\boxplus\) & \(\boxplus\) \\ **P10**: Analysing and visualising entire software package ecosystems is challenging due to their size, e.g., in terms of nodes and edges in the network. & \(\boxplus\) & \(\boxplus\) \\ \hline \hline \end{tabular} \end{table} Table 3: Application of each peril to the case studies generic Eclipse platform as the boundary. Peril 7 was applied to the npm study, while Peril 8 was applied to both case studies. As a result, the two cases conducted a qualitative analysis of the dataset to gain deeper insights. In the first case study, a three-month time window was created to capture dependencies. For the second case study, forum contributors were sampled into three groups (i.e., junior, member, or senior) according to the sliding window of their contributions. **Visualisation**. The final peril (Peril 10) relates to visualisation, which can be challenging due to the vast size and complexity of software ecosystems. As it is not feasible to visualise every aspect of an ecosystem simultaneously, a focused approach is necessary. A mitigation strategy is to select specific attributes of the ecosystem (e.g., the most dependent, most popular, and most contributions) that align with the research needs and objectives. Figure 6 shows two cases where visualizations are employed to gain insights, especially for large datasets. In the first figure (a), we visualize the distributions of the data set and applied the appropriate statistical tests, along with the effect size, to test our hypotheses and answer research questions. In the second example (b), although not directly related to package ecosystems, the authors utilized a topological visualization [20] to gain insights on the over 800,000 forum threads of discussions. ### Chapter Summary In this chapter, we explore the various aspects of mining information from the software package ecosystem, presenting three promises and ten perils that researchers should be aware of when undertaking such tasks. The chapter is structured around four key processes for mining: 1) Planning what Information to Mine, 2) Defining Components and their Dependencies, 3) Defining Boundaries and Completeness, and 4) Analysing and Visualising the Data. To help new and experienced researchers navigate these challenges, we introduced the SUG model, which can serve as a valuable tool to minimise threats to validity. Although some perils may be more relevant to specific research objectives, our aim is to equip researchers with the knowledge and resources needed to confidently gather and integrate software package ecosystem data into their work.
2307.10308
Toward a comprehensive system for constructing compartmental epidemic models
Compartmental models are valuable tools for investigating infectious diseases. Researchers building such models typically begin with a simple structure where compartments correspond to individuals with different epidemiological statuses, e.g., the classic SIR model which splits the population into susceptible, infected, and recovered compartments. However, as more information about a specific pathogen is discovered, or as a means to investigate the effects of heterogeneities, it becomes useful to stratify models further -- for example by age, geographic location, or pathogen strain. The operation of constructing stratified compartmental models from a pair of simpler models resembles the Cartesian product used in graph theory, but several key differences complicate matters. In this article we give explicit mathematical definitions for several so-called ``model products'' and provide examples where each is suitable. We also provide examples of model stratification where no existing model product will generate the desired result.
Darren Flynn-Primrose, Steven C. Walker, Michael Li, Benjamin M. Bolker, David J. D. Earn, Jonathan Dushoff
2023-07-19T00:18:51Z
http://arxiv.org/abs/2307.10308v1
# Toward a comprehensive system for constructing compartmental epidemic models ###### Abstract Compartmental models are valuable tools for investigating infectious diseases. Researchers building such models typically begin with a simple structure where compartments correspond to individuals with different epidemiological statuses, e.g., the classic SIR model which splits the population into susceptible, infected, and recovered compartments. However, as more information about a specific pathogen is discovered, or as a means to investigate the effects of heterogeneities, it becomes useful to stratify models further -- for example by age, geographic location, or pathogen strain. The operation of constructing stratified compartmental models from a pair of simpler models resembles the Cartesian product used in graph theory, but several key differences complicate matters. In this article we give explicit mathematical definitions for several so-called "model products" and provide examples where each is suitable. We also provide examples of model stratification where no existing model product will generate the desired result. epidemiology, transmission dynamics, compartmental model, graph theory, Cartesian product ## 1 Introduction The COVID-19 pandemic has reemphasized the importance of compartmental models (Abou-Ismail, 2020; Massonis et al., 2021; Adam, 2020; Currie et al., 2020; Lofgren et al., 2014; McBryde et al., 2020; Enserink and Kupferschmidt, 2020) and has resulted in a flood of new compartmental models (e.g., Friston et al. (2020); Fields et al. (2021); Chang and Liu (2022); Lavielle et al. (2020); Balabdaoui and Mohr (2020); Leontitsis et al. (2021)). This abundance of new model variants is to be expected given the number of public health modelers seeking to integrate the current scientific understanding of emerging infectious diseases in a way that will have policy impact. Modelers must be able to build models rapidly to explore scenarios and generate high quality forecasts; public health recommendations have the biggest impact if they can be acted on promptly. However, the speed at which modelers can develop new models typically trades off with model quality. It is therefore useful to develop tools that make it easier for modelers to build high-quality models more quickly. One way to address this speed-quality trade-off is to build infectious disease models incrementally. Information is scarce early in an epidemic, and so early models should be simple to reflect ignorance. As epidemics progress, more information is gathered and policy choices require fast input from scientists. Public health modelers will then need to quickly add complexity to their models if they are to be relevant to policy. Proceeding in this way can eventually result in extremely complex models, much of whose complexity is no longer relevant. Complexity makes it more difficult to add new features to the model when they become necessary. Therefore, modelers need tools that make it easier to flexibly add and remove model features. Savageau (1988) and Voit (1988, 1990) made an early attempt to create such a toolbox by recasting the underlying differential equations of a model into a canonical form they call an "S-model". Unfortunately this effort focused on the model's differential equations rather than its graphical structure, thus making it unsuitable for less mathematically inclined modelers. It does not seem to have been widely adopted. Friston et al. (2020) describe how the state space of a complex epidemiological model can be constructed from the product of different latent state dimensions (their Figure 1: infection status, clinical status, testing status, and location), but the definition of which compartments are connected, and the rates of flow between them, is left up to the modeller. A recent and promising effort to formalize the construction of compartmental models employs the language of category theory (Fong and Spivak, 2018; Libkind et al., 2022, 2021; Baez et al., 2022; Baez and Pollard, 2017). This powerful approach addresses many of the concepts we discuss here; however, at its current stage of development it requires considerable knowledge of advanced mathematics to use effectively. An ongoing project to implement the category theoretic approach in the Julia language can be found at [https://github.com/AlgebraicJulia/AlgebraicPetri.jl](https://github.com/AlgebraicJulia/AlgebraicPetri.jl)(Halter et al., 2022). Worden and Porco (2017) use the relatively simple language of graph theory to describe common methods of combining compartmental models. The current paper is a result of the our efforts to implement the products described by Worden and Porco in software. At a high level, we view model "multiplication" as a three-step operation: **Procedure for Model Multiplication** 1. Generate the vertices of the product model by combining the vertices of the factor models. This typically means taking the Cartesian product of the vertices of factor models. In many cases, we will want only a subset of the Cartesian product in the final model (e.g., some combinations are physically or biologically impossible). 2. Generate the edges of the product model. Again, we will typically take the Cartesian product of edges in each factor model with the vertices in the other. Some transitions may be disallowed, in which case we would drop those edges from the product or set the flows across them to zero. In other cases, we may want to add edges to the product to allow state changes in multiple strata to occur simultaneously. 3. Resolve ambiguities in how flow functions are generalized to accommodate the presence of additional strata. 4. Set parameters for the product model The combination of the first two steps is just a graph-theoretic Cartesian product. Recognizing the need to adjust flow rates (the third step above) is Worden and Porco's contribution. This requirement can arise in a number of ways; for example, when stratifying a standard SIR model, we need to decide whether the susceptible population of one stratum can be infected by the infectious population of another stratum. If the stratification is based on age groups then it is reasonable to let different strata infect one another, i.e. old people infecting young people and _vice versa_. On the other hand, if the populations in different strata are isolated from each other it may be preferable to prohibit inter-stratum infection. Our approach follows Worden and Porco's; when computing the magnitude of a flow between compartments we separately compute the contribution from each individual stratum and then sum the resulting quantities to find the total magnitude of the flow. There are other possible approaches which we will revisit in Section 5.1, but this additive approach appears to be the most common. When stated in these simple terms it is easy to imagine that we have solved the problem of combining models, but the devil is in the details. It is one thing to develop a framework for combining models and their various components but it is a much greater challenge to determine precisely which models can (and cannot) be expressed within that framework. Given the diversity of the audience potentially interested in this material we have attempted to strike a healthy balance between mathematical rigour and intuitive explanation. In Section 2 we introduce some mathematical terminology that will be used in the remaining sections to rigorously define a number of distinct model products. Sections 3 and 4 share similar structure, in the first subsection we focus on mathematical details and in the second subsection we provide examples which we hope will communicate our main points in a more intuitive way. Section 3 restates two products introduced by Worden and Porco (2017), Section 4 describes a generalization of the products from the previous section. Section 5 has three subsections each dedicated to different complexity that can make models challenging to construct using products alone. We conclude with Section 6 where we summarize our primary results and discuss the potential for further investigation. ## 2 Defining Compartmental Models Compartmental models closely resemble directed graphs, which are graphs where the connections between nodes have a specific direction (Roberts and Tesman, 2009); in this analogy the directional connections correspond to flows between compartments. The relationship between directed graphs and compartmental models has been studied before, for example by Walter and Contreras (1999) -- however, such investigations have been largely limited to the case where the magnitudes of flows between compartments are governed by linear equations. While certainly an important special case, this framework is insufficient for a disease-transmission model. In fact, a central point we hope to communicate in this paper is that, while constructing the nodes and edges of compartmental models is straightforward, making choices about how to calculate flows between compartments is a challenge with considerable nuance. We consider a compartmental model with \(n\) compartments. Flows can go from one compartment to another, from outside the system into a compartment (e.g., births), or from a compartment to the outside (e.g., deaths). There are thus up to \(n(n+1)\) possible flows, although most of these will be missing in any particular model. Each flow that is present will be described by a function that may depend on the state of any of the \(n\) compartments and on any of \(m\) (possibly time-varying) parameters. Flows out of a compartment can be either _per capita_ flows, in which case their value is multiplied by the current state value of that compartment, or they may be _absolute_ flows where no further computation is required. Historically much of the work done investigating compartmental models through a mathematical lens has been focused on the case where flow rates between compartments are restricted to linear equations (i.e., flows strictly proportional to the current state of the originating compartment). Formally, we let \(\mathcal{F}(n,m)\) denote the set of allowable flow-rate functions \(f:\mathbb{R}_{\geq 0}^{n}\times\mathbb{R}^{m}\to\mathbb{R}_{\geq 0}\); we will often abbreviate this as \(\mathcal{F}\). More generally, we write the dimension of a model _state space_ as \(n=\mathfrak{s}(f)\) and of the _parameter space_ as \(m=\mathfrak{p}(f)\). It is also frequently convenient to write functions in the form \(f(\vec{\beta},\vec{x})\) where \(\vec{\beta}\) is a vector in \(f\)'s parameter space and \(\vec{x}\) is a vector in \(f\)'s state space. To move from flow-rate functions to actual flows, it is convenient to define a family of _filtering functions_ which select specific subsets of the state space. We have two slightly different uses for these filtering functions: (1) we use them to select the "from" compartment of per-capita flows and (2) they will be used later to select compartments of a model product that belong to particular strata, for example selecting all compartments that correspond to a specific age group or location. We denote these families with the symbol \(\Upsilon\); for example \(\Upsilon(3,2)\) is the set of all simple projections from \(\mathbb{R}^{3}\) to \(\mathbb{R}^{2}\) (in this case, the three ways of choosing two variables from a list of three). We also include in all such families the projection onto unity to allow for the possibility of flows specified in absolute rather than per-capita terms. A filtering function being used to select a single "from" compartment for a per-capita flow will be denoted with \(\tau^{\text{\tiny fun}}\); one being used to select all compartments from a single stratum of the model, which will typically then be used as arguments to a flow function, will be denoted with \(\tau^{\text{\tiny fun}}\). We now have two families of functions. The first family is a set of flow rate functions, \(\mathcal{F}\), which we will use to describe either the per-capita or absolute rate of flow between compartments. The second family is a set of filtering functions, \(\Upsilon\), which will be used to select the population of the "from" compartment for a given per-capita flow from all compartments in a model (in the case of absolute flow rates this function will map all inputs to 1). With these definitions we can now define our third and final family of functions which will be denoted by \(\mathfrak{F}\) and is used to describe the absolute rate of flow between compartments. Every function in \(\mathfrak{F}\) can be expressed as a product of one function in \(\Upsilon\) and another in \(\mathcal{F}\). That is, if \(f\) is a member of \(\mathfrak{F}\) then there must be a \(\tau^{\text{\tiny fun}}\) in \(\Upsilon\) and an \(f_{r}\) in \(\mathcal{F}\) such that \(f=\tau^{\text{\tiny fun}}\cdot f_{r}\). Since \(\Upsilon\) contains functions that select the population of a single compartment as well as the projection of state space onto the number \(1\), \(\mathfrak{F}\) will contain functions that describe the magnitude of a flow both in per-capita terms and in absolute terms. In the former case we will have \(f=\tau^{\text{\tiny fun}}\cdot f_{r}\) where \(\tau^{\text{\tiny fun}}\) corresponds to the population of the from compartment; in the latter case we will have the same equation but \(\tau^{\text{\tiny fun}}\) will reduce to the number \(1\), leaving \(f_{r}\) as the absolute flow rate between compartments. Below we give a more technical definition. **Definition 1** (Flow Functions).: Given \(n,m\in\mathbb{N}\) we define \(\mathfrak{F}(n,m)\) to be the set of all functions \(f:\mathbb{R}^{n}_{\geq 0}\times\mathbb{R}^{m}\rightarrow\mathbb{R}_{\geq 0}\) such that for some \(f_{r}\in\mathcal{F}(n,m)\) and \(\tau^{\text{\tiny fun}}\in\Upsilon(n,1)\), for every \(\vec{\beta}\in\mathbb{R}^{m}\) and \(\vec{x}\in\mathbb{R}^{n}_{\geq 0}\) \[f(\vec{\beta},\vec{x})=\tau^{\text{\tiny fun}}(\vec{x})\cdot f_{r}(\vec{\beta },\vec{x}) \tag{1}\] **Remark 1** (Notation, Ordered sets and Labels).: Where sets are labeled we assume their labels correspond to the order of the set (i.e. the first element in the set is labeled "1" and the \(i^{\text{th}}\) element is labeled "i"). Where sets are associated with each other we assume their labels (and/or order) correspond to the association map (i.e. if \(A\) and \(B\) are associated then \(a_{i}\in A\) maps to \(b_{i}\in B\) and vice versa). A closely related concept to compartmental models is that of directed graphs (aka **digraphs**). Intuitively, digraphs are just graphs where the connections between vertices have a specific direction associated with them. A more technical definition is that a digraph consists of two sets: a set of vertices \(V\) and a set of ordered pairs of vertices \(E\) that defines directed connections between vertices. As we will see below, compartmental models are digraphs with an additional set of functions that regulate the magnitude of the flow across edges. Later when we discuss model products it will be helpful to recognize that the digraph underlying a product model is the Cartesian product of the digraphs underlying the factor models. First, though, we define compartmental models in the language developed thus far. **Definition 2** (Compartmental Models).: Let \(D_{0}=(V,E)\) be a labeled, finite digraph and let \(n=|V|\). Suppose there is a set \(F\subset\mathfrak{F}(n)\) associated with \(E\), Then we call \(D=(D_{0},F)\) a **compartmental model**. We call \(\mathbb{R}^{n}_{\geq 0}\) the **state space of \(D\)** and if \(m=\sum\limits_{f\in F}\mathfrak{p}(f)\), then we call \(\mathbb{R}^{m}\) the **parameter space of \(D\)**. ### Parameter space of factor and product models In Definition 2 and in much of the rest of this paper we assume the product model parameters are entirely independent of each other. So if one factor model has \(k_{1}\) compartments and \(l_{1}\) parameters and the other factor model has \(k_{2}\) compartments and \(l_{2}\) parameters then every the product model will have a minimum of \(k_{1}l_{2}+k_{2}l_{1}\) parameters (when no interaction between strata is allowed) and a maximum of \(k_{1}^{2}l_{2}+k_{2}^{2}l_{1}\) parameters (when all strata are allowed to interact with each other). This approach has the advantage of preserving generality. In practice, parameters in the product model are often related to parameters in the original model factors in simple mechanistic ways. However, there is an enormous range of possible relationships between the parameters of the factor models and the parameters of their product. Some parameters, such as those describing intrinsic properties of a pathogen, may be constant across all strata of a product model. Others such as recovery time my be constant with respect to some dimensions of stratification (e.g. location) but variable with respect to others (e.g. age). In other cases the relationships may depend on the degree of available data -- we may know that recovery time varies with age in reality, but choose to treat is as constant for modeling purposes. It does not appear possible, with just the information present in the factor models, to deduce the desired relationship between factor model and product model parameters. Thus we choose for now to default to the most general possible case and trust that where more convenient relationships between parameters exist modelers will construct appropriate mappings for themselves. That said, we will discuss a few common scenarios here for the purpose of illustration. A parameter may be constant across multiple strata of a product model. This is common for parameters that describe intrinsic properties of the pathogen being modeled, where the strata represent variation among hosts. A related case occurs when the value of a parameter at each stratum in the product model is related to the factor model version of the parameter by a simple scalar. For example a pathogen may have an average recovery time across the entire population and the specific recovery time for different age groups could be specified as a percentage of the overall average. In both these cases if we let \(\alpha\in\mathbb{R}\) denote the parameter value in the factor model and \(\vec{\beta}\in\mathbb{R}^{k}\) denote the values of the derived parameters at \(k\) different stratum in the product model then we can write \(\vec{\beta}=\alpha\vec{w}\) where \(\vec{w}\in\mathbb{R}^{k}\) is a vector of weights. Multiple parameters in a factor model may be related to each other, as when a single compartment has multiple flows emanating from it. Consider the case of a person who has been exposed to a pathogen. Some models will allow them several possible fates: for example they could go on to be asymptomatic or have mild or severe symptoms. In that case the factor model in question will have three parameters (\(\alpha_{1},\alpha_{2},\alpha_{3}\in(0,1)\)) related by the fact that their sum must be one (\(\alpha_{1}+\alpha_{2}+\alpha_{3}=1)\). In other words, if \(\alpha_{1}\) and \(\alpha_{2}\) are given then \(\alpha_{3}=1-\alpha_{1}-\alpha_{2}\). When included in a product model every stratum of the new model will have three parameters derived from the original \(\alpha\) values; however each stratum may have different values for those parameters. For example people in different age groups may be more or less likely to experience severe, mild, or no symptoms. Mathematically, if we say \(\vec{\beta_{1}},\vec{\beta_{2}},\vec{\beta_{3}}\) denote the parameters at every stratum of the product model derived from \(\alpha_{1},\alpha_{2},\alpha_{3}\) respectively, then we can write \(\vec{\beta_{1}}=\alpha_{1}\vec{w_{1}}\), \(\vec{\beta_{2}}=\alpha_{2}\vec{w_{2}}\), and \(\vec{\beta_{3}}=\vec{1}-\vec{\beta_{1}}-\vec{\beta_{2}}\). A more complex case occurs when different strata of a product model interact. For example consider a simple SI model, that is, a model with only susceptible and infected compartments. In the standard formulation the force of infection of such a model is given by \(\Lambda=\frac{\beta I}{N}\), so the total number of newly infected people is \(S\cdot\Lambda=\frac{\beta SI}{N}\). Suppose we now stratify this model to represent a scenario where each person lives in one of three different locations but may come in contact with people living in the other locations. Our model would then have three infected compartments ( i.e. \(\vec{I}=(I_{1},I_{2},I_{3})\)) and three susceptible compartments (i.e. \(\vec{S}=(S_{1},S_{2},S_{3})\); the force of infection would be generalized to a vector with three entries, one for each location (i.e. \(\vec{\Lambda}=(\lambda_{1},\lambda_{2},\lambda_{3})\)). In the most general case, where the force of infection does not take the standard form given above, each \(\lambda_{i}\) would be expressed as _some_ function of the infected populations as well as a vector of parameters \(\vec{\beta_{i}}\) which gives some information about how people at different locations interact with each other. Thus we would be left with \(\vec{\Lambda}=(f(\vec{\beta_{1}},\vec{I}),f(\vec{\beta_{2}},\vec{I}),f(\vec{ \beta_{3}},\vec{I}))\). In the standard formulation the force of infection is a linear equation with respect to the population of infected compartments, so we can be more specific. The factor model parameter \(\beta\) is generalized to nine new, presumed unrelated, parameters which can be written as a \(3\times 3\) matrix \[B=\begin{pmatrix}\beta_{11}&\beta_{12}&\beta_{13}\\ \beta_{21}&\beta_{22}&\beta_{23}\\ \beta_{31}&\beta_{32}&\beta_{33}\end{pmatrix}\] with the end result that we can write the expression \[\vec{\Lambda}=\frac{1}{N}B\vec{I}\] While it still preserves a degree of generality, this approach unfortunately expands the model's parameter space significantly. In practice the likelihood of a person residing in one location coming into contact with a person in a different location is not arbitrary but can reasonably be expected to vary according to the distance between the two locations. So if \(D\) is a three-by-three matrix where the \(d_{i,j}\) entry denotes the distance between location \(i\) and location \(j\) then it is possible to construct a contact matrix \(C\) that assigns numerical values to the likelihood of contact between people at two locations based on the known distance between them. One way to do this would be to let every entry \(c_{i,j}\) in \(C\) be given by \[c_{i,j}=e^{-\gamma d_{i,j}}\] where \(\gamma\in\mathbb{R}_{\geq 0}\) is some fixed parameter. In this way we can write \[\vec{\Lambda}=\frac{\beta}{N}C\vec{I}\] which preserves the original meaning of the parameter \(\beta\) and only introduces one new parameter (\(\gamma\)) instead of nine. Notice that it is possible to translate between the two approaches using the map \[B=\beta C=\beta\begin{pmatrix}e^{-\gamma d_{1,1}}&e^{-\gamma d_{1,2}}&e^{- \gamma d_{1,3}}\\ e^{-\gamma d_{2,1}}&e^{-\gamma d_{2,2}}&e^{-\gamma d_{2,3}}\\ e^{-\gamma d_{3,1}}&e^{-\gamma d_{3,2}}&e^{-\gamma d_{3,3}}\end{pmatrix}\] There are of course other ways to handle this kind of parameter simplification (e.g. Anderson and May (1985, 1992); Grenfell and Anderson (1985)). Most situations will allow for a parameter space mapping of this kind that relates the default parameter space generated by model products to a smaller parameter space dictated by the specific data available to the modeler. However, as we have tried to make clear in this subsection, the sheer variety of parameter space mappings that could be useful in some situation is so great that for now we will maintain generality by treating all parameters in the product model as independent. In our general parameterization the \(l_{i}\) parameters in the product model that come from the \(i\)'th factor model (of two) can be organized in a \(l_{i}\times k_{2-i}\times k_{2-i}\) order three tensor, \(B^{(i)}\). The modeler will have some set of parameters known to them which we call \(\vec{\theta}\) and will be able to compose, from a library of standard relations, a mapping \(g\) so that \[B^{(i)}_{hij}=g^{(i)}_{hij}(\vec{\theta})\] ## 3 Naive and Modified Products Worden and Porco (2017) describe several distinct model products, two of which are particularly relevant to us. Here, we would like to restate these two definitions in a mathematically rigorous way. These formal definitions provide a foundation for us to define a new, generalized version of a model product in Section 4 and have been valuable in our efforts to automate the process of model stratification. However, we will begin with a description of the Cartesian product of digraphs as they are a significant component of model products; treating them separately will simplify our later definitions. **Definition 3** (Cartesian Product of Digraphs).: Let \(D^{a}=(V^{a},E^{a})\) and \(D^{b}=(V^{b},E^{b})\) be digraphs with vertices \(V^{\bullet}\) and edges \(E^{\bullet}\). Then the **Cartesian Product** of \(D^{a}\) and \(D^{b}\) (i.e. \(D^{a}\times D^{b}\)) can be written as \(D=(V,E)\) where \(V=V^{a}\times V^{b}\) and \(E\) is the union of two sets \(E^{c}\) and \(E^{d}\) which are given by \[E^{c}=\{((w,x),(y,x))\in V\times V|(w,y)\in E^{a}\} \tag{2}\] \[E^{d}=\left\{((w,x),(w,z))\in V\times V|(x,z)\in E^{b}\right\} \tag{3}\] respectively. It is not always the case that the digraph underlying a product model is the Cartesian product of the digraphs underlying the factor models, one example is the so-called "strong product" defined in Worden and Porco (2017) which includes additional edges that are not present in the Cartesian product. Alternatively, as we shall see in Section 5.3, there are some cases where the digraph underlying the product model is a proper subset of the Cartesian product of the factor models. This is particularly the case when combining models with different pathogen strains while disallowing the possibility of being infected by multiple strains simultaneously. However we believe the practical applications of these non-Cartesian model products are relatively limited; in this article we focus on model products where the related digraph product is Cartesian. Worden and Porco (2017) discuss two Cartesian-like products; since the underlying digraphs of these products are identical, the only difference between them is the set of flow functions in the product model (with respect to the procedure for model multiplication they agree on the first two steps and only differ on the third). Notice that in Definition 3 the edge set of the product digraph is the union of two other sets \(E^{c}\) and \(E^{d}\). The set of flow functions of a model product \(F\) will be formed by a similar union of sets \(F^{c}\) and \(F^{d}\), where \(F^{c}\) contains the flow functions related to edges in \(E^{c}\) and \(F^{d}\) contains those related to \(F^{d}\). **Definition 4** (Naive and Modified Products).: Suppose we have two compartmental models \(D^{a}=(V^{a},E^{a},F^{a})\) and \(D^{b}=(V^{b},E^{b},F^{b})\). Let \(V\), \(E^{c}\), \(E^{d}\), and \(E\) be as in Definition 3. Since \(E\) is the union of \(E^{c}\) and \(E^{d}\) every edge \(e\in E\) is associated with an edge in one factor model and a vertex in the other; we use \(p(e)\) to denote the associated factor model edge and \(s(e)\) to denote the associated factor model vertex. For example if \(e\in E^{c}\), then \(p(e)=(w,y)\in E^{a}\) and \(s(e)=x\in V^{b}\) (Diagram 1). By Definition 2 for every \(e\in E^{a}\) there are functions \(f^{a}_{e}\in F^{a}\) and \(f_{r,e}\in\mathcal{F}(|V^{a}|)\), as well as a filtering function \(\tau^{\text{\tiny{nm}}}_{e}\in\Upsilon(|V^{a}|,1)\) such that \[f^{a}_{e}(\vec{\beta},\vec{x})=\tau^{\text{\tiny{nm}}}_{e}(\vec{x})\cdot f_{r,e}(\vec{\beta},\vec{x}). \tag{4}\] Let \(\tau^{\text{\tiny{nm}}}_{e}\) be the filtering function from \(V\) to \(V^{a}\times s(e)\). For every \(e\in E^{c}\) and \(\vec{x}\in\mathbb{R}^{|V|}_{\geq 0}\) we define \(f^{1}_{e}:\mathbb{R}^{\text{p}(f^{(e)}_{p(e)})}\times\mathbb{R}^{|V|}_{\geq 0 }\rightarrow\mathbb{R}_{\geq 0}\) by \[f^{1}_{e}(\vec{\beta},\vec{x})=\tau^{\text{\tiny{nm}}}_{p(e)}(\tau^{\text{ \tiny{nm}}}_{e}(\vec{x}))\cdot f_{r,p(e)}(\vec{\beta},\tau^{\text{\tiny{nm}}} _{e}(\vec{x})) \tag{5}\] and we define \(F^{1,c}=\bigcup\limits_{e\in E^{c}}f^{1}_{e}\). A symmetric definition can be given for \(e\in E^{d}\) yielding \(F^{1,d}=\bigcup\limits_{e\in E^{d}}f^{1}_{e}\). We define \(F^{1}=F^{1,c}\cup F^{1,d}\) and we say \(D^{1}=(V,E,F^{1})\). We define \[D^{a}\boxminus D^{b}=D^{1} \tag{6}\] We call **the naive product**. Alternatively, we can for every \(v^{b}\in V^{b}\) let \(\tau^{\text{\tiny{nm}}}_{v^{b}}\) be the filtering function from \(V\) to \(V^{a}\times v^{b}\). For every \(e\in E^{c}\) and \(\vec{x}\in\mathbb{R}^{|V|}_{\geq 0}\) we define \(f^{2}_{e}:\mathbb{R}^{\text{p}(f^{a}_{p(e)})\times|V^{b}|}\times\mathbb{R}^{|V |}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) by \[f_{e}(\vec{\beta},\vec{x})=\tau^{\text{\tiny{nm}}}_{p(e)}(\tau^{\text{\tiny{nm }}}_{s(e)}(\vec{x}))\sum\limits_{v^{b}\in V^{b}}f_{r,p(e)}(\vec{\beta}_{v^{b} },\tau^{\text{\tiny{nm}}}_{v^{b}}(\vec{x})) \tag{7}\] where we have used the convention that \(\vec{\beta}\in\mathbb{R}^{\text{p}(f^{a}_{p(e)})\times|V^{b}|}\) can be expressed as \(\vec{\beta}=(\vec{\beta}^{*}_{v^{b}_{1}},\dots,\vec{\beta}^{*}_{v^{b}_{|V^{b} |}})\) and for every \(i\in\mathbb{N}_{|V^{b}|}\), \(\vec{\beta}^{*}_{v^{b}_{i}}\in\mathbb{R}^{\text{p}(f^{a}_{p(e)})}\). We define \(F^{2,c}=\bigcup\limits_{e\in E^{c}}f^{2}_{e}\). A symmetric definition can be given for \(e\in E^{d}\) yielding \(F^{2,d}=\bigcup\limits_{e\in E^{d}}f^{2}_{e}\). We define \(F^{2}=F^{2,c}\cup F^{2,d}\) and we say \(D^{2}=(V,E,F^{2})\). We define \[D^{a}\Box D^{b}=D^{2}. \tag{8}\] We call **the modified product**. Given two models, their naive and modified products will have identical vertex and edge sets. In fact, the only difference between the two is their flow functions. ### Example Naive and Modified Products We now describe specific examples of naive and modified products. We will note here that in our example models we have erred on the side of simplicity. We occasionally omit minor details which, while important for the actual job of modeling, are unnecessary and potentially distracting for the purpose of clearly communicating our point. To illustrate the difference between naive and modified products, consider the two compartmental models in Figure 2. The top model tracks the epidemiological status of a population of susceptible, infected, and recovered individuals, and can be written in our notation as \(G=(V^{g},E^{g},F^{g})\) where \(V^{g}=\{S,I,R\}\), \(E^{g}=\{(S,I),(I,R)\}\), and \(F^{g}=\{S\frac{\beta I}{S+I+R},I\gamma\}\). The bottom model divides the population into three age groups, young, medium, and old, and can be written as \(H=(V^{h},E^{h},F^{h})\) where \(V^{h}=\{Y,M,O\}\), \(E^{h}=\{(Y,M),(M,O)\}\), and \(F^{h}=\{\frac{Y}{\mu},\frac{M}{\mu}\}\), where \(\mu\) is the expected residency time for people in the young and medium age groups. We can then compute both \(G\square H=M\) (Figure 4) and \(G\,\square\,H=N\) (Figure 3). Note that \(V^{m}=V^{n}\) and \(E^{m}=E^{n}\); the only difference between \(M\) and \(N\) is the difference between \(F^{m}\) and \(F^{n}\). So \[V^{m}=V^{n}=\{SY,SM,SO,IY,IM,IO,RY,RM,RO\} \tag{9}\] \[E^{m}=E^{n}=\{(SY,IY),(IY,RY),(SM,IM),(IM,RM),(SO,IO),(IO,RO),\\ (SY,SM),(SM,SO),(IY,IM),(IM,IO),(RY,RM),(RM,RO)\} \tag{10}\] Figure 1: A Susceptible-Infected model combined with a Unvaccinated-Vaccinated model. Every edge in the product model maps to an edge in one of the factor models (via the mapping \(p\)) and a vertex in the other factor model (via the mapping “s”.) \[F^{m}=\left\{SY\left(\frac{\beta_{1}IY}{SY+IY+RY}+\frac{\beta_{2}IM} {SM+IM+RM}+\frac{\beta_{3}IO}{SO+IO+RO}\right),IY\gamma_{1},\right.\\ \left.SM\left(\frac{\beta_{1}IY}{SY+IY+RY}+\frac{\beta_{2}IM}{SM+ IM+RM}+\frac{\beta_{3}IO}{SO+IO+RO}\right),IM\gamma_{2},\right.\\ \left.SO\left(\frac{\beta_{1}IY}{SY+IY+RY}+\frac{\beta_{2}IM}{ SM+IM+RM}+\frac{\beta_{3}IO}{SO+IO+RO}\right),IO\gamma_{3},\right.\\ \left.\frac{SY}{\mu},\frac{IM}{\mu},\frac{IM}{\mu},\frac{RY}{\mu },\frac{RM}{\mu}\right\} \tag{11}\] \[F^{n}=\left\{SY\frac{\beta_{1}IY}{SY+IY+RY},IY\gamma_{1},SM\frac {\beta_{2}IM}{SM+IM+RM},IM\gamma_{2},SO\frac{\beta_{3}IO}{SO+IO+RO},IO\gamma_ {3},\right.\\ \left.\frac{SY}{\mu},\frac{SM}{\mu},\frac{IY}{\mu},\frac{IM}{\mu },\frac{RY}{\mu},\frac{RM}{\mu}\right\} \tag{12}\] Figure 2: A standard SIR model and a simple age stratification model. Orange shading denotes the infectious compartment, while blue shading denotes non-infectious compartments. Figure 3: The naive product of the two models from Figure 2. Blue denotes non-infectious compartments, yellow/orange/red denote infectious compartments. The force of infection is only influenced by the infected population within the same age stratum. In this example, people of different age groups have no contact (or very limited contact) with each other. The term "naive product" is not pejorative; in this specific example the modified product is likely to be preferred because people of all ages commonly interact with each other. In some scenarios, however, the naive product would be preferred. In the case of spatial stratification, for example, one might want to use the naive, the modified, or another alternative depending on the specifics of the epidemiological system. At first glance the naive product seems promising because it incorporates the idea that people at different physical locations cannot interact and so do not infect one another. This, however, assumes that the model in question simulates movement explicitly which is not always the case. In product models that do simulate movement explicitly (e.g. Mohammadi et al., 2023), the flows between different locations are included in the original factor model describing spatial structure. Other models, such as Dietz and Sattenspiel (1995), model movement implicitly. In this sort of model a person's location stratum might determine where they live but the possibility of their movement to another location is represented as a contact rate between people in their home stratum and the stratum they might visit. In this case the modified product would seem the most appropriate. However, the situation can be even more complex; for example, depending on the scales involved, it may be beneficial to allow contact between people who are either in the same location _or_ in neighbouring locations, for some appropriate definition of a neighbourhood. In this case we want to go beyond the naive or modified products to some kind of _generalized product_. Figure 4: The modified product of the two models from Figure 2. Unlike in Figure 3, individuals make epidemiological contacts across age strata, so the force of infection for each age stratum is influenced by the infected population in all age strata. ## 4 Generalized Product The naive product restricts people in each stratum to interacting only with other people in the same stratum; the modified product allows people in any strata to interact. We propose a new product that allows for people in each stratum to interact with people in an arbitrary subset of the other strata. This allows the creation of a model where people at a given location can interact at the same location or neighboring locations. The following definition is very similar to Definition 4; however unlike Equation 7, where the sum is over all vertices in a factor model, the sum in this definition will only include a subset of a factor model's vertices. **Definition 5** (Generalized Product).: Let \(D^{a}=(V^{a},E^{a},F^{a})\) and \(D^{b}=(V^{b},E^{b},F^{b})\) be two compartmental models and let \(V\), \(E\), \(E^{c}\), and \(E^{d}\) be as in Definition 3. As in Definition 2 we know that every edge \(e\in E\) is associated with an edge in one factor model and a vertex in the other; we use \(p(e)\) to denote the associated factor model edge and \(s(e)\) to denote the associated factor model vertex. For every \(e\in E^{a}\) there are functions \(f_{e}^{a}\in F^{a}\) and \(f_{r,e}\in\mathcal{F}(|V^{a}|)\), as well as a filtering function \(\tau_{e}^{\text{\tiny{from}}}\in\Upsilon(|V^{a}|,1)\) such that \[f_{e}^{a}(\vec{\beta},\vec{x})=\tau_{e}^{\text{\tiny{from}}}(\vec{x})\cdot f_ {r,e}(\vec{\beta},\vec{x}). \tag{13}\] For every \(v^{b}\in V^{b}\) let \(\tau_{v^{b}}^{\text{\tiny{as}}}\) be the filtering function from \(V\) to \(V^{a}\times v^{b}\) and let \(V_{vb}\subset V^{b}\) denote those vertices in \(V^{b}\) that belong to strata that interact with the \(V^{a}\times v^{b}\) stratum. For every \(e\in E^{c}\) and \(\vec{x}\in\mathbb{R}_{\geq 0}^{|V|}\) we define \(f_{e}:\mathbb{R}^{p(f_{e}^{a})\times|V_{vb}|}\times\mathbb{R}_{\geq 0}^{|V|} \rightarrow\mathbb{R}_{\geq 0}\) by \[f_{e}(\vec{\beta},\vec{x})=\tau_{p(e)}^{\text{\tiny{from}}}(\tau_{s(e)}^{\text {\tiny{as}}}(\vec{x}))\sum_{v^{b}\in V_{vb}}f_{r,p(e)}(\vec{\beta}_{v^{b}}, \tau_{vb}^{\text{\tiny{as}}}(\vec{x})) \tag{14}\] We define \(F^{c}=\bigcup\limits_{e\in E^{c}}f_{e}\). A symmetric definition can be given for \(e\in E^{d}\) yielding \(F^{d}=\bigcup\limits_{e\in E^{d}}f_{e}\). We define \(F=F^{c}\cup F^{d}\) and we say \(D=(V,E,F)\). We define \[D^{a}\boxempty D^{b}=D. \tag{15}\] We call \(\boxempty\)**the generalized product**. ### Example Generalized Products Below we show three different ways an SI model could be stratified with location. Figures 5 and 6 show the naive and modified products respectively. Figure 7 shows one example of a generalized product where interactions can only occur within a single geographic region or between neighboring regions. So for example an infected person in the Toronto region could infect a susceptible person in Toronto or Ottawa but not one in Montreal. However, an infected person in Ottawa could infect a susceptible person anywhere. Figure 5: The naive product of an SI model with location model including Toronto, Ottawa, and Montreal. Notice the force of infection at any given location is influenced is determined by the size of the infectious population at the same location. This approach is useful when movement between locations is being modeled explicitly Figure 6: The modified product of an SI model with location model including Toronto, Ottawa, and Montreal. Notice that the size infectious population at each location influences the force of infection at every location. This approach is useful when movement between location is modeled implicitly, for example through non-zero contact rates between populations at different locations Figure 7: A generalized product of an SI model with location model including Toronto, Ottawa, and Montreal. Notice that the force of infection at a given location depends on the size of the infectious population at the same location _and_ at neighboring locations. This approach has a wide variety of applications, for example when there is significant variance in the distance between different locations ## 5 Uncooperative Examples While the operations defined above allow us to construct a wide range of compartmental models by taking products of simpler factor models, they cannot account for every possible model. In this section we discuss a number of examples where products alone, as they appear in this article, are insufficient. ### Models with alternate functional forms Although we have tried in this article to make the functional forms we use as general as possible there are still certain approaches that do not fit neatly into the formalism we have developed. Fortunately the difficulties that arise from this can typically be overcome by designing a new model product that possesses the desired properties. As we stated in Section 1 in this paper we have followed the approach of Worden and Porco when computing the magnitude of flows in a product model. That is, we individually compute the contribution from every stratum and compute the sum. In an alternate approach, we instead take a weighted average of the compartment populations in each stratum and use this new average as the input to the flow rate function. This approach is particularly useful when incorporating inhibitory influences in a model. For example during an epidemic will be more careful if they know hospitals are at capacity than they would be when there are ample medical resources available. We introduce the following product definition: **Definition 6** (Weighted States Product).: Let \(D^{a}=(V^{a},E^{a},F^{a})\) and \(D^{b}=(V^{b},E^{b},F^{b})\) be two compartmental models and let \(V\), \(E\), \(E^{c}\), and \(E^{d}\) be as in Definition 3. As in Definition 2 we know that every edge \(e\in E\) is associated with an edge in one factor model and a vertex in the other; we use \(p(e)\) to denote the associated factor model edge and \(s(e)\) to denote the associated factor model vertex. For every \(e\in E^{a}\) there are functions \(f_{e}^{a}\in F^{a}\) and \(f_{r,e}\in\mathcal{F}(|V^{a}|)\), as well as a filtering function \(\tau_{e}^{\text{\tiny{from}}}\in\Upsilon(|V^{a}|,1)\) such that \[f_{e}^{a}(\vec{\beta},\vec{x})=\tau_{e}^{\text{\tiny{from}}}(\vec{x})\cdot f_ {r,e}(\vec{\beta},\vec{x}). \tag{16}\] For every \(v^{b}\in V^{b}\) let \(\tau_{v^{b}}^{\text{\tiny{avg}}}\) be the filtering function from \(V\) to \(V^{a}\times v^{b}\). For every \(e\in E^{c}\) and \(\vec{x}\in\mathbb{R}_{\geq 0}^{|V|}\) we define \(f_{e}:\mathbb{R}^{\text{p}(f_{p(e)}^{a})\times|V_{b}|}\times\mathbb{R}_{\geq 0 }^{|V|}\rightarrow\mathbb{R}_{\geq 0}\) by \[f_{e}(\vec{\beta},\vec{x})=\tau_{p(e)}^{\text{\tiny{from}}}(\tau_{s(e)}^{ \text{\tiny{avg}}}(\vec{x}))f_{r,p(e)}\left(\vec{\beta}_{s(e)},\sum_{v^{b}\in V ^{b}}w_{v^{b}}\tau_{v^{b}}^{\text{\tiny{avg}}}(\vec{x})\right) \tag{17}\] where for every \(v^{b}\in V^{b}\), \(w_{v^{b}}\in\mathbb{R}_{\geq 0}\) denotes the weight to be given to the \(v^{b}\) strata in the weighted average. We define \(F^{c}=\bigcup\limits_{e\in E^{c}}f_{e}\). A symmetric definition can be given for \(e\in E^{d}\) yielding \(F^{d}=\bigcup\limits_{e\in E^{d}}f_{e}\). We define \(F=F^{c}\cup F^{d}\) and we say \(D=(V,E,F)\). We define \[D^{a}\triangle D^{b}=D. \tag{18}\] We call \(\triangle\)**the weighted states product**. Alternatively, one might notice that if a flow rate function involves normalization, when computing the contribution to the flow of each stratum individually the normalization is done with respect to the population of the individual strata rather the the total population of the model. One example of this can be seen in Equations 11 and 12. To state this difference more clearly we introduce a new function \(N\) which simply sums the population of every compartment in a state vector. So for \(\vec{x}\in\mathbb{R}_{\geq 0}^{n}\), \[N(\vec{x})=\sum_{i=0}^{n-1}x_{i} \tag{19}\] Recalling Definition 1 many, but not necessarily all, flow functions in a model will have the form \[f(\vec{\beta},\vec{x})=\tau(\vec{x})\cdot f_{r}(\vec{\beta},\vec{x})=\tau( \vec{x})\cdot\frac{g(\vec{\beta},\vec{x})}{N(\vec{x})} \tag{20}\] where \(g\in\mathcal{F}\). If a factor model has flow functions with the above form then, if that model is used in a modified product for example, the form of the related flow function in the product model which is given by Equation 7 will be \[f_{e}(\vec{\beta},\vec{x})=\tau_{p(e)}^{\text{nom}}(\tau_{s(e)}^{\text{arg}}( \vec{x}))\sum_{v^{b}\in V^{b}}\frac{g(\vec{\beta}_{v^{b}},\tau_{v^{b}}^{\text{ arg}}(\vec{x})}{N(\tau_{v^{b}}^{\text{arg}}(\vec{x}))} \tag{21}\] where in fact, depending on the specific context the desired outcome may be \[f_{e}(\vec{\beta},\vec{x})=\frac{\tau_{p(e)}^{\text{nom}}(\tau_{s(e)}^{\text{ arg}}(\vec{x}))}{N(\vec{x})}\sum_{v^{b}\in V^{b}}g(\vec{\beta}_{v^{b}},\tau_{v^{b}} ^{\text{arg}}(\vec{x})) \tag{22}\] This difference suggests yet another type of model product, although a rigorous definition would require altering Definition 1 to explicitly include a denominator in the form of a flow function. Another example where the functional form used in Definition 1 may require amendment is when investigating models with non-linear incidence rates. In such cases flow functions related to the force of infection in a model may have an additional factor associated with them yielding the form: \[f(\vec{\beta},\vec{x})=\tau^{\text{nom}}(\vec{x})\cdot f_{r}(\vec{\beta},\vec {x})=\frac{\tau^{\text{nom}}(\vec{x})^{\zeta}}{N(\vec{x})^{\zeta+1}}\cdot g( \vec{\beta},\vec{x}) \tag{23}\] Defining a space of flow functions which allows some, but not all, of the flows in a model to employ these variations in form is a non-trivial challenge and a significant reason why it is difficult to build a simulation engine for a product model without the supervision of a trained programmer. ### Models with Testing One such example (where model products alone cannot produce the desired result) involves modeling the effects of testing for infection, inspired by the dynamics of testing during the COVID-19 pandemic. One example of a model that includes the effects of testing can be found in Gharouni et al. (2022). Consider the epidemiological model in Figure 8 and the testing process depicted in Figure 9. The modified product of these to models includes a compartment for untested individuals at the hospital. However, this product is not what we want (Figure 10). The key difference is that untested individuals entering the hospital are typically tested (i.e., moved from "untested" to "awaiting results"); our model world assumes that they always are. Therefore, the "untested hospitalized" compartment in product model is empty and should be eliminated; the flow that goes to that compartment should instead be directed to the "hospitalized/awaiting test result" compartment. Constructing the desired model would thus require an extra step to remove the superfluous compartment. Figure 8: A simple epidemiological model that we will expand to include testing. In this model, some exposed individuals will develop asymptomatic or mild illness, in which case they stay in the community during their infectious period (and potentially transmit to others); those who instead develop severe illness will be hospitalized. (This model allows neither for within-hospital transmission nor for disease-induced mortality either inside or outside the hospital.) Figure 9: A simple testing model. Individuals who test negative will, over time, revert back to the “untested” status. This is not the case for those that test ”positive”; at least during the early stages of the COVID-19 pandemic, someone who had tested positive for COVID-19 would assume that they were immune and would not be re-tested even if they developed COVID-like symptoms. Figure 10: The desired result of combining Figure 8 with Figure 9. Note the missing grey “untested” box associated with the hospital location; exposed individuals going into the hospital (enlarged, grey downward arrow starting at \(E\)) flow into the purple “awaiting results” subcompartment. ### Multistrain Models and a Weak Product Many epidemics involve multiple co-circulating strains of the same pathogen (Gog and Grenfell, 2002; Williams et al., 2021). In the case of COVID-19 such variants have significant implications for the efficacy of vaccines(Abu-Raddad et al., 2021; Koyama et al., 2020) and diagnostic tests (Vasireddy et al., 2021). In more complex models, including multiple strains rapidly inflates the size of both the state space and the parameter space (Kryazhimskiy et al., 2007). One way to limit the size of these unwieldy models while continuing to include the effects of multiple strains in our model is to disallow the possibility of _superinfection_ (i.e. an individual being infected with multiple strains at the same time). It would therefore be useful to define a _weak product_ similar to the operations proposed by Worden and Porco (2017) but which excludes all states corresponding to a superinfected status. Below we propose such a product which is satisfactory for producing two-strain models but, as we shall see, fails for more than two strains. **Definition 7** (Weak product).: Suppose \(A\) and \(B\) are two compartmental models; let \(S_{a}\subset V(A)\) and \(S_{b}\subset V(B)\) denote the set of all vertices with either no inflows (i.e. _sources_) or no outflows (i.e. _sinks_) in \(A\) and \(B\) respectively. Let \(I_{a}=V(A)\setminus S_{a}\) and \(I_{b}=V(B)\setminus S_{b}\). We define \[V_{c}=(S_{a}\times S_{b})\cup(S_{a}\times I_{b})\cup(S_{b}\times I_{a}). \tag{24}\] We define \[E_{c}^{a}=\left\{((w,x),(y,x))\in V_{c}\times V_{c}|(w,y)\in E(A)\right\}, \tag{25}\] \[E_{c}^{b}=\left\{((w,x),(w,z))\in V_{c}\times V_{c}|(x,z)\in E(B)\right\}, \tag{26}\] and \[E_{c}=E_{c}^{a}\cup E_{c}^{b}. \tag{27}\] For every \(e\in E_{c}^{a}\), let \(p(e)=(w,y)\in E(A)\). For every \(e\in E(A)\) there are functions \(f_{e}^{a}\in F(A)\) and \(f_{r,e}\in\mathcal{F}(|V(A)|)\), as well as a filtering function \(\tau_{e}^{\text{\tiny{from}}}\in\Upsilon(|V(A)|,1)\) such that \[f_{e}^{a}(\vec{\beta},\vec{x})=\tau_{e}^{\text{\tiny{from}}}(\vec{x})\cdot f_ {r,e}(\vec{\beta},\vec{x}) \tag{28}\] For every, \(v_{b}\in S_{b}\) let \(\tau_{v_{b}}^{\text{\tiny{as}}}\) be the filtering function from \(V\) to \(V(A)\times v_{b}\). For every \(e\in E_{c}^{a}\) we define \[f_{e}(\vec{\beta},\vec{x})=\sum_{v_{b}\in S_{b}}\tau_{p(e)}^{\text{\tiny{from} }}(\tau_{v_{b}}^{\text{\tiny{as}}}(\vec{x}))f_{p(e)}^{a}(\vec{\beta}_{v_{b}}, \tau_{v_{b}}^{\text{\tiny{as}}}(\vec{x})) \tag{29}\] We define \(F_{c}^{a}=\bigcup\limits_{e\in E_{c}^{a}}f_{e}\). A symmetric definition can be given for \(e\in E_{c}^{b}\) yielding \(F_{c}^{b}=\bigcup\limits_{e\in E_{c}^{b}}f_{e}\). We define \(F_{c}=F_{c}^{a}\cup F_{c}^{b}\) and we say \(C=(V_{c},E_{c},F_{c})\). We define \[A\boxplus B=C \tag{30}\] We call \(\boxplus\)**the weak product**. Figure 11 depicts a two-strain SIR model without super-infection; this corresponds to the weak product of two SIR models. Figures 12 and 13 depict two different results for the weak product of three SIR models; note that the difference between them results from changing the order in which products are done. If, for example, we denote the models of the red, yellow, and purple strains by \(A\), \(B\), and \(C\) respectively then Figure 12 depicts \((A\boxplus B)\boxplus C\) and Figure 13 depicts \(A\boxplus(B\boxplus C)\). Figure 14 depicts the desired result for a three-strain SIR model with no super-infection. It is possible to create a version of the weak product defined above that will produce the model shown in Figure 14; however it requires us to distinguish between compartments that are global sources or sinks and compartments that are sources or sinks with respect to one of the three strains specifically. That is to say, while a global sink must have no outflows, a weaker condition says that a compartment is a sink with respect to a specific pathogen if every compartment that can be reached via the outflow has the same infection status with respect to that pathogen as the original compartment. Programmatically we achieve this by introducing a concept of 'labeled partitions' which separates the vertices of the model into disjoint sets corresponding to the vertices' status with respect to a specific pathogen. Each dimension of stratification in the model corresponds to a different labeled partition with each stratum corresponding to a different disjoint set. In this way we can define sources and sinks with respect to a specific set of labels rather than globally. For example, we can say a compartment \(A\) is a sink with respect to a specific labelled partition if every compartment that can be reached after being in \(A\) is in the same set as \(A\). Figure 15 outlines a compartmental model with one source compartment but two sink compartments and Figure 16 shows the weak product of two such models. An unfortunate aspect of this product model is that several of the compartments can only be reached by individuals after they are already dead (!). If there are relatively few such compartments a modeler may choose simply to leave them in the model and treat them all as a single compartment. But if there are many such "zombie compartments", or if computational efficiency is a pressing concern, they could be removed from the model. Figure 11: A two-strain SIR model admitting no superinfection. Red Compartments indicate an infectious population whereas the population in blue compartments are not infectious Figure 12: A model corresponding to the product \((A\boxplus B)\boxplus C\). Notice that the “SRS” and “RSS” compartments are not sinks or sources in \(A\boxplus B\) hence why they have no paths to “SRR” and “RSR” respectively. Figure 13: A model corresponding to the product \(A\boxplus(B\boxplus C)\). Notice that “SRS” and “SSR” compartments are not sources or sinks in \((B\boxplus C)\) hence why they have no paths to “RRS” and “RSR” respectively Figure 14: A three strain SIR model admitting no superinfection. This model cannot be constructed using only the products defined in this article Figure 16: The weak product of two of the single strain models depicted in Figure 15. Notice that the grey compartments are superfluous as they correspond to changes in infection status occurring after death Figure 15: A single strain model with two sinks and one source ## 6 Conclusion Adding new strata to simple epidemiological models is closely related to taking the Cartesian product of digraphs. Modellers who want to combine sets of simple models into a single large stratified model would benefit from a toolkit based on well-defined mathematical operations. This toolkit must contain a variety of operations representing a useful subset of the numerous ways that separate strata in a model can interact. We have developed a mathematical formalism for defining such operations and used it to restate two previously proposed model operations, the naive and modified products, which represent extremes of a spectrum of interactions between strata. The naive product corresponds to the case where different strata never interact, while the modified product corresponds to scenarios where any stratum can interact with any other stratum. We generalize these previously proposed operations to a third operation that allows any level of interaction between model strata, for example to construct geographically stratified models where interactions can occur within a single location and its neighbours but not more distantly. Modelers employ a wide variety of functional forms; it is not always clear how to adjust them to accommodate new levels of stratification. For example, the flow functions in a factor model may involve normalization by the total population size. When generalizing to a product model, modelers need to decide whether the correct denominator for any given term is the total population of the model or the population of an individual stratum (either option might be correct depending on the situation). It seems likely that these issues could be resolved by modifying Definition 1. For example, one could explicitly include numerator and denominator terms, or even an arbitrary number of terms each potentially drawing its arguments from a different subset of the state space. We have refrained from exploring such possibilities in this paper. Several challenges remain for anyone wishing to further develop a model construction toolkit. We have paid little attention to parameter space and the question of how to use knowledge about factor model parameters to draw conclusions about product model parameters. It would be convenient to have a catalog of the most common ways of generalizing factor model parameters to product models so that modelers aren't required to reinvent the procedure every time. Many models (e.g. models with infection status testing) also have asymmetries in their structure that cannot be reproduced with Cartesian-like products. This suggests the need for addition-like operations to supplement the multiplication-like operations discussed in this paper. In fact such operations already exist in the category theoretic approach to model operations, which is one reason why it could be a worthwhile project to unite the category theory and graph theory approaches. In our understanding this would involve finding so-called "type-graphs" that cause the category theoretic operations known as "pull-backs" and "push-outs" to reproduce the results of graph theory operations. We are heavily motivated by the desire to develop software to facilitate model construction. One insight of our investigations is the utility of a system of so-called "labeled partitions", which divide the compartments of a model into mutually exclusive groups. Each group in such a division will contain all compartments that are in the same level of some dimension of stratification and the groups can be labelled accordingly. By applying several such divisions to a model, one for each dimension of stratification, it becomes possible to specify important subsets of the model compartments. Using this system of labels and partitions provides an easy way to address issues like the non-commutativity of the weak product and the presence of "zombie compartments" discussed in Section 5.3. Although theoretical and practical challenges with the application of binary operations on model space remain, our approach forms the basis of a powerful toolkit for the construction of compartmental models. ## 7 Acknowledgements This project was supported by the Canadian Network for Modelling Infectious Diseases (CANMOD), which is funded through the Emerging Infectious Disease Modelling programme of the Natural Sciences and Engineering Research Council of Canada (NSERC). ## 8 Declarations The authors declare that they have no competing interests.
2307.13601
The Importance of Distrust in AI
In recent years the use of Artificial Intelligence (AI) has become increasingly prevalent in a growing number of fields. As AI systems are being adopted in more high-stakes areas such as medicine and finance, ensuring that they are trustworthy is of increasing importance. A concern that is prominently addressed by the development and application of explainability methods, which are purported to increase trust from its users and wider society. While an increase in trust may be desirable, an analysis of literature from different research fields shows that an exclusive focus on increasing trust may not be warranted. Something which is well exemplified by the recent development in AI chatbots, which while highly coherent tend to make up facts. In this contribution, we investigate the concepts of trust, trustworthiness, and user reliance. In order to foster appropriate reliance on AI we need to prevent both disuse of these systems as well as overtrust. From our analysis of research on interpersonal trust, trust in automation, and trust in (X)AI, we identify the potential merit of the distinction between trust and distrust (in AI). We propose that alongside trust a healthy amount of distrust is of additional value for mitigating disuse and overtrust. We argue that by considering and evaluating both trust and distrust, we can ensure that users can rely appropriately on trustworthy AI, which can both be useful as well as fallible.
Tobias M. Peters, Roel W. Visser
2023-07-25T16:06:21Z
http://arxiv.org/abs/2307.13601v2
# The Importance of Distrust in AI + ###### Abstract In recent years the use of Artificial Intelligence (AI) has become increasingly prevalent in a growing number of fields. As AI systems are being adopted in more high-stakes areas such as medicine and finance, ensuring that they are trustworthy is of increasing importance. A concern that is prominently addressed by the development and application of explainability methods, which are purported to increase trust from its users and wider society. While an increase in trust may be desirable, an analysis of literature from different research fields shows that an exclusive focus on increasing trust may not be warranted. Something which is well exemplified by the recent development in AI chatbots, which while highly coherent tend to make up facts. In this contribution, we investigate the concepts of trust, trustworthiness, and user reliance. In order to foster appropriate reliance on AI we need to prevent both disuse of these systems as well as overtrust. From our analysis of research on interpersonal trust, trust in automation, and trust in (X)AI, we identify the potential merit of the distinction between trust and distrust (in AI). We propose that alongside trust a healthy amount of distrust is of additional value for mitigating disuse and overtrust. We argue that by considering and evaluating both trust and distrust, we can ensure that users can rely appropriately on trustworthy AI, which can both be useful as well as fallible. Keywords:XAI Psychology Appropriate Trust Distrust Reliance Trustworthy AI ## 1 Introduction Intelligent systems and decision making supported by Artificial Intelligence (AI) are becoming ever more present and relevant within our everyday lives. Especially their use in high-stakes applications like medical diagnosis, credit scoring, and parole and bail decisions have led to concerns about the AI models [49]. This includes concerns about the AI's transparency, interpretability, and fairness [1, 16, 42]. These objectives are acknowledged and further enforced in legislation by the EU's General Data Protection Regulation (GDPR, Art. 15), in which citizens are granted the right to be provided with meaningful information about the logic involved in automated decision making. While contemporary AI methods are becoming increasingly accurate and sophisticated, they often remain opaque and may, and most likely will, produce errors. For instance, recent development in generative AI chatbots have highlighted that there remains a risk in relying on AI. While the current transformer-based large language models (LLMs) are very good at generating highly convincing and coherent texts, they are known to make up facts and can be inaccurate to the extent of fabricating entire quotes and references [24]. While it may be appropriate to use such models in certain low-stakes applications, their inherent fallibility is more problematic in, say, a medical setting. These developments and different concerns have lead to increased research interest into making AI systems more trustworthy and reliable. One prominent way to address these growing concerns and new objectives is for modern (blackbox) AI methods to be able to explain their outputs [1], leading to a surge in the development of explainable AI (XAI) over the last years for a host of different applications, domains, and data types [1, 16, 50]. Additionally, a number of different guidelines have been set out to ensure the trustworthiness of AI (for an overview see [57]), the main objective being that ensuring the trustworthiness of AI should help increase user trust. Likewise, in the literature explainability is often explicitly considered as a means to increase user trust [28]. In this contribution, we take a closer look at the connection between user trust and trustworthiness and explainability and its limitations with respect to insights from psychology. Currently, such insights are often incorporated only superficially or founded more on common-sense reasoning on trust. As even the best-performing models are prone to errors, we argue not to focus exclusively on increasing trust, but rather on establishing an appropriate level of reliance from the user on the AI with a **healthy amount of critical reflection, or distrust**, along with a **sufficient level of trust**. In the following, we investigate why not only increasing trust but also taking into account the importance of distrust is relevant for appropriate reliance on AI, preventing both the disuse as well as overtrust of such systems. For this, in the next sections we give an overview of literature related to automation, AI, human-computer interaction, and Psychology, in order to establish the relation between and importance of trust, distrust, reliance, and trustworthiness of AI. Some things which are of primary concern in the employment of XAI. ## 2 Trusting an AI? Trustworthy AI is defined according to a number of design objectives that AI should conform to in order to be trustworthy to its users and to wider society [1]. An example of these are those formulated in the Ethics Guidelines for Trustworthy AI [20]. The exact definition of which design objectives should be taken into account and which concerns should be addressed depends, for example, on who the concerning party is (e.g. the European commission in place of wider society) or to whom it is addressed to (e.g. end users vs machine learning engineers) [1, 42]. Of these objectives transparency and interpretability are the most important ones in the context of trust and XAI [1, 16, 42]. ### Explainability and Trust The development of explainable AI methods is currently one of the most prominent ways of working on means for addressing these concerns and fulfilling such regulations [30, 59]. It is focused on developing ways to make AI both (more) interpretable and transparent, thereby ensuring that both its users and wider society can trust that an AI will work in a way that is intended, expected, and desirable [1]. A multitude of XAI studies implicitly or explicitly assume explainability to facilitate trust [12, 28]. In their summary of current XAI studies concerned with user trust, Kastner et al. [28] call this the explainability-trust hypothesis. In contrast, results of empirical investigations into the observed relation between explanations and trust are summarized by Kastner et al. [28] as mixed and inconclusive. These results range from positive relations to no effect up to negative relations, which calls the validity of the explainability-trust hypothesis into question. Similarly, Ferrario and Loi [12] have serious doubts about the usefulness of explainability methods in fostering trust in the case of medical AI applications, and note that the relation between the explainability of AI and trust are far from being clarified. One potential reason, which Kastner et al. [28] also entertain, of why explanations could fail to foster trust is that explanations can actually reveal problems of the system that may have otherwise gone unnoticed, which could lead a user not to trust the AI. To reveal problems of an AI is a function of explanations that is also targeted in the paper on the LIME algorithm, one of the first popular XAI methods which has been used for deep models [48]. According to Ribeiro et al. [48] explanations are not only helpful for deciding when one should trust a prediction, but also beneficial in identifying when not to trust one. Thereby, they differentiate between the explanation's utility for trusting and not trusting, demonstrating the latter in an example where an explanation reveals a wrong causal relation underlying an automated decision. Yet, when generally discussing the benefit of explanations, Ribeiro et al. [48] argue that "[...] explaining predictions is an important aspect in getting humans to trust and use machine learning effectively, if the explanations are faithful and intelligible". With the conditional part of the sentence they acknowledge the possibility of explanations to indicate erroneous predictions, but still mainly focus on convincing a human to trust. This sets the focus on the utility of explanations to identify correct predictions, while the utility of explanation to identify wrong predictions falls short. This pattern can be found across much XAI literature that discusses user trust. When authors speak in broad terms, they connect explanations to the facilitation of trust, which represents the explainability-trust hypothesis as discussed above. Thereby, the explanation's utility to indicate correct predictions is discussed because only in this case a facilitation of trust is desired. However, when authors describe the actual utilities of explanation methods, another utility of explanations is also identified, e.g., not trusting predictions [48], critical reflection [9], or enabling distrust [23]. To clarify the aim of these different utilities, some XAI research employs the terms disuse and overtrust [23; 42]. Before we discuss both utilities of explanations and their connection to disuse and overtrust in more detail in Section 2.3, it is first necessary to disentangle the terms trust and trustworthiness (Section 2.2). This necessity partially stems from the fact that trust is a typical objective of XAI, but often in papers concerning the development of XAI methods no proper definitions of trust are given [2; 17; 48]. E.g., Ribeiro et al. [48] only separate trusting a prediction and trusting a model without explicating what is meant by trusting. Comparably, Ferrario and Loi [12] observed that the dynamics between trust and explainability are far from being clarified, which they primarily attribute to the lack of precise definitions of the two. A summary of important terms in the AI context, which we discuss in the following, is provided in Table 1. Setting trust as a goal of explainability without providing information on what trust encompasses can lead to different problems. Firstly, in empirical investigations, cooperation or confidence might be easily mistaken for trust (see Section 3). Secondly, without drawing from previous work on trust and their definitions, trust in the (X)AI context runs the risk of falling back into the state of a conceptual confusion regarding the meaning of trust [34] that earlier work on trust aimed to overcome. Furthermore, basing trust in (X)AI research on the already established definitions of trust, the desired outcome, i.e. trust, becomes more standardized. This can improve the comparability between different studies, which then can allow researchers to make more general assumption about improvements of AI and their effect on trust. In addition, the process of evaluating potential effects of explainability on trust may profit as well. ### Trustworthiness and Trust When looking at literature related to (X)AI, **it is important to make a clear distinction between trust and trustworthiness**. Trust can be defined as an attitude that a stakeholder has towards a system [28], while trustworthiness is a property of the system that justifies to trust the system [59]. One complication in the current literature, is that these concepts are not always clearly defined. In some cases trust - the attitude of a user - and trustworthiness - the property of the system - are not clearly differentiated, or rather used interchangeably. For example, Barredo Arrieta et al. [3] describe trustworthiness as the confidence that a model will act as intended when facing a given problem, which is a fitting description of trust. The differentiation is critical, because there are further factors apart from the system's trustworthiness that also influence trust [59]. According to research on trust between humans (so-called trustor and trustee), trustworthiness is characterized by the trustee's ability, i.e. competence or expertise in the relevant context, the trustee's benevolence towards the trustor, and the trustee's integrity towards principles that the trustor finds acceptable [37]. Furthermore, a high level of these three factors of trustworthiness does not necessarily lead to trust, and trust can also occur in situations where lesser degrees of trustworthiness are present [37]. For example, a meta-analysis [27] identified the expertise and personality traits of a person interacting with AI as significant predictors for trust in AI. Moreover, cultural differences, such as individualism and power relations within a culture, can influence a person's propensity to trust automation [6, 7]. Other factors influencing trust in automation may be the type of technology (e.g. using a DNN (deep neural network) blackbox model [2] or decision tree [63]), the complexity of the task (e.g. tasks of varying cognitive load [26, 62]), and perceived risk (e.g. using AI in a medical application [5]) [51]. For these reasons, trust is influenced, but not determined by the trustworthiness of the system. Even the most trustworthy model will not be trusted in every case by every person. Vice versa, persons may - and often do - trust an untrustworthy model. \begin{table} \begin{tabular}{|l|l|} \hline Term & Definition \\ \hline Trustworthy AI & Descriptive term for a desired form of AI. \\ Reliance & A human decision or action that considers the decision or recommendation of an AI. \\ Trustworthiness & Property of an AI, which leads an interactor to trust the system. \\ & Property of an AI, which justifies to place trust in it [59]. \\ Trust & The willingness of a person to rely on AI in a situation that involves risk and uncertainty. \\ & “Trust is an attitude a stakeholder holds towards a system.” [28] \\ Appropriate Trust & “User’s ability to know when to, and when not to, trust the system’s recommendations and decisions.” [17] \\ \hline \end{tabular} \end{table} Table 1: Definitions of important terms in the AI context. ### Disuse, overtrust, and reliance in AI In the previous sections, we have discussed the use of trustworthy AI and explainability methods in preventing both disuse of AI systems as well as users overtrusting a useful yet imperfect AI. Additionally, we have looked at the apparent connection between the objective of trust in AI, trustworthy AI, and explainability methods. In order to make a connection between trust and the concerns of disuse and overtrust, in this part we draw a connection between concerns of trust in AI and other (earlier) forms of automation [21, 32]. While some issues and concerns may be specific to the AI context [15], the general concerns surrounding user reliance in technology have been a recurring theme over the last decades [29]. In the following, both work on trust in AI and trust in automation is considered. AI systems can be regarded as a subcategory of automation, and overarchingly we will refer to both as trust in a(n automated) system. In the field of trust in automation the prevention of disuse and overtrust has been targeted by **ensuring appropriate trust** or calibrated trust. McBride and Morgan [39] as well as McGuirl and Sarter [40] define appropriate trust as the alignment between perceived and actual performance of an automated system. This relates to a user's ability to recognize when the system is correct or incorrect and adjust their reliance on it accordingly. Within their model for trust calibration in human-robot teams, de Visser et al. [61] define calibrated trust as given when a team member's perception of trustworthiness of another team member matches the actual trustworthiness of that team member. If this is not given, either 'undertrust', which leads to disuse, or 'overtrust' can occur [45, 61]. The aim of trust calibration by de Visser et al. [61] is to assure a healthy level of trust and to avoid unhealthy trust relationships. Thereto, they establish a process of trust calibration which accompanies collaboration by establishing and continuously re-calibrating trust between the team members. To prevent people from overtrusting, so-called, trust dampening methods are to be applied. According to the authors, these methods are especially worthwhile in interactions with machines and robots, as humans have a tendency to expect too much from automation [61]. The authors recommend to present exemplary failures, performance history, likelihood alarms, or provide information about the system's limitations. Moreover, they make the connection to the expanding field of XAI arguing that explanation activities can help with calibrating trust. **Reliance in the AI context** can be understood as a human decision or action that considers the decision or recommendation of an AI. Trust is an attitude that benefits the decision to rely, as it has a critical role for human reliance on automation [21]. So, beneath the desideratum of increased appropriate trust lies the desideratum of increased appropriate reliance. To rely appropriately, one would consider correct decisions or recommendations of an AI and would disregard false ones. Trust does not lead to this because trust is not only influenced by the correctness, i.e. the performance of an AI. According to Hoff and Bashir [21], the performance of an automated system is similar to the trustee's ability in interpersonal trust, and the process and purpose of an automated system are analogous to benevolence and integrity. On top of that and as mentioned before, trust is not fully determined by trustworthiness. In other words, current improvements in automated systems, like XAI methods, are regarded as beneficial for **appropriate reliance** by preventing disuse and overtrust. Ideally, appropriate reliance should be achieved by fostering appropriate trust. Implied by the appropriateness of trust is that neither blind trust leading to overtrust, nor blind distrust leading to disuse is wanted. Several phrasing of this underlying notion of appropriate trust can be observed across the literature, which often entail trust and terms that can be summarized under distrust (see Fig. 1). Yet, how can such an appropriate trust, its influencing factors, and the relation to appropriate reliance be conceptualized? More recent trust research highlights a distinction that might be of interest here. Several researchers provided evidence that trust and distrust are two related, yet separate dimensions [4, 19, 33, 60]. We assume that this separation of trust and distrust might help solving the conceptual issue and propose that **we need an understanding of appropriate trust and healthy distrust**. The psychological underpinnings of our hypothesis will be detailed in the next sections. Figure 1: Underlying desideratum of appropriate trust in AI and its relation to appropriate reliance. ## 3 What is trust? Early, influential work on trust by sociologist Luhmann [35] defines trust as a mechanism for reducing the complexity of social interactions. Within social interactions, multiple goals and motives can be present, and multiple interpretation are possible with varying truth. According to the author, to decide which interpretation to follow and how to act upon them, this complexity needs to be reduced. By trusting, a person engages in the interaction as if only certain interpretations are possible (e.g., taking things at face value), and thus rendering the interaction less complex [35]. In analogy, trust is also important in human-AI interactions, because of the involved risk caused by the complexity and non-determinism of AI [15]. Similarly, Hoff and Bashir [21] argue that trust is not only important to interpersonal relations but can also be defining for the way people interact with technology. The typical conceptualisation of trust within (X)AI research regards trust as one end of a single dimension with distrust being the opposing end. The Integrative Model of Organizational Trust by Mayer et al. [37], which elaborates this conceptualisation, is a prominent basis for trust in AI and automation [56]. Mayer et al. [37] define trust as "[...] the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" [37]. Based on this definition, they differentiate between factors that contribute to trust, trust itself, the role of risk, and the outcomes of trust. The extent of a person's willingness to trust is influenced by the trustor's propensity to trust and the trustor's perception of the trustworthiness of the trustee. To separate trust from related constructs Mayer et al. [37] highlight the importance of risk and vulnerability for trust. They argue that, if a situation does not involve a form of vulnerability to the trustor, cooperation can occur without trust. Similarly, if a trustor does not recognize and assume any risk, the trustor is in a situation of confidence and not of trust. Thus, trust serves the purpose of reducing complexity in an interaction, and for trust to be present, a form of vulnerability and risk is required. Trust in a system and trust defined by Mayer et al. [37] share that they influence the willingness to rely and the situational requirements of risk and vulnerability for them to be of importance. According to Muir and Moray [43] users used automated systems they trust but not those they do not trust. Lee and Moray [31] state that operators did not use automation systems if their trust in them was less than their own self-confidence. Drawing from Mayer et al. [37], definitions of trust in automation also consider the necessity of uncertainty (i.e., risk) [21, 32] and vulnerability [29, 32]. Trust in automated systems "plays a leading role in determining the willingness of humans to rely on automated systems in situations characterized by uncertainty" [21], and is defined as "[...] the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability" [32]. Returning to the desired appropriate reliance via appropriate trust (Section 2.3) and combining it with the discussed insights on trust in automated systems, which drew from Mayer et al. [37], we see a problem. To reiterate, the desideratum is to prevent either the case in which a person relies on the AI, even though it was wrong (overtrust), or the case, where a person does not rely on the AI, even though it was correct (disuse). Fostering trust, i.e., increasing the willingness to rely, mitigates the problem of disuse. However, for mitigating overtrust, not an absence of the willingness to rely, but the ability to identify reasons not to rely is needed. Trust in Mayer et al.'s model does not entail this, as they define trust "[...] irrespective of the ability to monitor or control that other party". Mayer et al.'s influential work on trust demonstrates the difference between trust and trustworthiness, but for the mitigation of overtrust, their model does not provide a basis to proceed. For our proposition to resolve this, not only trust but also distrust is needed. ## 4 The importance of distrust Distrust is often connotated negatively [33, 60] and sometimes explicitly considered something to be avoided [13, 54], or at least implied to be avoided when focusing the sole strengthening of trust. Yet considering the imperfection of contemporary ML models, distrust towards erroneous predictions and towards explanations that indicate them is not to be avoided, but fostered. Otherwise, a neglect of distrust remains, which is serious because it renders potential positive consequences of distrust invisible. In a study by McKnight et al. [41] the disposition to distrust predicted high-risk perceptions better than the disposition to trust did. For their study context of online expert advice sites they suggest that future research should study dispositional trust and also dispositional distrust. Psychological studies also point to the benefit of considering distrust by identifying positive consequences of distrust. Distrust or suspicion led, for example, to an increase of creativity [36] or a reduction of the correspondence bias [11]. Moreover, a series of studies by Posten and Gino [47] showed an increase of memory performance in their distrust condition as opposed to a trust or control condition. Vaske [60] identified a potential of distrust to improve critical reflection and innovation in the context of working in an organisational setting. Looking at potential underlying mechanisms of distrust, Mayo's [38] review introduces a so-called distrust mindset as an explanation for the positive effects of distrust. The distrust mindset leads to an activation incongruent and alternative associations, which aligns very well with the increase of creativity, reflection, and innovation. According to Posten and Gino [47], trust triggers a perception focus on similarities that makes it harder to remember single entities. Distrust shifts the perception focus towards differences and, therefore, increases memory performance. Interestingly, in one of their studies Posten and Gino [47] observe a higher acceptance of misinformation in a trust condition, underlining the potential problem of the current trust focus in the (X)AI context and the danger of overtrust. A conceptual example of how trust and distrust can be targeted is provided by Hoffman et al. [22] in their work on measuring trust in XAI. They advocate that people experience a mixture of justified and unjustified trust, as well as justified and unjustified mistrust. Ideally, the user would develop the ability to trust the machine in certain task, goals, and problems, and also to appropriately mistrust1 the machine in other tasks, goals, and problems. This ideal scenario requires them to be able to decide when to trust or to correctly distrust, when scepticism is warranted. In sum, although often connotated negatively, distrust also has positive consequences and its own merits separate from those of trust. Footnote 1: Mistrust is used as a synonym for distrust in this paper. ## 5 Distrust as a separate dimension While distrust is often regarded as the opposite of trust, the concept of a one-dimensional view of trust and distrust is being questioned and not widely accepted [18, 33, 52, 53]. In the two-dimensional approach, by definition, low trust is not the same as high distrust, and low distrust is not equal to high trust [33]. This allows the coexistence of trust and distrust. Among others, trust is characterized by hope, faith, or assurance, and distrust by scepticism, fear, suspicion or vigilance [4, 8, 33]. Lewicki et al. [33] exemplify the separation of trust and distrust by contrasting low trust with high distrust. The authors regard expectations of beneficial actions being absent or present as antecedent to trust, and expectations of harmful actions being absent or present as antecedent to distrust. If the former is absent, low trust is expressed by a lack of hope, faith, and confidence. If the later is present, high distrust is expressed by vigilance, skepticism, and wariness. The combination of high trust and high distrust is described by the authors with a relationship in which opportunities are pursued while risks and vulnerabilities are continually monitored. When reviewing research that draws from two-dimensional approaches, concepts and terms like critical trust, trust but verify, and healthy distrust are used [33, 46, 60]. These align well with the problem of mitigating overtrust, yet little consideration of the two-dimensional view on trust and distrust can be found when trust is considered in the technology context. One, at least partial, reason for this is found in the field of organisational psychology. Vaske [60] discusses the trajectory of the conceptual debate about trust and distrust within the organizational context. He describes that most of the earlier work on trust falls into the category of the one-dimensional approaches. From the mid 80s onward, doubt increased towards the one-dimensional approach, which was considered too simplistic [60]. Yet, efforts to resolve this debate and empirically test it remain scarce, while work on the two-dimensional approach mostly reproduces common-sense assumptions instead of providing empirical evidence [60]. Furthermore, Vaske [60] points out that only the concept trust has a good theoretical background and is well researched. Distrust remains in the state of conceptual debate and is given little research attention. As a consequence, even in the field of organizational psychology, in which the conceptual critique towards the one-dimensional approach is the most visible, applied work still relies mostly on the model by Mayer et al. [60]. As highly influential work on trust in automation [21; 32] also draws from Mayer et al. [37]'s model, which then was taken as a starting point in the context of trust in XAI [57], these fields inherited the focus on trust and the neglect of distrust. Regardless of evidence for the two-dimensional conceptualisation, uni-dimensional scales are the common form to evaluate trust in automation [29]. Of those, the Checklist for Trust between People and Automation [25] is the most frequently used one [29]. This checklist measures trust and distrust as polar opposites along a single dimension. Five of the 12 items (statements rated by the user) measure distrust. In practice, these items are often reverse-scored and summed with the trust items to form one trust score, which was also suggested by the original authors of the scale [55]. A critical validation attempt of this scale by Spain et al. [55] compared a one-factor model (indicating the polar opposites along a single dimension) and a two-factor model. This factor analysis provided evidence for the conceptualization of trust and distrust as separate, yet related constructs [55]. Thus, reverse scoring distrust items to then form a sum score with the trust items entails a problematic entanglement of the two factors identified by Spain et al. [55] and disregards the incremental insight by measuring trust and distrust individually. The merit of considering trust and distrust as separate dimension has been identified across different sub-fields of human-technology interaction [4; 10; 19; 29; 41; 44; 58]. A difference between dispositional trust and dispositional distrust was observed in the context of online expert advice [41], and trust and distrust co-existed as distinct construct in the context of online banking [4] and online shopping [44]. A study on website design showed that trust and distrust are affected by different antecedents, and the performance of a trust-aware recommender system was improved by not only predicting trust but also distrust [10]. Thielsch et al. [58] investigated work-related information systems and also identified trust and distrust as related yet separate influences on different outcome variables. Some authors [10; 44; 58] emphasize that they are, to the best of their knowledge, the first to consider not only trust but also distrust in their field. Additionally, with a distance of two decades Harrison McKnight and Chervany [19] and Kohn et al. [29] both argue in favour for considering trust and distrust in the technology. This indicates a lack of generalization on the conceptualisation of trust and distrust in context of technology. The impression arises that within individual sub-fields at different points in time, the potential merit of considering trust and distrust is identified, and only some first steps are taken towards it. Some of the studies that take these first steps still only partially separate trust and distrust, which may hinder generalization. For instance, Benamati et al. [4], Fang et al. [10] make the distinction between trust and distrust only on a superficial level because they relate trust and distrust to the same antecedents [4; 10] and consequences [4]. Therefore, they do not fully acknowledge Lewicki et al.'s proposition [33] of different antecedents and consequences of trust and distrust. In both cases, the authors themselves suggest addressing these limitations in future research. Fang et al. [10] suggest to predict trust and distrust from different antecedents. Benamati et al. [4] entertain the degree of monitoring as plausible outcome for distrust instead of the intention to use, which they had used as an outcome for both trust and distrust. ## 6 Conclusion To summarize, a focus on trust and the neglect of distrust is evident in research about trust in (X)AI, trust in automation, and in trust research in organizational psychology. Some examples of considering both trust and distrust can be identified in different sub-fields about interaction with technology. The underlying idea of these studies, that trust does not suffice, is strengthened by the examples of positive consequences of distrust. The notion of appropriate trust in current (X)AI research also acknowledges that trust alone does not suffice. However, by aiming for appropriate trust, a crucial ambiguity remains, because stating that it is not appropriate to trust allows for two interpretations: either that one does not trust, or that one distrusts. The same distinction has to be made when returning to the problems of disuse and overtrust. By increasing trust, the problem of disuse can be mitigated, as the willingness to rely increases. While it may be true that a lower willingness to rely, a lower trust, would decrease the likelihood of overtrust, there would only be less reliance overall. To this point, to mitigate overtrust, reliance should be prevented if, and only if, it would be wrong to rely. To conceptualize this distinction we regard distrust as relevant. Thus, **we propose the consideration and evaluation of both trust and distrust to achieve appropriate reliance in AI by mitigating both disuse and overtrust**. We consider our proposition to be in line with the underlying motivation of appropriate trust (see Fig. 1). However, the term and work on it is often too keen on trust and disregards the critical review of distrust. The aim of our proposition is to ensure trust plus a healthy amount of distrust. Being inherently imperfect, contemporary AI will benefit from healthy distrust, as this entails a more conscientious usage of AI. With healthy distrust, a user would have a warranted critical stance towards the AI's outputs without being outright distrustful towards using the AI at all. For instance, current AI chatbots generate plausible sounding and highly convincing texts. A situation in which overtrust is arguably a more prominent issue than disuse. Developing or implementing methods to support the identification of possibly wrong outputs would be a sensible approach to mitigate overtrust. Ideally, such methods would foster the user's wariness towards and the user's monitoring of the system, i.e. characteristics of distrust. If these envisioned methods were to only be evaluated by measuring trust, different results would be plausible. On the one hand, a decrease in trust could be observed. On the other hand, observing no change in trust or even an increase in trust, despite errors being identified more easily, would also be plausible. Firstly, the trust measurement may not change because only characteristics of distrust would be affected by the methods sketched above. Secondly, trust may increase if the user considered the output as correct regardless of the supported ability to monitor. This should occur by giving the user the opportunity to distrust. The user would have better ability to examine the output and could verify his trust, resulting in a stronger willingness to rely. Such potential impacts of distrust will not be directly noticed when only measuring trust. In these hypothetical scenarios additionally evaluating distrust would provide more insights into the user's attitude when interacting with AI. Generally, evaluating both trust and distrust could help to clarify the mixed and inconclusive results in empirical research on the explainability-trust hypothesis (Section 2.1). As explanations have the two utilities of identifying both correct as well as wrong outputs, explanations may influence both the user's trust and distrust. To conclude, in the aim of appropriate reliance on trustworthy (X)AI both trust and distrust should be considered. ## 7 Open questions and future work The conceptualisation of distrust in AI and the empirical identification of its antecedents and consequences still needs further empirical research. Aspects that constitute healthy distrust have to be identified as well. To progress this, future work should not only separate trust and distrust on a superficial level, but also investigate whether and which individual antecedents and consequences of trust and distrust are relevant. Studies on trust and distrust in the (X)AI context need to continue to draw from the established work on (dis)trust without intermixing it with common-sense reasoning on trust and distrust. Furthermore, for researchers to be able to appropriately compare what effects different (X)AI methods, design, and systems have on user trust and distrust standardized ways of measuring them as separate dimensions have to be created and validated. With overcoming such conceptual and methodological issues, the two-dimensional concept of trust and distrust can be validated more convincingly. Thereby the lack of generalization can be addressed and an improved starting point for further research on both trust and distrust can be established.
2306.13528
Limitations of Out-of-Distribution Detection in 3D Medical Image Segmentation
Deep Learning models perform unreliably when the data comes from a distribution different from the training one. In critical applications such as medical imaging, out-of-distribution (OOD) detection methods help to identify such data samples, preventing erroneous predictions. In this paper, we further investigate the OOD detection effectiveness when applied to 3D medical image segmentation. We design several OOD challenges representing clinically occurring cases and show that none of these methods achieve acceptable performance. Methods not dedicated to segmentation severely fail to perform in the designed setups; their best mean false positive rate at 95% true positive rate (FPR) is 0.59. Segmentation-dedicated ones still achieve suboptimal performance, with the best mean FPR of 0.31 (lower is better). To indicate this suboptimality, we develop a simple method called Intensity Histogram Features (IHF), which performs comparable or better in the same challenges, with a mean FPR of 0.25. Our findings highlight the limitations of the existing OOD detection methods on 3D medical images and present a promising avenue for improving them. To facilitate research in this area, we release the designed challenges as a publicly available benchmark and formulate practical criteria to test the OOD detection generalization beyond the suggested benchmark. We also propose IHF as a solid baseline to contest the emerging methods.
Anton Vasiliuk, Daria Frolova, Mikhail Belyaev, Boris Shirokikh
2023-06-23T14:49:13Z
http://arxiv.org/abs/2306.13528v1
# Limitations of Out-of-Distribution Detection in 3D Medical Image Segmentation ###### Abstract Deep Learning models perform unreliably when the data comes from a distribution different from the training one. In critical applications such as medical imaging, out-of-distribution (OOD) detection methods help to identify such data samples, preventing erroneous predictions. In this paper, we further investigate the OOD detection effectiveness when applied to 3D medical image segmentation. We design several OOD challenges representing clinically occurring cases and show that none of these methods achieve acceptable performance. Methods not dedicated to segmentation severely fail to perform in the designed setups; their best mean false positive rate at 95% true positive rate (FPR) is 0.59. Segmentation-dedicated ones still achieve suboptimal performance, with the best mean FPR of 0.31 (lower is better). To indicate this suboptimality, we develop a simple method called Intensity Histogram Features (IHF), which performs comparable or better in the same challenges, with a mean FPR of 0.25. Our findings highlight the limitations of the existing OOD detection methods on 3D medical images and present a promising avenue for improving them. To facilitate research in this area, we release the designed challenges as a publicly available benchmark and formulate practical criteria to test the OOD detection generalization beyond the suggested benchmark. We also propose IHF as a solid baseline to contest the emerging methods. Computed Tomography, Magnetic Resonance Imaging, out-of-distribution detection, segmentation. ## I Introduction In recent years, Deep Learning (DL) methods have achieved human-level performance in automated medical image processing. But the development of these methods on a large scale is slowed by several factors. One such factor is the unreliable performance of DL models when the data comes from a distribution different from the training one [1]. These differences are common in medical imaging: population, demographic, acquisition parameter changes, or new imaging modalities. Out-of-distribution (OOD) detection helps to identify the data samples with such differences, hence increasing the reliability and safety of a DL model. For instance, detected cases could be marked as rejected, preserving the model performance, or reported to the experts, preventing the model from failing silently. The ability to report or reject unreliable cases is now considered a necessary capability to enable safe clinical deployment [2]. OOD detection on natural images is a well-researched area [3] where several established benchmarks [4, 5] facilitate its development. Moreover, these methods directly scale on 2D medical images, resulting in multiple algorithms [6, 7, 8], and also a benchmark [9]. At the same time, OOD detection on 3D medical images remains poorly explored, although 3D medical image segmentation is one of the most addressed tasks in medical imaging [10] with outstanding practical usefulness, e.g., quantifying anatomical structures, pathologies, or important biomarkers. The primary cause of this poor exploration is the lack of datasets and benchmarks with a correct problem design. For example, one party uses private data [11], while the other simulates synthetic anomalies that are unlikely to occur in clinical settings [12]. A study can be limited to a single distribution shift, e.g., changes in the scanning location [11], lacking the diversity of setups. Also, a study can be restricted to uncertainty estimation [13] or anomaly detection [12] methods, leaving the full spectrum of approaches uncovered. Such issues limit a fair comparison of the proposed approaches. In this paper, we investigate the effectiveness of OOD detection when applied to 3D medical image segmentation, closing the outlined gaps in prior work. To enable a correct comparison, we thus design a _diverse_ set of challenges using _publicly available data_ with a _downstream segmentation task_ and simulating _clinically occurring anomaly sources_. Besides the problem design, such a study requires appropriately selected state-of-the-art methods. We note that several areas, e.g., anomaly detection and uncertainty estimation, share motivation and methodology with OOD detection. Therefore, we overview all related areas and, contrary to the previous works, present a complete methodological coverage. Extensive evaluation of six selected methods results in our main conclusion: state-of-the-art OOD detection falls short of achieving optimal performance on 3D medical images. We show that the methods not designed for segmentation completely fail in most setups, scoring from \(0.84\) to \(0.59\) False-Positive Rate (FPR) on average, which is not far below \(0.95\) FPR of the random guessing. (Lower FPR is better.) Two methods specifically designed for 3D segmentation achieve \(0.38\) and \(0.31\) mean FPR, further reducing the error about two times. At the same time, we show that these errors can be reduced even further with a simple approach. We show this space for improvement by developing a histogram-based method called Intensity Histogram Features (IHF). IHF achieves comparable and often superior results to its competitors with \(0.25\) mean FPR. It also scores \(0\) FPR in multiple challenges, indicating that the distribution shifts in 3D medical imaging can often be detected using image intensity histograms, while the DL-based methods overlook this domain feature. Therefore, we consider current DL-based OOD detection far from unveiling its full potential and assume it can be further improved. Given IHF's negligible computational costs compared to DL, we suggest it as a baseline to contest the emerging OOD detection methods. Furthermore, we propose using the designed challenges as a benchmark for developing new methods. Correct problem setting, in-depth analysis with simple methods, such as IHF, and ablation studies on synthetic data confirm that our benchmark allows to estimate the quality of solving general OOD detection instead of classifying a priori known anomaly types. Thus, summarizing our contributions, we outline the following: 1. We demonstrate severe limitations of the existing OOD detection methods on 3D medical images. 2. We design and release the corresponding benchmark that can be used as a starting point for the related research. 3. We propose a method, IHF, suggesting it as a solid baseline for OOD detection on 3D medical images. Below, we describe the data used in our study and the problem setup (Sec. II). Then, we review and select state-of-the-art and core methods from the related fields and also detail IHF (Sec. III). Finally, we present the results (Sec. V) and discuss the limitations and implications of our study (Sec. VI). ## II Data Contrary to the fields of 2D natural and medical images, no established OOD detection benchmark with a correct problem setting exists for 3D medical images. For example, [11] used a variety of brain and abdominal CT and MRI datasets but included private ones. The authors also studied only a single distribution shift, changes in the scanning location, which does not allow to estimate the general performance. [12] created an OOD detection challenge, simulating synthetic anomalies in brain MR and abdominal CT images. However, their setup lacks a downstream task (e.g., segmentation), so their study is limited to unsupervised anomaly detection methods. Synthesizing local corruptions, as in [12], can also lead to evaluation biases, which we show with our analysis. On the other hand, [13] included datasets with the segmentation task but limited the considered methods to supervised uncertainty estimation. Given the disagreement of setups, their partial problem coverage, or privacy, we design the OOD detection challenges from scratch following three core principles: * We include two large _publicly available_ CT and MRI in-distribution (ID) datasets to cover the most frequent volumetric modalities. * We ensure both datasets have _a downstream segmentation task_, allowing us to use the full spectrum of methods. * We select _diverse_ OOD datasets that simulate the _clinically occurring sources of anomalies_: changes in acquisition protocol, patient population, or anatomical region. All these datasets are also publicly available. We also introduce several medical imaging artifacts as anomaly sources, synthetically generated as random transformations. Generating synthetic anomalies is a popular approach, applied to 3D images [12, 13] as well as 2D ones [4, 5]; this approach also allows us to conduct controlled ablation studies at different distortion levels. We made the resulting benchmark publicly available1 and we detail all related CT and MRI datasets in Sec. II-A and Sec. II-B. The problem setting is described in Sec. II-C. Footnote 1: [https://github.com/francisco/OOD-benchmark](https://github.com/francisco/OOD-benchmark) ### _3D CT datasets_ We construct a total of 6 challenges on CT data, including two synthetic ones. We give a visual example of data samples in Fig. 1 and detail the ID dataset and every setup below. _ID dataset:_ As an ID CT dataset, we use LIDC-IDRI [14]. It contains \(1018\) chest CT images with the lung nodules segmentation task. We remove cases without nodules since they do not contribute to training a segmentation model. Then, we randomly split the rest \(883\) images \(4:1\) into the train and test, stratified by the number of nodules. #### Ii-A1 OOD source: scanner To simulate a covariate shift, we select Cancer500 [15] that has the same downstream task as the ID dataset but is obtained with different scanners and acquisition protocols. It contains \(979\) chest CT images. We exclude all images with low resolution (less than \(64\) axial slices) and no annotated nodules, resulting in \(841\) images left. #### Ii-A2 OOD source: population To simulate a patient population shift, we use two datasets with the similar semantic content but different downstream task. These datasets are Medseg2 and MIDRC [16], containing 9 and 154 chest CT images, respectively, with COVID-19 cases. Excluding all non-COVID cases, the merged dataset has 120 images. Footnote 2: [https://radioparedia.org/articles/covid-19-3](https://radioparedia.org/articles/covid-19-3) #### Ii-A3 OOD source: location (liver) To simulate a semantic shift, we select a dataset of the same modality that focuses on different body region. Here, we use LiTS [17], a dataset with \(201\) abdominal CT images. #### Ii-A4 OOD source: location (head) Similarly, we include CT-ICH [18], a dataset with \(75\) head CT images. #### Ii-A5 OOD source: synthetic (image noise) We simulate local image corruptions by applying damaging transformations to the testing cases of ID dataset. Applied to a randomly selected image crop, these transformations include blurring, changing contrast, or inserting noise. #### Ii-A6 OOD source: synthetic (elastic) We simulate tissue anomalies by applying elastic transform of random severity. ### _3D MRI datasets_ We construct a total of 7 challenges on MRI data, including four synthetic ones. We give a visual example of data samples in Fig. 2 and detail every setup below. _ID dataset:_ As an ID MRI dataset, we use VS-Seg [19]. It contains \(242\) brain T1c MRIs with the vestibular schwannoma segmentation task. We remove cases with empty target mask and split the rest \(239\) images \(2:1\) into the train and test. #### Ii-B1 OOD source: scanner To simulate a covariate shift, we select the data with the same semantic content and downstream task but is obtained with different scanners and acquisition protocols. Here, we choose CrossMoDA ETZ as a subset of the CrossMoDA 2022 Challenge dataset [20] with \(105\) brain T1c MR images and use it without changes. #### Ii-B2 OOD source: population (glioblastoma) To simulate a patient population shift, we use EGD [21], a dataset with 774 brain MRIs of four modalities (FLAIR, T1, T1c, T2) with a glioma segmentation task. We reduce a possible covariate shift by using only T1c modality from the Siemens Avanto 1.5T scanner, as in VS-Seg, resulting in 262 seleted images. #### Ii-B3 OOD source: population (healthy) Additionally, we simulate a patient population shift with healthy cases instead of the changing pathology. To do so, we use the CC359 [22] dataset with \(359\) brain MR images of T1 modality. We note however that CC359 images differ in vendor, scanning protocol, and do not contain contrast enhancement, so this setup has a secondary OOD source, a covariate shift. #### Ii-B4 OOD source: synthetic (K-space noise) We synthesize MR imaging artifact, known as Herringbone artifact, at different magnitudes. This results in the visible spikes across the whole image due to anomaly points in K-space. #### Ii-B5 OOD source: synthetic (anisotropy) We synthesize wrong resolution by downsampling the image and upsampling it back along one randomly chosen axis. #### Ii-B6 OOD source: synthetic (motion) We synthesize two types of MR imaging artifacts that can happen due to the patient motion. One is ghosting, which appears as shifted copies of the original image. Another exploits _RandomMotion_ simulation from the torchIO library [23]. #### Ii-B7 OOD source: synthetic (image noise) The same pipeline as for CT images; see Sec. II-A5. ### _Problem setting_ We define the OOD detection problem as the classification between samples from a source distribution (ID) and abnormal samples from a novel different distribution (OOD). The core assumption is the abnormal sample distribution is unknown and cannot be computed in advance. Thus, we approximate the anomaly distribution by constructing a diverse set of challenges that represent clinically occurring cases. Consequently, to develop a reliable method in practice, we need not only to attain the desired accuracy on such a set, but also to ensure that the method is capable of generalizing to novel sources of anomalies. By providing a downstream segmentation task, we remove any constraints on the method design. One can use segmentation model features, its uncertainty estimates, or an auxiliary model to detect outliers. Trained algorithms output a single number called _OOD score_ for every testing image; a higher score means a higher outlier likelihood. ## III Methods ### _Methods selection_ Several sub-topics, including anomaly detection (AD), novelty detection, uncertainty estimation (UE), and outlier de Fig. 1: Examples of CT images (representative axial slices) from different simulated OOD sources in our benchmark. Fig. 2: Examples of MRI images (representative axial slices) from different simulated OOD sources in our benchmark. tection, share motivation and methodology with OOD detection. Despite subtle differences between these topics, the approaches are similar, and most of them can be applied to OOD detection with minimal changes, as shown in [24]. So we follow the structure of [24] and select the core methods from OOD detection, UE, and AD. In our selection, we prioritize the methods already implemented for medical imaging, e.g., in [11, 25], and [26]. As a universal baseline, the maximum probability of softmax outputs can be used to detect OOD samples without any model modifications [4]. In practice, however, the entropy of the same softmax outputs (**Entropy**) is used instead [25, 11, 27]. We consider Entropy a starting point for all other approaches and show its performance in our task. The softmax entropy captures the total uncertainty, while the OOD measure corresponds only to the epistemic uncertainty, as explained in [28]. Thereby, one can use epistemic uncertainty estimation to improve over Entropy. Among the others, Deep **Ensemble**[29] is considered the state-of-the-art approach to UE. To use Ensemble, one computes mutual information or variance over several predictions for a single image to obtain epistemic uncertainty map. An alternative way to obtain multiple predictions is Monte-Carlo dropout (**MCD**) [30], which we also include in our comparison. Further, we include the approach of [11], which directly addresses OOD detection on 3D medical images. The authors apply singular value decomposition (**SVD**) to the network features and use singular values as an image embedding. OOD score is calculated as the distance from a sample's embedding to its nearest neighbor from a training set. A better uncertainty estimation can be obtained by modifying the downstream model, although such modifications can harm the model's performance. We include one such popular modification, generalized ODIN (**G-ODIN**) [31], in our study. Finally, OOD scores can be obtained with an auxiliary model, solely dedicated to detecting anomalies in data. Such AD methods were extensively compared in the Medical Out-of-Distribution (MOOD) challenge [12]. We choose the best solution from MOOD 2022, implement, and include it in our experiments under the name **MOOD-1**. Discussing the auxiliary AD models, we intentionally exclude the reconstruction-based methods (e.g., auto-encoders, generative-adversarial nets) from our consideration. Firstly, these methods performed substantially worse in MOOD 2022 than self-supervised learning-based ones (e.g., MOOD-1) [26]. [32] also showed that this type of methods scores far behind self-supervised learning. And [33] highlighted the severe limitations of auto-encoders applied to OOD detection in a similar setup. Given this critique, we do not include the reconstruction-based approaches in our experiments. So, we consider the following methods: Entropy, Ensemble, MCD, SVD, G-ODIN, and MOOD-1. Since some of them are designed for the downstream classification task, we detail their adaptation to segmentation below. ### _Methods implementation_ To preserve a fair comparison, we add only trivial and unavoidable modifications. We also test (in preliminary experiments) any additional component or a critical hyperparameter of every method and select the best performing setting. #### Iii-B1 Entropy Our downstream task is binary segmentation, where the sigmoid function is applied to the network's outputs. We note that two-classes softmax can be derived from the sigmoid. Then, Entropy follows the implementation from [27] and [11], computing the average entropy value over the predicted area (i.e., positive class). We set OOD score to 0 in the case of empty predicted mask. #### Iii-B2 Ensemble We train 5 U-Net models with different initializations and calculate the uncertainty map as the voxel-wise standard deviation of the five corresponding predictions. OOD score is the average of this uncertainty map. #### Iii-B3 MCD We implement MCD by introducing a dropout layer before every down- and up-sampling layer in U-Net. To obtain an uncertainty map, we calculate voxel-wise standard deviations of 5 inference steps with a dropout rate of 0.1. OOD score is the average of this uncertainty map. #### Iii-B4 SVD We follow [11] without any changes. #### Iii-B5 G-ODIN We preserve the original structure of the G-ODIN output layer [31]; the only difference is that we substitute the linear layers with the convolution ones. These convolution layers have kernels of size \(1^{3}\), so the procedure remains equal to classification of every voxel. To obtain the uncertainty map, we use the best reported G-ODIN DeConf-C* variant. In Ensemble, MCD, and G-ODIN, using mutual information or entropy and averaging the uncertainty map over only the predicted area, as in Entropy, harms the performance. Thus, we use simple averaging of uncertainty. #### Iii-B6 Mood-1 The top-performing MOOD solutions generate synthetic anomalies and train a network to segment them [26]. So our MOOD-1 implementation is based on this cut-paste-segment approach, which won MOOD 2021 [34]. We then supplement it with technical improvements from the 2022's best solution, such as one-cycle learning and ensembling over 5 models. The subject-level OOD score is calculated as the mean of the top-100 anomaly probabilities. #### Iii-B7 Volume predictor To demonstrate that some semantic differences might be trivial from the model's perspective but not captured by other methods, we use the total volume of prediction (positive class) as an OOD score. Since a predicted volume can vary in any direction, we consider sample an outlier if the volume is below \(\frac{q}{2}\)-th or above \(100-\frac{q}{2}\)-th percentile of the ID, thus retaining \(100-q\) TPR. ### _Intensity Histogram Features_ To contest the DL algorithms, we propose an unsupervised method based on image intensity histograms as embeddings. Our design is motivated by two other works. Firstly, [11] showed that SVD can efficiently reduce full-image-sized network features. We note a space for improvement in their method - one can optimize the choice of the network's layer to apply SVD. Here, [35] suggested that the earlier network's layers contain the most domain-specific information. Following the latter suggestion, we hypothesize that we can extract enough domain-specific information directly from the image (i.e., the zeroth network's layer). A histogram is a convenient way to do so. We present our method, called Intensity Histogram Features (IHF), schematically in Fig. 3. It consists of three steps: (1) calculating intensity histograms of images and using them as vectors, (2) reducing their dimensionality with PCA, and (3) running the outlier detection algorithms on these vectors. Step 1: preprocessing and histogramsAll images undergo the same preprocessing pipeline to standardize the intensity distribution: 1. We interpolate images to the same spacing. So in all CT and MRI experiments, we use \(1\times 1\times 1.5\) mm. 2. We clip image intensities to \([-1350;300]\) Hounsfield units for CT (a standard lung window) and \([1^{th}\text{percentile};99^{th}\text{percentile}]\) for MRI. 3. We min-max-scale image intensities to the \([0,1]\) range. Given a preprocessed image \(x\), we then compute a probability density function of its intensities in \(m\) bins, a histogram \(e(x)\in\mathbb{R}^{m}\), and further use these vectors \(e(x)\). Step 2: Principal Component Analysis (PCA)As an optional step, we use PCA to reduce the dimensions \(m\). The main reason to use it is that some outlier detection algorithms at _Step 3_ behave unstable in high dimensional spaces. For instance, calculating Mahalanobis distance require reversing the empirical sample covariance matrix, and this matrix is likely to become ill-conditioned or singular with larger \(m\). Therefore, we fit PCA\({}_{v}\) once on the training data \(E_{tr}\) to preserve \(v=99.99\%\) of the explained variance. This mostly eliminates the potential instability while preserving the distribution properties. \(E_{tr}\) consists of row-vectors \(e(x_{tr})\) for all training images \(x_{tr}\in X_{tr}\). Further, we use transformed vectors \(\tilde{e}(x)=\text{PCA}_{v}(e(x))\). Step 3: OOD detection algorithmTo calculate the OOD score for \(x\), we can apply any distance- or density-based outlier detection method. As in [36], we can calculate Mahalanobis distance \(S_{Mah}(x)\): \[S_{Mah}(x)=\sqrt{\left(\tilde{e}(x)-\hat{\mu}\right)^{T}\hat{\Sigma}^{-1} \left(\tilde{e}(x)-\hat{\mu}\right)}, \tag{1}\] where \(\hat{\mu}\) and \(\hat{\Sigma}\) are the estimated mean and covariance matrix on the training set, \(\hat{\mu}=\frac{1}{\left|X_{tr}\right|}\sum_{x_{tr}\in X_{tr}}\tilde{e}\left( x_{tr}\right)\) and \(\hat{\Sigma}=\frac{1}{\left|X_{tr}\right|}\sum_{x_{tr}\in X_{tr}}\left( \tilde{e}(x_{tr})-\hat{\mu}\right)\left(\tilde{e}(x_{tr})-\hat{\mu}\right)^{T}\). Alternatively, one can calculate the distance to the nearest neighbor (min-distance) \(S_{NN}(x)\), as in [11]: \[S_{NN}(x)=\min_{x_{tr}\in X_{tr}}||\tilde{e}(x)-\tilde{e}(x_{tr})||_{2}. \tag{2}\] Using \(S_{Mah}\) (Eq. 1) and \(S_{NN}\) (Eq. 2) corresponds to the methods called IHF-Mah and IHF-NN, respectively. We include them in comparison and ablation study independently. ## IV Experimental setup ### _Downstream task_ We have 3D CT and MRI datasets with a binary segmentation task. So we adhere the standard approaches to train a segmentation model in all methods that require the latter. Data preprocessingWe describe preprocessing in IHF, Step 1 (Sec. III-C); it is the same in all experiments and it is the minimum allowing the correct DL model training. Fig. 3: The proposed OOD detection method, called _Intensity Histogram Features (IHF)_. It consists of three steps: calculating a \(m\)-dimensional vector as a histogram bin values from the preprocessed image (_Step 1_), fitting and applying _PCA_ to the occuring data, and calculating Mahalanobis distance between a test vector and ID samples distribution (_Step 3_). We apply IHF to the 3D images and illustrate the process using 2D axial slices for simplicity. (*PCA is fitted once on all training data.) _Architecture and training:_ In all experiments, we use 3D U-Net [37], a standard architecture for segmentation. We train it on patches of \(64\) axial slices, with a batch size of \(3\), Adam optimizer, and learning rate of \(10^{-4}\) for \(30\) epochs, \(1000\) iterations each. In a batch, patches from different images are padded if necessary. We minimize the sum of Binary Cross-Entropy and Focal Tversky losses [38] to achieve high segmentation sensitivity. _Segmentation evaluation:_ We train all models on the training part of ID datasets. Then, we can evaluate their segmentation quality on the corresponding testing part of the OOD datasets, showing its possible decline. These segmentation results are given in Tab. III for the CT and in Tab. IV for the MRI datasets. ### _OOD detection evaluation_ Given the testing part of the ID dataset, we measure the OOD detection quality against it for all the suggested OOD setups, similarly to the classification task. Outliers occur rarely in practice, so we aim to measure detection quality when most of the ID samples are being preserved w.r.t. relatively rare OOD events. In this case, one of the most convenient classification metrics to use is false positive rate at 95% true positive rate (FPR), so we consider FPR our primary metric Nonetheless, for the consistency with other studies, we report AUROC in the supplementary materials. ## V Results In this section, we report on our experiments and results. We start by benchmarking all considered methods, then present the analysis of the benchmark design and finish with the ablation study of the methods on synthetic data. ### _Benchmarking_ Tab. I presents the primary results of our study. Uncertainty-based methods, the ones that are not designed for segmentation, mostly fail in the suggested challenges. Entropy, Ensemble, MCD, and G-ODIN gives substantially worse (higher) FPR than the other methods, with only G-ODIN slightly surpassing a simple _Volume_ predictor. Methods dedicated to segmentation perform better on average. For instance, our implementation of the best MOOD 2022 solution, MOOD-1, achieves \(0.36\) and \(0.41\) average FPR on CT and MRI data, respectively. SVD improves further; it appears the only reliable studied method, providing \(0.42\) and \(0.21\) mean FPR. Then, SVD performance is contested by the proposed IHF. In a combination with min-distance, IHF-NN provides the best average score across studied challenges: 0.43 and 0.08 FPR, respectively. In a combination with Mahalanobis distance, IHF-Mah provides practically worse results in the CT setups. Although IHF-Mah performs weaker, it was historically the first and we submitted it to MOOD 2022 (\(m=150\), no PCA). We placed second (team AIRI3) with the earliest IHF version, supporting its robustness by the independent evaluation. Footnote 3: [http://medicalood.dkfz.de/web/](http://medicalood.dkfz.de/web/) We also conduct an ablation study to verify IHF robustness. As shown in Fig. 4, we test IHF performance varying its only two parameters, the number of bins (\(m\)) and explained variance ratio (\(v\)). Our findings indicate a consistent behavior and numerical stability regardless of the parameter choice, with a slight trend of improved quality at a larger \(m\). Both IHF variants perform comparable or better on average than SVD and, consequently, the other studied methods. Therefore, we conclude that the histograms of image intensities are descriptive enough to detect most of the suggested OOD cases, while neural networks might omit important domain features that could be used in this problem. We thus hypothesize that neural networks-based OOD detection can be further improved and leave this promising direction for future research. We present the same comparison in terms of AUROC in Tab. V. Although AUROC is not our primary metric, it roughly preserves the same relative ranking of the studied methods, not contradicting to our main message. ### _In-depth benchmark analysis_ Further, we emphasize the significance of constructing a _correct_ benchmark to study the methods. Analysis of our experimental results suggests the following: * The benchmark should contain diverse OOD challenges. * Challenges should represent clinically occurring cases. * Potential biases in the benchmark should be explored using simple methods, such as IHF or Volume predictor. It is often possible to develop a method tailored to specific OOD sources where it thrives but fails in the other setups. For example, G-ODIN demonstrates near-perfect results in the Population and Scanner setups on MRI data but yields the worst scores in the others. In practice however, the precise anomaly source is always unknown, and a general OOD detector, with an acceptable average performance, is needed. The true method effectiveness can be estimated only in the context of diverse challenges. Secondly, OOD sources should accurately represent or simulate the clinically occurring cases. For instance, the Synthetic (Noise) setup, as introduced in [26] and reproduced in our study, is not supported by any medical imaging process. MOOD-1 achieves the highest performance in this setup because its training objective closely aligns with the anomaly synthesis process. However, performing well in this and similar cases is of no clinical value and, consequently, biases the methods' evaluation towards explicitly unrealistic scenarios. Finally, our analysis reveals that OOD challenges might contain implicit but trivial features. If a benchmark focuses solely on any such feature, we can design a method that exploits this feature, leading to deceptive conclusions about the generalized performance. Instead, we suggest using such methods to reveal biased features beforehand. For example, near-perfect IHF results in several setups demonstrate that certain anomalies are actually trivial intensity changes, reinforcing the need of designing diverse benchmarks. To ensure methods' generalization, we calculate the Fechner correlation between their results and results of the _simple methods_. We show that, apart from SVD, the other methods do not exhibit a strong correlation with the Volume or IHF scores (Tab. II). So the examined methods mostly do not rely on trivial features, such as image intensity distribution. However, SVD shows a correlation of 0.54 with the Volume scores, suggesting its hindered generalization on new sources of data with small difference in the predicted area volume. ### _Ablation study on synthetic data_ In Fig. 5, we show the OOD detection results on synthetic data for different levels of distortion. A larger severity level indicates an image corruption with the larger magnitude, where the magnitudes are chosen perceptually from 1 (barely noticeable distortion) to 5 (heavily distorted image). The general trend is that more distorted images are easier to detect. Here, SVD exhibits the steepest average slope and behaves almost linearly with the increasing severity level, suggesting that we have considered challenging but solvable tasks. Different methods exhibit different sensitivity to the level of distortion required for detection. Entropy and the other UE methods begin to operate effectively only at level 3, while IHF can detect anomalies at the minimal level. So we conclude that the methods should be studied across a wide range of the anomaly severity levels. Additionally, we show that MOOD-1 depends more on the OOD source than severity level: it completely fails in Motion and K-space setups, while almost perfectly detects Noise and Elastic deformations independently of the severity level. Moreover, MOOD-1 and IHF behave inversely to each other in Noise and Elastic setups. Such diverse behavior suggests the need to study methods also across a wide range of anomaly types. ## VI Discussion Besides benchmarking the OOD detection methods, our study also suggests practical ideas on building a correct \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & IHF-NN & SVD & IHF-M & MOOD & GODN & Vd & MCD & Ens & Ent \\ \hline HF-NN & 1.00 & 0.38 & **0.85** & 0.23 & -0.08 & 0.23 & 0.08 & -0.08 & -0.23 \\ Volume & 0.23 & **0.54** & 0.38 & 0.38 & 0.38 & 1.00 & 0.23 & 0.08 & 0.23 \\ \hline \hline \end{tabular} \end{table} TABLE II: Fechner correlation coefficients between IHF-NN, Volume and the other studied methods’ performance. Fig. 4: Dependence of IHF on its two hyperparameters: the number of histogram bins (\(m\)) and explained variance in PCA (\(v\)). We give the results for both IHF variants and CT and MRI setups. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline OOD setup & IHF-NN & SVD & IHF-Mah & MOOD-1 & G-ODIN & Volume & MCD & Ensemble & Entropy \\ \hline Location (Head) & **.00** & **.00** & **.00** &.12 &.55 &.53 &.36 &.51 &.56 \\ Location (Liver) &.51 & **.13** &.64 &.56 &.56 &.84 &.89 &.93 &.78 \\ Population (COVID-19) &.54 &.75 &.72 & **.51** &.54 &.82 &.58 &.58 &.87 \\ Scanner &.88 &.89 &.85 & **.73** &.92 &.86 &.89 &.90 &.83 \\ Synthetic (Elastic) & **.15** &.37 &.67 &.16 &.59 &.81 &.42 &.37 &.84 \\ Synthetic (Image noise) &.49 &.37 &.62 & **.11** &.89 &.85 &.87 &.82 &.81 \\ \hline Population (Glioblastoma) & **.00** & **.00** & **.00** &.10 &.21 &.01 &.85 &.81 &.86 \\ Population (Healthy) & **.00** & **.00** & **.00** &.11 & **.00** & **.00** &.88 & 1.0 &.85 \\ Scanner & **.00** & **.00** & **.00** &.15 & **.00** &.74 &.63 &.66 &.89 \\ Synthetic (K-space noise) & **.00** &.36 & **.00** &.88 &.88 &.90 &.82 &.77 &.73 \\ Synthetic (Anisotropy) &.09 &.20 & **.05** &.57 &.88 &.93 &.77 &.77 &.81 \\ Synthetic (Motion) & **.00** &.58 & **.00** &.73 &.93 &.94 &.85 &.88 &.91 \\ Synthetic (Image noise) &.47 &.33 &.47 & **.30** &.56 &.71 &.78 &.75 &.75 \\ \hline CT average &.43 &.42 &.58 & **.36** &.67 &.79 &.67 &.68 &.78 \\ MRI average &.08 &.21 & **.07** &.41 &.50 &.60 &.80 &.81 &.83 \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of the considered OOD detection methods in terms of FPR@TPR95% scores (lower is better). We highlight the best scores in every row in **bold** and ranked the methods by their average performance. The first and second sections correspond to CT and MRI setups, respectively. benchmark. We mainly highlight the data diversity in multiple dimensions and clinical relevance of the setups. However, we leave several critical questions undeveloped, thus opening the corresponding research directions, which we discuss below. #### Vi-B1 Benchmark design We select the datasets that represent the clinically occurring sources of OOD data. And we confirm the importance of the constructed challenges by the degraded downstream performance (Tab. III, Tab. IV). However, we cannot state with certainty that the highlighted sources are the only differences between the distributions. While we name the primary difference in each case (e.g., acquisition parameters, patient populations), the distributions might differ due to several causes and the other causes exist outside of our consideration. So the development of a more refined benchmark with the controlled distinction between the OOD sources would greatly facilitate further research in this area. Furthermore, the question of whether any semantic difference should be considered abnormal requires additional investigation. For instance, Population (Healthy) setup is considered OOD due to this apparent semantic difference - healthy cases instead of pathology ones. Segmentation models often yield correct empty predictions for such images regardless any OOD detection decision. Rejecting a correct prediction in this case should be considered a false negative instead of true positive outcome, lowering the false positive rate. #### Vi-B2 Uncertainty estimation Our study verifies that the epistemic uncertainty is better suited for OOD detection than the total uncertainty, since MCD and Ensemble achieve better results than Entropy. Nevertheless, a question of how to aggregate the uncertainty map into a single score remains open. On the one hand, aggregating uncertainty over the predicted volume offers certain advantages compared to the whole volume, especially when dealing with 3D images, where the area of interest may occupy only a small portion of the entire image. While this aggregation shows improved results for the Entropy method, it cannot rank samples with an empty predicted mask and does not trigger on anomalies outside of the predicted area. Contrary, aggregating uncertainty as a simple average over the whole image provides better results for MCD and Ensemble. Developing a reasonable UE method for 3D images, thereby, is a possible direction for future research. #### Vi-B3 Other IHF applications Additionally, we explored alternative applications of the proposed IHF method. We noted its strong performance in such medical imaging tasks as contrast detection and domain classification. In this paper, however, we do not delve into the potential IHF applications, as they lie beyond the scope of the OOD detection problem. ## VII Conclusion In this paper, we have conducted an extensive investigation of OOD detection on 3D medical images. Our results revealed that the established approaches, including uncertainty estimation and anomaly detection ones, are unable to provide reliable performance. They provide unacceptably high number of false positives (0.31 mean FPR at best) and fail to generalize. They can also be improved. To demonstrate this, we developed a histogram-based method, IHF, that achieves comparable and often superior results to its competitors. Thereby, we indicated that the distribution shifts in 3D medical imaging can often be detected using intensity histograms, while the DL algorithms neglect this domain feature. Although IHF achieves better average results, its performance is surpassed in multiple challenges, emphasizing the need and possibility for developing a robust and general OOD detection method. We constructed and released the corresponding challenges as a benchmark for OOD detection on 3D medical images, proposing IHF as a solid baseline to contest new methods. AcknowledgmentsWe acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. Fig. 5: FPR under synthetically distorted data for every distortion severity level. Blue line indicates method’s average trend across presented challenges with 95% confidence interval. The other UE methods (MCD, Ensemble, and G-ODIN) are excluded since their average trend is similar to Entropy.
2305.12366
A Quantile Shift Approach To Main Effects And Interactions In A 2-By-2 Design
When comparing two independent groups, shift functions are basically techniques that compare multiple quantiles rather than a single measure of location, the goal being to get a more detailed understanding of how the distributions differ. Various versions have been proposed and studied. This paper deals with extensions of these methods to main effects and interactions in a between-by-between, 2-by-2 design. Two approaches are studied, one that compares the deciles of the distributions, and one that has a certain connection to the Wilcoxon-Mann-Whitney method. For both methods, we propose an implementation using the Harrell-Davis quantile estimator, used in conjunction with a percentile bootstrap approach. We report results of simulations of false and true positive rates.
Rand R. Wilcox, Guillaume A. Rousselet
2023-05-21T06:29:31Z
http://arxiv.org/abs/2305.12366v3
###### Abstract ###### Abstract When comparing two independent groups, shift functions are basically techniques that compare multiple quantiles rather than a single measure of location, the goal being to get a more detailed understanding of how the distributions differ. Various versions have been proposed and studied. This paper deals with extensions of these methods to main effects and interactions in a between-by-between, 2-by-2 design. Two approaches are studied, one that compares the deciles of the distributions, and one that has a certain connection to the Wilcoxon-Mann-Whitney method. There are many quantile estimators, but for reasons summarized in the paper, the focus is on using the Harrell-Davis quantile estimator used in conjunction with a percentile bootstrap method. Included are results comparing two methods aimed at controlling the probability of one or more Type I errors. **A QUANTILE SHIFT APPROACH TO MAIN EFFECTS AND INTERACTIONS IN A 2-BY-2 DESIGN** Rand R. Wilcox\({}^{1*}\) & Guillaume A. Rousselet\({}^{2*}\) 1. Dept. of Psychology, University of Southern California, Los Angeles, CA 90089-1061, USA orcid.org/0000-0002-2524-2976 2. School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, G12 8QB, Glasgow, UK orcid.org/0000-0003-0006-8729 \({}^{*}\) Corresponding authors: [email protected], [email protected] Keywords: shift function, deciles, factorial design, interaction, quantile estimation ## 1 Introduction When comparing two distributions, certainly the most common approach is to focus on a single measure of location, typically the mean or median. An alternative approach is to compare multiple quantiles with the goal of getting a more detailed understanding of where groups differ and by how much (Rousselet et al., 2017). Several methods have been derived for dealing with this issue (e.g., Doksum & Sievers, 1976; Goldman & Kaplan, 2018; Lombard, 2005; Wilcox, 1995; Wilcox et al., 2014). Extant results suggest how to generalize these methods to a between-by-between, 2-by-2 design. One goal here is report results on two methods for controlling the family wise error (FWE) rate, meaning the probability of one or more Type I errors. Here, two distinct approaches are considered. The first defines interactions and main effects with means replaced by a collection of quantiles with an emphasis on the deciles. For example, if \(\theta_{jk}\) denotes the.2 quantile corresponding to level \(j\) of the first factor and level \(k\) of second factor, one goal is to test \[H_{0}:\theta_{11}-\theta_{12}=\theta_{21}-\theta_{22}, \tag{1}\] which mimics the usual notion of an interaction in an obvious way. Here, however, the goal is to use multiple quantiles and to assess how well the FWE rate is controlled. Of course, a related issue is computing a reasonably accurate confidence interval for \(\theta_{11}-\theta_{12}-\theta_{21}+\theta_{22}\). As is evident, main effects can be addressed as well. For example, one can test \[H_{0}:\theta_{11}+\theta_{12}=\theta_{21}+\theta_{22}, \tag{2}\] for a collection of quantiles. The second approach, when dealing with an interaction, has a certain connection to a rank-based method proposed by Patel and Hoel (1973) that in turn has a connection with the classic Wilcoxon-Mann-Whitney test. To describe the Patel-Hoel approach, let \(X_{jk}\) denote four independent random variables where \(X_{11}\) and \(X_{12}\) correspond to the first level of the first factor in a 2-by-2 design, while \(X_{21}\) and \(X_{22}\) correspond to the second level of the first factor. Let \(p_{1}=P(X_{11}<X_{12})\) and \(p_{2}=P(X_{21}<X_{22})\). The hypothesis of no interaction is \[H_{0}:p_{1}=p_{2}. \tag{3}\] And there is the issue of computing a \(1-\alpha\) confidence interval for \(p_{1}-p_{2}\). Wilcox (2022, section 7.9.2) describes a method for making inferences about this measure of effect size that performs well in simulations. For some related rank-based methods, see Gao and Alvo (2005), as well as De Neve and Thas (2017). Let \(X_{ijk}\) denote a random sample from the \(j\)th level of the first factor and the \(k\)th level of the second factor (\(i=1,\ldots,n_{jk}\); \(j=1\), \(2\); \(k=1\), \(2\)). For convenience momentarily focus on \(p_{1}\) and let \(D_{ih}=X_{i11}-X_{h12}\) (\(i=1,\ldots,n_{11}\); \(h=1,\ldots,n_{12}\)). An estimate of \(p_{1}\) is simply \[\hat{p}_{1}=\frac{1}{n_{1}n_{2}}\sum\sum I(D_{ih}), \tag{4}\] where the indicator function \(I(D_{ih})=1\) if \(D_{ih}<0\); otherwise \(I(D_{ih})=0\). The estimator \(\hat{p}_{1}\) is the estimator used by the classic Wilcoxon-Mann-Whitney test. Let \(\theta_{1}\) denote the median of \(D\) and notice that \[H_{0}:p_{1}=.5 \tag{5}\] is the same as \[H_{0}:\theta_{1}=0. \tag{6}\] The parameter \(\theta_{1}\) is defined based on level 1 of the first factor. Let \(\theta_{2}\) denote the analog of \(\theta_{1}\) when dealing with level 2 of the first factor. Then an analog of the Patel-Hoel interaction is \(\theta_{1}-\theta_{2}\). Here, however, the goal is to consider the broader issue of comparing the deciles of these two distributions. More formally, let \(q_{1}\) and \(q_{2}\) denote the \(q\)th quantile of the distribution \(D\) for level 1 of the first factor and level 2 of first factor, respectively. The goal is to test \[H_{0}:q_{1}=q_{2} \tag{7}\] and to compute a \(1-\alpha\) confidence interval for \(q_{1}-q_{2}\) for \(q=.1,.2,\ldots,.9\). A second goal is to control the FWE rate in a reasonably accurate manner. Notice that when testing (1), there is no distinction between \(\theta_{11}-\theta_{12}=\theta_{21}-\theta_{22}\) and \(\theta_{11}-\theta_{22}=\theta_{12}-\theta_{22}\). That is, interchanging the rows and columns does not alter the estimated effect size. However, when dealing with (7), interchanging the rows and columns can yield different results. To illustrate this point, consider, for example, a situation where for the first level of the first factor, both levels of the second factor have standard normal normal distributions, while for the second level of the first factor, the two levels of the second factor have lognormal distributions. Data were generated as just described based on sample sizes of 50 and the 50th quantile was estimated. The estimate of \(q_{1}-q_{2}\) was.027. But interchanging the rows and columns, now the estimate was \(-.088\). The code to reproduce this example can be found in the R notebook apd_ex.Rmd, which is available as part of the companion reproducibility package for this article (Wilcox and Rousselet, 2023) and on GitHub (Rousselet, 2023). ## 2 The Proposed Methods First consider testing (1). The first issue is choosing a reasonable quantile estimator from among the many estimators that might be used. The focus here is on the estimator derived by Harrell and Davis (1982) for two fundamental reasons. The first has to do with tied values. When comparing two independent groups using the usual sample median, tied values can be accommodated using a slight generalization of a standard percentile bootstrap method. However, when comparing other quantiles using an estimator based on only one or two order statistics, such as those summarized by Hyndman and Fan (1996), this approach no longer performs in an adequate manner (e.g., Wilcox, 2022). Simulation results reported in section 3 indicate that this remains the case for the situation at hand. In contrast to the estimators considered by Hyndman and Fan (1996), the Harrell-Davis estimator uses a weighted average of all the order statistics. Moreover, when using the Harrell-Davis estimator in conjunction with a percentile bootstrap method, all indications are that this approach does perform well when comparing two independent groups and there are tied values. An issue here is whether this continues to be the case when dealing with a two-way design. A second reason for using the Harrell-Davis estimator is that is has better efficiency under normality compared to using a weighted average of only two order statistics. For completeness, it is noted that comparing quantiles can be accomplished using the quantile regression estimator derived by Koenker and Bassett (1978). However, this is tantamount to using one or two order statistics. When comparing the medians of two independent groups, for example, and the sample sizes are odd, in effect the usual sample median is being used. Evidently, there is no generalization of the Koenker-Bassett method that captures the spirit of the Harrell-Davis estimator. There are several quantile estimators, in addition to the Harrell-Davis estimator, that use all of the order statistics (e.g., Liu et al., 2022; Navruz and Ozdemir, 2020). A possible criticism is that these estimators, including the Harrell-Davis estimator, have a breakdown point of only \(1/n\). That is, the minimum number of order statistics that must be altered to make the estimate arbitrarily large is one. In practice this issue might not be a serious concern because the extreme order statistics get a relatively small weight. In a situation where the breakdown point is an issue, one possibility is to use the trimmed Harrell-Davis estimator derived by Akinshin (2022). Let \(U\) be a random variable having a beta distribution with parameters \(a=(n+1)q\) and \(b=(n+1)(1-q)\). Let \[W_{i}=P\left(\frac{i-1}{n}\leq U\leq\frac{i}{n}\right).\] The Harrell-Davis estimate of the \(q\)th quantile is \[\hat{\theta}_{q}=\sum_{i=1}^{n}W_{i}X_{(i)}, \tag{8}\] where \(X_{(1)}\leq\cdots\leq X_{(n)}\) are the values written in ascending order. The beta weights used to calculate the deciles for a sample size of \(n=50\) are illustrated in Figure 1. As an alternative to the Harrell-Davis estimator, we considered the default quantile estimator in R, called using the command quantile(x, probs=seq(0.1,0.9,0.1), type=7), to compute the deciles. This estimator, in addition to being widely used, relies on two order statistics (Hyndman & Fan, 1996), and could thus be more robust to outliers than the Harrell-Davis estimator in some situations. To test (1), we considered a percentile bootstrap method combined with the Harrell-Davis estimator and the quantile(type=7) estimator. The percentile bootstrap method begins by generating bootstrap samples by sampling with replacement \(n_{jk}\) values from the data associated with the \(j\)th level of the first factor and the \(k\)th level of second factor yielding \(X_{ijk}^{*}\) (\(i=1,\ldots,n_{jk}\)). Based on these bootstrap samples, compute the \(q\)th quantile using the Harrell-Davis estimator, or the quantile(type=7) estimator, yielding \(\theta_{jk}^{*}\) followed by \[\Psi^{*}=\theta_{11}^{*}-\theta_{12}^{*}-\theta_{21}^{*}+\theta_{22}^{*}. \tag{9}\] Repeat this process \(B\) times yielding \(\Psi_{1}^{*},\ldots,\Psi_{B}^{*}\). Let \(A\) denote the number of \(\Psi^{*}\) values that are less than zero and let \(D\) denote the number of \(\Psi^{*}\) values that are equal to zero. Let \[P=\frac{A}{B}+.5\frac{D}{B}. \tag{10}\] A p-value, when testing (1), is \(2\min\{P,1-P\}\). The term \(D/B\) is important when dealing with tied values (e.g., Wilcox, 2022). To compute a \(1-\alpha\) confidence interval, put \(\Psi_{1}^{*},\ldots,\Psi_{B}^{*}\) in ascending order yielding \(\Psi_{(1)}^{*}\leq\cdots\leq\Psi_{(B)}^{*}\). Let \(\ell=\alpha B/2\), rounded to the nearest integer. Let \(u=n-\ell\). Then a \(1-\alpha\) confidence interval for \(\Psi\) is \[(\Psi_{(\ell+1)}^{*},\Psi_{(u)}^{*}). \tag{11}\] Here, \(B=2000\) is used. Note that the same bootstrap samples were used for each of the quantiles being compared. An alternative approach is to use separate bootstrap samples for each test to be performed (Wilcox, 1995). Both approaches were considered and there is no indication that separate bootstrap samples offer a practical advantage in terms of controlling the Type I error probability. Results of the simulations comparing the two bootstrap approaches are available in the R notebooks sim_fp_b1b9.Rmd and sim_fp_apd_b1b9.Rmd (Wilcox & Rousselet, 2023). Using the same bootstrap samples for all of the tests performed considerably reduces execution time, which is why it is assumed henceforth. Figure 1: **Beta weights used to calculate the Harrell-Davis estimates of the deciles with a sample size of 50.** When performing \(C\) tests, there is the issue of controlling the FWE (familywise error) rate, meaning the probability of one or more Type I errors. Two approaches are considered here. The first is Hochberg's (1988) improvement on the Bonferroni method. Let \(p_{1},\ldots,p_{C}\) be the p-values associated with the \(C\) tests. Put these p-values in descending order, and label the results \(p_{[1]}\geq p_{[2]}\geq\cdots\geq p_{[C]}\). Set \(k=0\) and proceed as follows: 1. Increment \(k\) by 1. If \[p_{[k]}\leq\frac{\alpha}{k},\] stop and reject all hypotheses having a p-value less than or equal to \(p_{[k]}\). 2. If \(p_{[k]}>\alpha/k\), repeat step 1. 3. Repeat steps 1 and 2 until a significant result is obtained or all \(C\) hypotheses have been tested. The second method for controlling the FWE rate was Benjamini and Hochberg (1995), which is aimed at controlling the false discovery rate (FDR). That is, the goal is to control the expected proportion of Type I errors among the null hypotheses that are correct. It is known that there are situations where the Benjamini-Hochberg method ensures that the false discovery rate is less than or equal to \(\alpha\), but it does not necessarily control the FWE rate (Hommel, 1988). However, the simulations described in section 3, below, indicated that when using Hochberg's method, the FWE rate is always below the nominal level. Consequently, there was interest in how well the Benjamini-Hochberg method performs. It is readily verified that the Benjamini-Hochberg method always has as much or more power than Hochberg's method. Consequently, provided the Benjamini-Hochberg method controls the FWE rate for the situation at hand, it has a practical advantage over Hochberg's method. The Benjamini-Hochberg method is applied simply by replacing \(p_{[k]}\leq\alpha/k\) in step 1 of Hochberg's method with \[p_{[k]}\leq\frac{(C-k+1)\alpha}{C} \tag{12}\] As for testing (7), again both the Harrell-Davis and the quantile(type=7) estimators were considered, in conjunction with a percentile bootstrap method. Bootstrap samples are generated as before yielding estimates of \(q_{1}\) and \(q_{2}\), which are labelled \(q_{1}^{*}\) and \(q_{2}^{*}\). This process is repeated \(B\) times and the results are used to compute a \(1-\alpha\) confidence interval for \(q_{1}-q_{2}\), as well as a p-value, in essentially the same manner as done when testing (1). ## 3 Simulation Results Simulations were used to check the FWE rate when testing (1) and (7), as well as how the power of these methods compare to the classic ANOVA F test as well as a method for comparing 20% trimmed mean, which is described in Wilcox, 2022, section 7.4.3). Data were generated from four continuous distributions as well as three discrete distributions. The discrete distributions were a Poisson distribution having mean 9, and two beta-binomial distributions, one with parameter \(r=1\), the other with \(r=9\), and the other parameters set to \(s=9\), and 10 bins. The three discrete distributions were included as a partial check on the impact of tied values. The four continuous distributions were standard normal, mixed normal, lognormal and mixed lognormal. The distribution of the mixed normal is \[H(x)=.9\Phi(x)+.1\Phi(x), \tag{13}\] where \(\Phi(x)\) is the standard normal distribution. The mixed normal is a symmetric distribution with heavy tails, roughly meaning that outliers are likely to occur. The mixed lognormal distribution is given by (13), but with \(\Phi\) replaced by the lognormal distribution. Based on over 1,500 estimates of skewness and kurtosis reported in various journal articles, Cain et al. (2017) report that 99% of the estimates were less than the skewness and kurtosis of a lognormal distribution. This suggests that if a method performs reasonably when dealing with a lognormal distribution, it is highly likely to perform reasonably well in practice. However, a possible concern is that when dealing with heavy-tailed distributions, the standard error of the usual kurtosis estimator can be quite high even when the sample size is fairly large. Moreover, the usual estimate of kurtosis can grossly under estimate the true value. Consider, for example, a lognormal distribution, which has kurtosis 113.9. Based on a sample of 100,000, the kurtosis of the lognormal distribution was estimated and this process was repeated 1000 times. It was found that 79% of the estimates were less than the true value. The median estimate was 82. This process was repeated using the contaminated lognormal distribution which is skewed and very heavy-tailed. The kurtosis of this distribution was estimated based on a sample size of one million. This process was repeated 1000 times yielding estimates ranging between 242 and 16400. The median estimate was 429. Consequently, the contaminated lognormal distribution was used here as an additional check on how well the methods perform. The code for these simulations is available in the notebook kurtosis_estimation.Rmd(Wilcox and Rousselet, 2023). Simulations were performed for sample sizes 20, 30, 40,..., 100, using 10,000 iterations. This was done using both types of quantile estimators and for the two main effects and the interaction. A p-value was computed for each of the nine quantiles to be compared. In terms of controlling the FWE rate, both Hochberg's method and the Benjamini-Hochberg method were considered. And as previously indicated, this was done for seven distributions. So in terms of Type I errors there is a total of 3402 results (7 distributions * 2 quantile estimators * 3 contrasts * 9 deciles * 9 sample sizes). Finally, simulations were performed dealing with power. Various situations were considered, including shifting the four continuous distributions used to estimate Type I errors, varying the skewness of _g-and-h_ distributions (Hoaglin, 1985), and the skewness of Poisson and beta-binomial distributions. Complete details, including the code that was used, are reported in files available on figshare (Wilcox and Rousselet, 2023) and on GitHub (Rousselet, 2023). The main R packages used to perform the simulations and illustrate the results were Rcpp(Eddelbuettel and Francois, 2011), ggplot2(Wickham, 2016) and cowplot(Wilke, 2017). First we consider the results for testing (1), then for testing (7). ### Results when testing (1) The results regarding the FWE rate were highly consistent over the conditions considered, making it a simple matter to briefly summarize the relative merits of the methods being considered. All indications are that the actual FWE rate is less than the nominal.05 level using the Hochberg method as well as the Benjamini-Hochberg method. Consequently, the Benjamini-Hochberg method is recommended because it always has as much or more power than Hochberg's method. #### 3.1.1 Type I errors To illustrate some of the type I error simulation results, Figure 2 contains results for the normal and lognormal populations, separately for the interaction and the two main effects. Results with Hochberg's method are omitted but detailed figures are available in the notebook sim_fp.Rmd. The FWER without correction for multiple comparison (grey lines) is lower than expected if the 9 tests were independent, which in theory would be \(1-(1-\alpha)^{9}=0.37\). The results in 2 and 3 are about half of that expected value, supporting the use of the FDR correction over Hochberg's. As a sanity check, results for the two types of ANOVAs are included, confirming false positives very close to the nominal level when sampling from a normal population. Although the percentile bootstrap led to conservative estimates for both quantile estimators, the Harrell-Davis estimator clearly outperforms quantile(type=7). The gap between the two estimators increases when we consider samples from discrete populations, because the performance of quantile(type=7) deteriorates while that of Harrell-Davis remains stable (Figure 3). Bradley (1978) suggested, as a general guide, that when testing at the.05 level, the actual level should be between.025 and.075. This criterion was met in most situations for the Harrell-Davis estimator but not for the quantile(type=7) estimator. The higher performance of the Harrell-Davis estimator compared to the quantile(type=7) estimator can be observed at individual deciles as well (Figure 4). However, for the first and last deciles, the type I error rate is higher than the nominal level when using the Harrell-Davis estimator and n=20, a result that confirms earlier observations (Wilcox et al., 2014). #### 3.1.2 Power Not surprisingly, there are situations where inferences based on means or a 20% trimmed have more power. But there are situations where comparing deciles provides more power: no single method dominates. To illustrate, Figure 5 presents results for normal and lognormal populations. In each case, data were generated for the four groups by sampling from the standard normal and lognormal distributions, before shifting the groups by different amounts. Figure 2: **False positive results for normal and lognormal populations.** FWER (across quantiles) are plotted as a function of sample size for the interaction and the two main effects A and B. ANOVA.M = ANOVA using means. ANOVA.TM = ANOVA using 20% trimmed means. FDR (Benjamini–Hochberg method) and No corr. (no correction) refer to the percentile bootstrap method in conjunction with the Harrell–Davis estimator (HD) and the quantile(type=7) estimator (QT7). Horizontal ribbons: dark grey indicates Bradley’s (1978) satisfactory range [0.025; 0.075]; light grey indicates Bradley’s (1978) ideal range [0.045; 0.055]. When sampling from normal populations, the ANOVA on means dominates other methods. To compare methods, we report familywise power for the decile methods (bootstrap p values with and without FDR correction), the probability of at least one rejection among the 9 tests. The ANOVA on 20% trimmed means is a bit less powerful, followed by the bootstrap method using the Harrell-Davis estimator, and last the bootstrap method combined with the quantile(type=7) estimator. Switching to lognormal populations, the power of both ANOVA tests is dramatically lower than the bootstrap approach. This figure and the next one were generated using the notebook sim_tp.Rmd. Figure 6 presents results from populations in which tied values were common. In both cases, the ANOVA on means dominates other methods. When using the bootstrap approach, the gap between the Harrell-Davis estimator and the quantile(type=7) estimator is larger than what was observed when sampling from continuous populations. Figure 3: **False positive results for Poisson and one of the two beta-binomial populations.** See details in the Figure 2 caption. ### Compare deciles of distributions of all pairwise differences - test (7) To assess the bootstrap method aimed at testing (7), we used the same approach employed in the previous section. Now only the interaction is considered. The simulations and illustrations of the type I error rates can be found in the notebook sim_fp_apd.Rmd. For power, see notebook sim_tp_apd.Rmd. Figure 4: **False positive results at individual deciles for normal and beta-binomial populations.** Type I error rates are plotted at each decile, separately for the different sample sizes (n) and for the Harrell–Davis estimator (HD) and the quantile(type=7) estimator (QT7). Horizontal ribbons: dark grey indicates Bradley’s (1978) satisfactory range [0.025; 0.075]; light grey indicates Bradley’s (1978) ideal range [0.045; 0.055]. #### 3.2.1 Type I errors Again we observed FWE rates much lower than expected if the deciles were independent (Figure 7). For continuous distributions (panels A-D in Figure 7), results were very similar for the Harrell-Davis and quantile(type=7) estimators. Both methods were conservative. When sampling from distributions in which tied values are likely, now the Harrell-Davis estimator outperforms the quantile(type=7) method (panels E and F in Figure 7). For continuous distributions, the similarity in performance between the two quantile methods is evident at the level of individual deciles as well (Figure 8). Under normality, Figure 5: **Power results for normal and lognormal populations.** ANOVA.M = ANOVA using means. ANOVA.TM = ANOVA using 20% trimmed means. FDR (Benjamini–Hochberg method) and No corr. (no correction) refer to the percentile bootstrap method in conjunction with the Harrell–Davis estimator (HD) and the quantile(type=7) estimator (QT7). The black horizontal line marks 0.05. all deciles were associated with type I error rates close to the nominal level, irrespective of the sample size (panel A in Figure 8). In the worst situation tested, when sampling from a contaminated lognormal, results are a bit conservative, especially for the extreme deciles. In both situation, there is very little separating the two quantile methods. When sampling from a Poisson distribution (mean = 9 in all groups), the type I error rates for the Harrell-Davis estimator remain near 0.05, irrespective of sample size (panel A in Figure 9). However, the quantile(type=7) is conservative and the situation deteriorates with increasing sample size. In the most extreme situation considered, when sampling from a beta-binomial distribution with \(r=1,s=9,nbin=10\), the type I error rates were lower for both quantile estimators relative to the Poisson case (panel B in Figure 9), or when sampling from a beta-binomial distribution with \(r=9\) (not illustrated here, but see notebook sim_fp_apd.Rmd). Although the situation got worse with increasing sample size Figure 6: **Power results for Poisson and beta-binomial populations.** See details in Figure 5 caption. Figure 7: **FWER results for the comparison of the deciles of distributions of all pairwise differences.** for both estimators, Harrell-Davis outperformed quantile(type=7) in all situations. #### 3.2.2 Power Under normality, the ANOVA on means performed best, followed by ANOVA on trimmed means and finally the bootstrap method (panel A in Figure 10). When sampling from log-normal distributions, power was low for the ANOVAs relative to the bootstrap method, and much more so when making inferences about means (panel B in Figure 10). For the contaminated normal distributions, again the ANOVA on means performed poorly, but now the ANOVA on trimmed means dominates the bootstrap approach (panel C in Figure 10). Figure 8: **Type I error rates for the comparison of the deciles of distributions of all pairwise differences: continuous distributions.** If sampling from contaminated lognormal distributions, now the bootstrap method is the most powerful (panel D in Figure 10). In all these situations, the Harrell-Davis and quantile(type=7) estimators gave very similar results. Finally, in the presence of tied values, the ANOVA on means dominated the other approaches, and the Harrell-Davis estimator led to higher power than the quantile(type=7) estimator (panels E and F in Figure 10). Figure 10: **Power results for the comparison of the deciles of distributions of all pairwise differences.** See details in Figure 5 caption. An Illustration Both methods are illustrated using data dealing with perceived health (PH) among older adults (Clark, et al., 2011). The first factor consists of two educational groups: those who did not complete high school and those who have some college or technical training. The other two groups are based on a measure of depressive symptoms (CESD). One group corresponds to participants with a CESD score greater than 15, which is often taken to indicate mild depression or worse. The other level consists of participants with CESD scores less than or equal to 15. The four groups are defined like this: * \(A_{1}B_{1}\) = lower education, lower CESD, * \(A_{1}B_{2}\) = lower education, higher CESD, * \(A_{2}B_{1}\) = higher education, lower CESD, * \(A_{2}B_{2}\) = higher education, higher CESD. Perceived health results are illustrated for the four groups in Figure 11A. The figure was generated using the notebook examples.Rmd. A 2x2 ANOVA on means returns these p values: main effect of A (education) = 0.001; main effect of B (depression) \(<\) 0.0001; interaction = 0.09. Should we conclude that we have failed to obtain sufficient evidence about the presence of an interaction? This conclusion would be appropriate if the populations were symmetric and differed only in central tendency. However, the plot of marginals suggests differences in skewness and spread (Figure 11A). In keeping with this observation, considering the deciles reveals a more complex picture, as illustrated in panels B-F of Figure 11, with patterns of non-uniform group differences. Panels B and C illustrate the shift functions comparing B1 and B2 at each level of A: the decile differences between two groups are plotted as a function of the deciles in one group. Panels D and E illustrate the main effects. The values along the x axis (A1 and B1) correspond to the deciles of observations pooled across groups (for instance A1 = (A1B1, A1B2). Computing the average of the deciles leads to very similar graphs. Finally, panel F illustrates the interaction, which is highly non-linear, growing from the first decile to the median, and then decreasing and reversing sign. Figure 11: **Example: comparison of the deciles of perceived health.** (**A**) Perceived health scores in the 4 groups, with superimposed deciles indicated by horizontal lines. Medians appear as thicker lines. (**B-C**) Comparisons between B1 and B2 (lower/higher CESD) at each level of A. (**D**-**E**) Main effects. (**F**) Interaction. Grey disks: decile differences plotted as a function of the deciles in one group. Thick vertical lines mark 50% confidence intervals. Thin vertical lines mark 95% confidence intervals. Vertical dashed lines mark medians. The output of the decinter R function that tests (1) is shown in Table 1. As can be seen, the unadjusted p-values suggest that there is an interaction for the.2-.6 quantiles. Shown are the adjusted p-values based on Hochberg's method (using the R function p.adjust) in order to underscore the practical advantage of the Benjamini-Hochberg method. As indicated, no significant difference is found at the.05 level using Hochberg's method. But using the Benjamini-Hochberg correction when testing (1), the adjusted p-values, when dealing with the.3,.4 and.5 quantiles are all equal to.030. When testing (7), plots of the difference scores help provide perspective. Figure 12A illustrates this point. For each level of education (A1 = lower level; A2 = higher level), every participant with a lower CESD score was compared to every participant with a higher CESD score. As previously noted, when two distributions are identical, the distribution of \(D\) is symmetric about zero. Figure 12A suggests that for both groups the distribution of the difference scores is shifted to the right, but with a stronger shift for group 2 (completed high school - lower panel in Figure 12A). A positive difference indicates higher perceived health in not depressed participants relative to depressed participants. For the second group, testing the hypothesis that the median of the difference scores is zero, the estimate is 10.5, with a [4.2, 14.7] 95% confidence interval, \(p=0\). In the first group the median is 4.21 [0, 8.4], \(p=0.0765\). \begin{table} \begin{tabular}{r r Table 2 summarizes the results when testing (7). Here, the.1,.25,.5,.75 and.9 quantiles are used, which is the default for the R function iband that was used. Now the unadjusted p-values indicate an interaction in the upper tails of the two distributions. For example, the estimates of the 0.75 quantile indicate that for the first level of the first factor (did not complete high school), when comparing the not depressed group to the depressed group, there is 25% chance of getting a difference between perceived health values greater than 14.75, while for the second group there is 25% chance of getting a difference greater than 22. But using the Hochberg adjustment shown here, no significant difference is found at the.05 level and this remains the case when using the Benjamini-Hochberg method. ## 5 Concluding Remarks The two new methods presented in this article help gain a deeper understanding of where and by how much groups differ in a 2x2 factorial design. These methods are implemented in the functions decinter to test (1) and iband to test (7). The R functions decinter and iband can be found in the file Rallfun-v41.txt, downloadable from [https://osf.io/nvd59/quickfiles](https://osf.io/nvd59/quickfiles). Figure 12: **Example: comparison of the deciles of all pairwise differences of perceived health.** (**A**) Distributions of all pairwise differences between participants with lower and higher CESD scores. Participants with lower levels of education are shown at the top, participants with higher levels at the bottom. The vertical lines mark the deciles, with a thicker line for the median. (**B**). Interaction plot; see details in Figure 11 caption. These functions, the R code for the simulation and the figures are available in the reproducibility package for the article (Wilcox & Rousselet, 2023) and on GitHub (Rousselet, 2023). The reproducibility package also contains Rcpp code that is much faster to execute than the base R version (see notebook examples.Rmd). By default, these functions use 2000 bootstrap samples and correct for multiple comparison using the FDR correction from Benjamini & Hochberg (1995), which outperformed the FWER correction of Hochberg (1988) in our simulations. Even with the FDR correction, the two methods presented here remain conservative, so it would be worthwhile to explore other correction strategies. We have already considered several alternative methods, but they do not improve matters (Benjamini & Yekutieli, 2001; Benjamini et al., 2006; Blanchard & Roquain, 2008). Preliminary investigations of yet other approaches to control the FWER, such as using a maximum statistic distribution (Nichols & Holmes, 2002), have not revealed any method that could significantly improve power in all situations, making recommendations difficult. While this issue requires further work, for the moment we recommend to use the FDR correction from Benjamini & Hochberg (1995) by default. More generally, the obvious concern with comparing multiple quantiles is that power might be negatively impacted due to controlling the FWE rate. But power might also be negatively impacted when focusing on a single quantile simply because other quantiles are being ignored. The example in the previous section demonstrated the risk of drawing conclusions from a single measure of central tendency or quantile. It is also worth keeping in mind that there is no free lunch in inference: methods that can reveal more complex patterns in the data, such as those proposed here, necessarily require larger sample sizes to reveal where and by how much distributions differ. For applications to the deciles considered here, \begin{table} \begin{tabular}{r r given the increased uncertainty associated with the estimation of the 1st and 9th deciles, we recommend sample sizes of at least 30, echoing earlier recommendations (Wilcox et al., 2014). Several strategies are worth considering to boost power, starting with testing more specific hypotheses involving only a subset of quantiles. For instance, one could imagine a cross-validation approach in which a large dataset is split between a discovery set and a testing set. Instead of testing each quantile individually, one could also look for specific patterns across quantiles, such as stochastic dominance, which is characterised by all quantile differences having the same sign, or differences in spread, which are characterised by monotonic trends across quantile differences (Rousselet et al. 2017). Other obvious strategies are to consider different quantile estimators and bootstrap methods. Preliminary investigations suggest that using the quantile(type=8) estimator recommended by Hyndman & Fan (1996) improves type I error rates and power relative to the quantile(type=7) estimator for instance, but is still outperformed by the Harrell-Davis estimator in all situations. As for parametric methods, Goldman & Kaplan (2018) have proposed a fast and powerful extension of the Kolmogorov-Smirnov test to compare all the quantiles of two independent groups (Kolmogorov, 1992; Stephens, 1992). However, their approach assumes no tied values, and it is unclear how it could be generalised to deal with interactions. A bootstrap approach provides enough flexibility to deal with a variety of experimental designs. As such, this work could be extended to mixed designs with within- and between-subject factors. A covariate could be handled using different strategies, including non-parametric methods with smoothers (Wilcox, 1997; Wilcox, 2021, chapter 12).
2306.02425
Curing the high-energy perturbative instability of vector-quarkonium-photoproduction cross sections at order $αα_s^3$ with high-energy factorisation
We cure the perturbative instability of the total-inclusive-photoproduction cross sections of vector $S$-wave quarkonia observed at high photon-proton-collision energies ($\sqrt{s_{\gamma p}}$) in Next-to-Leading Order (NLO) Collinear-Factorisation (CF) computations. This is achieved using High-Energy Factorisation (HEF) in the Doubly-Logarithmic Approximation (DLA), which is a subset of the Leading-Logarithmic Approximation (LLA) of HEF which resums higher-order QCD corrections proportional to $\alpha_s^n \ln^{n-1} (\hat{s}/M^2)$ in the Regge limit $\hat{s}\gg M^2$ with $M^2$ being the quarkonium mass and $\hat{s}$ is the squared partonic-center-of-mass energy. Such a DLA is strictly consistent with the NLO and NNLO DGLAP evolutions of the Parton Distribution Functions. By improving the treatment of the large-$\hat{s}$ asymptotics of the CF coefficient function, the resummation cures the unphysical results of the NLO CF calculation. The matching is directly performed in $\hat{s}$ space using the Inverse-Error Weighting matching procedure which avoids any possible double counting. The obtained cross sections are in good agreement with data. In addition, the scale-variation uncertainty of the matched result is significantly reduced compared to the LO results. Our calculations also yield closed-form analytic limits for $\hat{s}\gg M^2$ of the NLO partonic CF and numerical limits for contributions to those at NNLO scaling like $\alpha_s^2 \ln(\hat{s}/M^2)$.
Jean-Philippe Lansberg, Maxim Nefedov, Melih A. Ozcelik
2023-06-04T18:12:24Z
http://arxiv.org/abs/2306.02425v2
Curing the high-energy perturbative instability of vector-quarkonium-photoproduction cross sections at order \(\alpha\alpha_{s}^{3}\) with high-energy factorisation ###### Abstract We cure the perturbative instability of the total-inclusive-photoproduction cross sections of vector \(S\)-wave quarkonia observed at high photon-proton-collision energies (\(\sqrt{s_{\gamma p}}\)) in Next-to-Leading Order (NLO) Collinear-Factorisation (CF) computations. This is achieved using High-Energy Factorisation (HEF) in the Doubly-Logarithmic Approximation (DLA), which is a subset of the Leading-Logarithmic Approximation (LLA) of HEF which resums higher-order QCD corrections proportional to \(\alpha_{s}^{n}\ln^{n-1}(\hat{s}/M^{2})\) in the Regge limit \(\hat{s}\gg M^{2}\) with \(M^{2}\) being the quarkonium mass and \(\hat{s}\) is the squared partonic-center-of-mass energy. Such a DLA is strictly consistent with the NLO and NNLO DGLAP evolutions of the Parton Distribution Functions. By improving the treatment of the large-\(\hat{s}\) asymptotics of the CF coefficient function, the resummation cures the unphysical results of the NLO CF calculation. The matching is directly performed in \(\hat{s}\) space using the Inverse-Error Weighting matching procedure which avoids any possible double counting. The obtained cross sections are in good agreement with data. In addition, the scale-variation uncertainty of the matched result is significantly reduced compared to the LO results. Our calculations also yield closed-form analytic limits for \(\hat{s}\gg M^{2}\) of the NLO partonic CF and numerical limits for contributions to those at NNLO scaling like \(\alpha_{s}^{2}\ln(\hat{s}/M^{2})\). keywords: Heavy quarkonium, photoproduction, NLO perturbative calculations, Regge limit, high-energy factorisation, resummation, matching + Footnote †: journal: arXiv ## 1 Introduction Historically the motivation for the study of the inclusive production of quarkonia in hadron-hadron and lepton-hadron collisions was to gain novel information on the structure of hadrons, see e.g. [1; 2]. Much experimental and theoretical effort has thus been devoted to it. We guide the reader to reviews [3; 4; 5; 6; 7; 8] from which one quickly realises that, unfortunately, the leading mechanisms of inclusive-quarkonium-production reactions generally remain unclear with several models of the non-perturbative hadronisation of quarkonia being used within the community. To achieve a better understanding of the non-perturbative dynamics of quarkonium production -whatever the motivation behind- it is crucial to ensure some reliability of the perturbative part. Whereas one understands now that the quarkonium-transverse-momentum (\(p_{T}\)) distributions receive large radiative corrections which need to be properly dealt with1, it has been rediscovered a few years ago that \(p_{T}\)-integrated cross sections -referred to here as "total" cross sections- are plagued by perturbative instabilities [18; 19; 20].2 These were identified and cured within a strict NLO Collinear-Factorisation (CF) set up by two of us in the case of pseudoscalar quarkonium hadroproduction [20] and (\(S\)-wave) vector quarkonium photoproduction [23], which is the focus of the present paper. The observed negative and unphysical cross sections as well as the associated large observed factorisation-scale, \(\mu_{F}\), dependence for both processes at high energies were attributed to the subtraction of the collinear divergences into the Parton Distribution Functions (PDFs) in the \(\overline{\text{MS}}\) scheme. In the latter scheme, we could identify an over-subtraction of these divergences which then yield negative partonic cross sections in regions where they ought to be positive in NLO computations. To cure this problem, we proposed [20] a new scale prescription, dubbed as \(\hat{\mu}_{F}\), amounting to considering that NLO QCD radiative corrections in the _partonic_ high-energy limit (\(\hat{s}\to\infty\)) should be accounted for by the -positive definite3- PDFs, as we recapitulate in Section 2. In doing so, the resulting hadronic cross sections then became positive and, in the photoproduction case, relatively close to the data. Footnote 3: As we discussed in [20], we believe that NLO gluon PDFs at low scales should be positive for quarkonium phenomenology to be well behaved in line with [24]. However, see [25] for arguments in favour of the possibility for negative PDFs. Having understood the origin of this instability, we wondered whether a theoretical setup going beyond the NLO of CF such as _High-Energy Factorisation (HEF)_[26, 27, 28, 29], resumming the higher-order corrections to the CF coefficient function which are enhanced by \(\ln(\hat{s}/M^{2})\) at \(\hat{s}\gg M^{2}\), could address the problem in a more general manner. We thus performed a first study [30] of the simpler case of pseudoscalar-quarkonium hadroproduction where the quarkonium is produced in a \(2\to 1\) partonic process at LO in \(\alpha_{s}\). HEF allows one to sum the series of higher-order corrections to the CF coefficient function proportional to \(\alpha_{s}^{n}\ln^{n-1}(\hat{s}/M^{2})\) which, in the context of the present study, we refer to as the _Leading Logarithmic Approximation (LLA)_. Since the resummation of high-energy logarithms affects both the DGLAP evolution of the PDFs [31, 32, 33] and the CF coefficient function, we have appropriately truncated the LLA resummation in the CF coefficient function down to the _Doubly-Logarithmic Approximation (DLA)_, as described in Ref. [30]. Doing so, one can use the standard fixed-order PDFs consistently with the resummed CF coefficient functions. On the other hand, the HEF formalism is valid only up to \(M^{2}/\hat{s}\)-power-suppressed corrections. As such, it cannot provide a good approximation to the CF coefficient function where \(M^{2}\) is not negligible with respect to \(\hat{s}\). To avoid this shortcoming, we use a matching procedure to smoothly interpolate between the \(\hat{s}\gg M^{2}\) asymptotics obtained from HEF and the NLO CF coefficient function at \(\hat{s}\gtrsim M^{2}\). In Ref. [30], we have proposed a version of the _Inverse Error Weighting (InEW)_ matching prescription, first introduced in Ref. [34], which uses our perturbative knowledge to bootstrap the weight-determination procedure of InEW, thus effectively eliminating free parameters from the method. As a result, we have obtained perturbatively-stable predictions for the pseudoscalar-quarkonium hadroproduction cross sections with a scale-dependence reduced in comparison to LO CF predictions. While the case of pseudoscalar quarkonium hadroproduction might be considered as academical owing to the obvious challenge to measure such a \(p_{T}\)-integrated cross section, experimental data exist for \(J/\psi\) photoproduction [35, 36, 37] and further studies could be performed. Let us cite data from AMBER [38] at CERN, those at the future EIC, even for \(\Upsilon\)[23], and those at the LHC in ultra-peripheral collisions up to TeV photon-proton collision energies [39, 40]. In principle, inclusive \(J/\psi\) photoproduction is an interesting source of information to constrain gluon PDFs at low scale \(\mu_{F}\) and low \(x\). This is why we study it here using HEF matched to CF to properly account for the entire energy region among possible future measurements. The structure of the manuscript is as follows. In Section 2, we review the computation of the total cross section of an \(S\)-wave vector quarkonium photoproduction in CF. In Section 3, we explain how the HEF formalism is applied to this process and present the cross checks we have performed. In Section 4, the InEW matching procedure between the NLO CF and HEF computations is described. Numerical results, a discussion of theoretical uncertainties and phenomenological predictions for \(J/\psi\) and \(\Upsilon(1S)\) photoproduction are then presented. Section 5 gathers our conclusion and an outlook on other quarkonium-production processes. A presents the technical details of the computation of the \(p_{T}\)-integrated HEF coefficient function used in the present study and B provides the details of the computation and results for the \(\hat{s}\gg M^{2}\) asymptotics of NLO and of \(\alpha_{s}^{2}\ln(\hat{s}/M^{2})\) NNLO terms of the CF coefficient function. ## 2 \(S\)-wave-vector-quarkonium photoproduction: collinear factorisation As announced, our focus will be on the process of inclusive photoproduction of a vector \(S\)-wave quarkonium state, which we will denote \(\mathcal{Q}\): \[\gamma(q)+p(P)\to\mathcal{Q}(p)+X, \tag{1}\] with the quarkonium mass \(p^{2}=M^{2}\) and \(q^{2}=P^{2}=0\). Assuming factorisation of the quarkonium-hadronisation process from the initial-state proton and its remnants, one can write down the CF formula for the total, \(p_{T}\)-integrated, cross section of the process of Eq. (1): \[\sigma(\sqrt{s_{\gamma p}},M,z_{\rm max})=\int\limits_{0}^{\eta_{\rm max}} \frac{d\eta}{\eta}\,\frac{d\sigma}{d\ln\eta},\,\text{with}\,\,\frac{d\sigma}{ d\ln\eta}=\sum_{i\,g,q,\bar{q}}\frac{d\mathcal{L}_{i}(\eta,\,\sqrt{s_{ \gamma p}},M,\mu_{F})}{d\ln\eta}\hat{\sigma}_{\gamma i}(\eta,z_{\rm max},\mu_{ F},\mu_{R}), \tag{2}\] where \(\hat{\sigma}_{\gamma i}\) is the CF partonic coefficient function for the partonic channel \(\gamma(q)+i(p_{1})\to\mathcal{Q}(p)+X\). Note that we have chosen the dimensionless distance from the partonic threshold, \[\eta=\frac{\hat{s}-M^{2}}{M^{2}}, \tag{3}\] as our convolution variable in Eq. (2). As usual, \(\hat{s}=(p_{1}+q)^{2}\) is the squared center-of-mass energy in the partonic subprocess. We also have that \(\eta_{\rm max}=(s_{\gamma p}-M^{2})/M^{2}\) in Eq. (2). The partonic luminosity factor in Eq. (2) is defined as: \[\frac{d\mathcal{L}_{i}(\eta,\,\sqrt{s_{\gamma p}},M,\mu_{F})}{d\ln\eta}=\frac{ M^{2}\eta}{s_{\gamma p}}f_{i}\left(\frac{M^{2}}{s_{\gamma p}}(1+\eta),\mu_{F} \right), \tag{4}\] where \(f_{i}(x,\mu_{F})\) is the (number density) CF PDF for a parton of flavour \(i=g,q,\bar{q}\) in the proton, whose factorisation-scale, \(\mu_{F}\), dependence is governed by the DGLAP evolution equations. Typically, in experimental measurements of \(J/\psi\) photoproduction, one places a cut on the _elasticity_ kinematical variable: \[z=\frac{P\cdot p}{P\cdot q}, \tag{5}\] which represents the fraction of the large light-cone component of the photon momentum carried away by the vector meson. In the proton rest frame, it equivalently corresponds to the vector-meson energy divided by the photon energy. Indeed, one usually wishes to exclude from the inclusive data the large-\(z\) region, where exclusive production takes place and one imposes \(z<z_{\rm max}<1\). The presence of this cut is indicated as dependence on \(z_{\rm max}\) in Eq. (2) and below. In the present paper, we will use the Colour-Singlet (CS)-dominance approximation4 of the Non-Relativistic QCD (NRQCD) factorisation hypothesis [41], where the CF coefficient function for the quarkonium production is given by the product of the CF coefficient function for the production of a heavy quark-antiquark pair \(Q\bar{Q}\) in the CS state with a total spin \(S=1\) and a relative orbital momentum \(L=0\), and the Long-Distance Matrix Element (LDME) describing the hadronisation of the \(Q\bar{Q}\) pair to the observable quarkonium state. At LO in \(\alpha_{s}\), only one partonic subprocess contributes to the coefficient function: \[\gamma(q)+g(p_{1})\to Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right](p)+g(k), \tag{6}\] while, at NLO in \(\alpha_{s}\), besides the virtual contributions via the interference of the one-loop and Born amplitudes of the subprocess Eq. (6), the following real-emission subprocesses also contribute: \[\gamma(q)+g(p_{1})\to Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right](p)+g(k_{1})+g(k_{ 2}), \tag{7}\] \[\gamma(q)+g(p_{1})\to Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right](p)+q(k_{ 1})+\bar{q}(k_{2}),\] (8) \[\gamma(q)+q(p_{1})\to Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right](p)+q(k_{ 1})+g(k_{2}). \tag{9}\] These NLO contributions had been computed for the first time by Kramer in 1995 [42] and we have successfully reproduced [43] these results using the FDC code [44; 12] based on the phase-space slicing method [45] as well as by our in-house implementation of the NLO calculation, based on the dipole-subtraction method [46]. The partonic coefficient function, which includes the process of Eq. (6) at LO and the processes of Eqs. (7-9) as well as the one-loop correction at NLO, can be conveniently expressed as follows: \[\hat{\sigma}^{(\text{CF})}_{\gamma g}(\eta,\mu_{F},\mu_{R},z_{ \text{max}}) = F_{\text{LO}}\left\{c_{0}(\eta,z_{\text{max}})+\frac{\alpha_{s}( \mu_{R})}{2\pi}\left[\beta_{0}(n_{l})c_{0}(\eta,z_{\text{max}})\ln\frac{\mu_{ R}^{2}}{\mu_{F}^{2}}\right.\right. \tag{10}\] \[+ \left.\left.c_{1}^{(\gamma g)}(\eta,z_{\text{max}},n_{l})+\bar{c} _{1}^{(\gamma g)}(\eta,z_{\text{max}})\ln\frac{M^{2}}{\mu_{F}^{2}}\right] \right\},\] \[\hat{\sigma}^{(\text{CF})}_{\gamma q}(\eta,\mu_{F},\mu_{R},z_{ \text{max}}) = F_{\text{LO}}\frac{\alpha_{s}(\mu_{R})}{2\pi}\left[c_{1}^{(\gamma q )}(\eta,z_{\text{max}})+\bar{c}_{1}^{(\gamma q)}(\eta,z_{\text{max}})\ln\frac {M^{2}}{\mu_{F}^{2}}\right], \tag{11}\] where \[F_{\text{LO}}=\frac{16\alpha\alpha_{s}^{2}(\mu_{R})e_{Q}^{2}}{9M^{2}}\frac{ \langle\mathcal{O}\left[{}^{3}S_{1}^{[1]}\right]\rangle}{M^{3}},\] with \(\langle\mathcal{O}\left[{}^{3}S_{1}^{[1]}\right]\rangle\) being NRQCD LDME [41], describing the transition of the CS \(Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right]\) state to the observable quarkonium state \(\mathcal{Q}\) and \(e_{Q}\) is the electric charge of the heavy quark in units of positron charge. At LO in \(v^{2}\), the CS LDME is just proportional to the squared potential-model radial wave function, \(|R(0)|^{2}\), evaluated at the origin in the position space: \(\langle\mathcal{O}\left[{}^{3}S_{1}^{[1]}\right]\rangle=2C_{A}(2J+1)|R(0)|^{2 }/(4\pi)\) with \(J=1\) being the total spin of the produced hadron and \(n_{l}\) is the number of quark flavours \(q\) lighter than the considered heavy-quark \(Q\). The dimensionless scaling functions (SFs) \(c_{0}(\eta,z_{\text{max}})\), \(c_{1}(\eta,z_{\text{max}},n_{l})\) and \(\bar{c}_{1}(\eta,z_{\text{max}})\) in Eq. (10) and Eq. (11) only depend on the partonic energy variable \(\eta\) defined in Eq. (3) and the kinematical cut on the variable \(z\) (Eq. (5)). In addition, the SF \(c_{1}^{(\gamma g)}\) depends on the number of light flavours \(n_{l}\) due to the \(q\bar{q}\) splitting in Eq. (8). Our definition of these scaling functions differs from the original definition of Kramer [42] as well as one of Ref. [43] by the usage of a more traditional expansion parameter, \(\alpha_{s}/(2\pi)\), rather than \(g_{s}^{2}\) as well as by the choice of the \(\mu_{F}\)-dependent logarithm \(\ln(M^{2}/\mu_{F}^{2})\) and the sign convention for \(\bar{c}_{1}\). We stress that when summed together they yield the very same \(\hat{\sigma}^{(\text{CF})}_{\gamma i}\). The partonic-energy dependence of the SFs defined above is illustrated in Fig. 1. The SF \(c_{0}\) is decreasing as \(1/\eta^{2}\), while all the NLO SFs tend to constant values in the high-energy limit. The asymptotic value of the SF \(c_{1}^{(\gamma g)}(\eta,z_{\text{max}}=0.9)\) turns out to be approximately \(-6.978\) and that of \(\bar{c}_{1}^{(\gamma g)}(\eta,z_{\text{max}}=0.9)\) is \(24.76\). In addition, the asymptotic values of the SFs of the \(\gamma q\) channel are related to those in the \(\gamma g\) channel via Casimir scaling, as shown in Fig. 1. With increasing collision energies, \(\sqrt{s_{\gamma p}}\), the partonic luminosity factor, Eq. (4), evaluated at \(\mu_{F}\sim M\sim 3\) GeV no longer suppresses contributions of large values of \(\eta\) to the integral Eq. (2) (see the right panel of Fig. 1). Given the large-\(\eta\) constant behaviour of the NLO corrections to \(\hat{\sigma}_{i}\), the region \(\eta\gg 1\) increasingly contributes to the cross section at high \(\sqrt{s_{yp}}\gg M\). This leads [43] to large negative NLO corrections to the inclusive photoproduction cross section of \(J/\psi\) since the asymptotic value of \(c_{1}^{(yp)}\) is negative and to a catastrophically strong \(\mu_{F}\) dependence at \(\sqrt{s_{yp}}\gg M\). This signals the instability of the perturbative expansion of this observable due to missing large higher-order corrections at \(\eta\gg 1\). All these features are clearly visible in the plot in the left panel of Fig. 2. Large negative NLO corrections are present even in the case of bottomonium production (right panel of Fig. 2). The scale-variation band in Fig. 2 and all other total cross section plots below is obtained through the 5-point scale-variation procedure, i.e. as an envelope of the cross section curves with \(\mu_{F}=\mu_{0}\cdot 2^{\zeta_{1}}\) and \(\mu_{R}=\mu_{0}\cdot 2^{\zeta_{2}}\) taking \((\zeta_{1},\zeta_{2})\in\{(0,0),(0,\pm 1),(\pm 1,0)\}\), where \(\mu_{0}\) is the central-scale choice, e.g. \(\mu_{0}=M\) in Fig. 2. To obtain the numerical value of the \(\langle O\left[3^{5}S_{1}^{[1]}\right]\rangle\) LDME we use the same values of \(|R(0)|^{2}\) as used in the study [23]: 1.25 GeV\({}^{3}\) for \(J/\psi\) and 7.5 GeV\({}^{3}\) for \({}^{\Upsilon}(1S)\). The estimates of feed down production from excited states are also included in our plots using the same method as in Ref. [23]. As a matter of fact, similar perturbative instabilities of \(p_{T}\)-integrated cross sections at high collision energies have been observed in the 1990's in heavy-quarkonium physics as soon as first NLO computations appeared [21; 22] and then were essentially forgotten. Their existence was restated in 2010 [18] and discussed in some details in full NRQCD in 2015 for \(\eta_{c}\) and \(J/\psi\)[48]. Only in 2020, were a first convincing diagnosis and a first solution with CF proposed for \(\eta_{c}\) hadroproduction [20]. The case of \(J/\psi\) photoproduction was then discussed in 2021 [43]. The solution proposed in Refs. [20; 43] to cure these instabilities is based on the fact that the partonic high-energy limits of the scaling functions for all partonic channels are related by Casimir scaling (see the left panel of Fig. 1). As such, a unique factorisation-scale choice for all channels can make all scaling functions tend to zero at large \(\eta\) and thence remove the large NLO corrections coming from the \(\eta\gg 1\) region. It acts as if they were effectively absorbed into the PDF evolution. Considering Eq. (10) and Eq. (11) at \(\eta\to\infty\), one finds such an optimal-scale value for the photoproduction case: \[\hat{\mu}_{F}=M\exp\left[\frac{c_{1}^{(\gamma q/\delta)}(\eta\to\infty,z_{ \rm max})}{2c_{1}^{(\gamma q/\delta)}(\eta\to\infty,z_{\rm max})}\right], \tag{12}\] which evaluates to \(\hat{\mu}_{F}\simeq 0.869M\) for \(z_{\rm max}=0.9\). Predictions corresponding to this scale choice are plotted in Fig. 2 as well as in Fig. 5 with dashed lines. Figure 1: Left panel: Plots of the LO (\(c_{0}\), thick solid blue line, multiplied by 30 for visibility) and NLO (\(c_{1}^{(yp)}\), solid red line; \(c_{1}^{(yp)}\), solid magenta line; \(\tilde{c}_{1}^{(yp)}\), dash-dotted red line; \(\tilde{c}_{1}^{(yp)}\), dash-dotted magenta line. Note that the quark SFs are multiplied by \(C_{A}/C_{F}\)) scaling functions entering Eq. (10) and Eq. (11) as functions of \(\eta\). For the plot, we have set \(n_{l}=3\) for the coefficient function \(c_{1}^{(yp)}\). Right panel: plots of the gluon luminosity factor Eq. (4) for \(M=\mu_{F}=3\) GeV as a function of \(\eta\) for several values of \(\sqrt{s_{yp}}\) using the MSTW 2008 NLO PDF central set [47] for illustration. The \(\hat{\mu}_{F}\) prescription of Eq. (12) legitimately exploits the factorisation-scale ambiguity of the fixed-order CF calculation. One could therefore explore the question of whether the DGLAP evolution of the PDFs _alone_ could correctly capture the high-energy structure of higher-order QCD corrections to the partonic cross section. In other words, if one reexpresses the NLO calculation, done with PDFs at the scale \(\mu_{F}=\hat{\mu}_{F}\), in terms of PDFs at a different scale \(\mu_{F}=M\), will the higher-order corrections (NNLO and beyond) arising from the perturbative expansion of the DGLAP evolution of PDFs between the scales \(\hat{\mu}_{F}\) and \(M\) reproduce the higher-order corrections to \(\hat{\sigma}_{i}(\eta\gg 1,\mu_{F}=M,\mu_{R},z_{\rm max})\) to be obtained in an actual N\({}^{\rm k\text{-}1}\)LO computation of this object? Unfortunately, the answer to this question is negative even in the LLA, i.e. when only considering in \(\hat{\sigma}_{i}\) terms which are proportional to \(\alpha_{s}^{n}\ln^{n-1}(1+\eta)\) at \(\eta\gg 1\). Indeed, the coefficients in front of these terms can not be correctly reproduced by the DGLAP evolution _alone_. Instead, a more complicated formalism like HEF is required to calculate and resum those terms. Figure 3: Typical Feynman diagram and Multi-Regge kinematics of emissions in the CF coefficient function of the process Eq. (13) corresponding to the partonic high-energy (\(\eta\gg 1\)) limit in the LLA. The dashed line in the \(t\)-channel represents a _Reggeised gluon_ and the solid circles represent Lipatov’s vertices, while the open circle corresponds to the “off-shell” subprocess of Eq. (14). Figure 2: Total inclusive photoproduction cross sections of \(J/\psi\) (left panel) and \(\Upsilon(1S)\) (right panel) in CF at LO and NLO in \(\alpha_{s}\) and in NRQCD at LO in \(v^{2}\) (CSM). The solid curves correspond to the default scale-choice, \(\mu_{F}=\mu_{R}=M\), while the shaded bands correspond to the 5-point \(\mu_{F}\) and \(\mu_{R}\) variation prescription described in the text. The dashed curve corresponds to the NLO computation with \(\mu_{F}=\hat{\mu}_{F}\) and \(\mu_{R}=M\). The experimental data in the left plot are taken from the H1 [35], FTPS [36] and NA14 [37] collaborations. ## 3 High-energy factorisation for \(S\)-wave-vector-quarkonium photoproduction ### Basic factorisation formula and coefficient functions of HEF In the resummation part of our calculation, we are going to consider the general N\({}^{k\geq 1}\)LO partonic subprocess: \[\gamma(q)+g/q(p_{1})\to Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right](p)+X, \tag{13}\] with \(q^{2}=0\) and \(|\mathbf{q}_{T}|=0\) yielding to the following Sudakov components5 of the photon momentum moving in the negative z direction, \(q_{+}=0\) and \(q_{-}>0\). Moreover, the momentum of the incoming _collinear parton_\(p_{1}^{\mu}=x_{1}P^{\mu}\) is such that \(|\mathbf{p}_{1T}|=p_{1}^{-}=0\) and \(p_{1}^{+}=x_{1}P^{+}\). Footnote 5: We define our Sudakov decomposition in terms of the dimensionless vectors \(n_{-}^{\mu}=P^{\mu}/P_{+}\) and \(n_{+}^{\mu}\) such that \(n_{\pm}^{2}=0\), \(n_{+}n_{-}=2\). For any four-momentum \(k^{\mu}=(k^{+}n_{-}^{\mu}+k^{-}n_{+}^{\mu})/2+k_{T}^{\mu}\) with \(n_{\pm}k_{T}=0\), \(k^{\pm}=n_{\pm}\cdot k\) and \(k^{2}=k_{+}k_{-}-\mathbf{k}_{T}^{2}\). The upper/lower position of indices \(\pm\) does not have any meaning. In the LLA at \(\eta\gg 1\), the process Eq. (13) is factorised into two stages, as depicted diagrammatically in Fig. 3. First, the \(k^{+}\)-ordered cascade of real emissions carries away all but a tiny fraction of the initial \(p_{1}^{+}\) momentum, \(\xi\). At \(\eta\gg 1\), \(\xi\) will be of order \(1/\eta\). This sequence of emissions leads to the appearance of large logarithms scaling like \(\ln 1/\xi\) at each order in \(\alpha_{s}\) and is described in the HEF formalism by the resummation factor \(\mathcal{C}_{gg}(\xi,\mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R}^{2})\) or \(\mathcal{C}_{gq}(\xi,\mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R}^{2})\), depending on whether the emission with largest \(k^{+}\) in the cascade was a gluon or a quark. We stress that, in the LLA at leading power in \(\mathcal{O}(1/\eta)\), all other emissions should be of gluons. On the second stage the \(Q\bar{Q}\) pair is produced via the fusion of a _Reggeised gluon_ (\(R_{+}\)), moving in the proton direction and carrying the four-momentum \(q_{1}^{\mu}=\xi x_{1}P^{\mu}+q_{T1}^{\mu}\) and of the photon: \[\gamma(q)+R_{+}(q_{1})\to Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right](p)+g(k), \tag{14}\] where the final state gluon is necessary by virtue of colour and charge-parity conservation [49]. The corresponding HEF coefficient function can be calculated using the following prescription: \[H(\bar{s},\bar{t},\bar{u},(\mathbf{q}_{T1}\cdot\mathbf{p}_{T}),\mathbf{q}_{T 1}^{2})=\frac{(q_{1}^{+})^{2}}{4\mathbf{q}_{T1}^{2}}n_{-}^{\mu}n_{-}^{\nu} \mathcal{M}_{\mu\nu}=\frac{q_{T1}^{\mu}q_{T1}^{\nu}}{\mathbf{q}_{T1}^{2}} \mathcal{M}_{\mu\nu}, \tag{15}\] where the tensor \(\mathcal{M}_{\mu\nu}\) is the squared QCD amplitude of the process Eq. (6): 1. summed over the polarisations of the photon, quarkonium and final-state gluon, but not contracted with respect to the polarisation indices of the initial-state gluon in both the amplitude and its complex conjugate 2. with the momentum of the initial on-shell gluon \(p_{1}\) simply replaced by that of the off-shell Reggeised gluon, namely \(q_{1}\). 3. accordingly, the Mandelstam variables for the subprocess Eq. (14) are defined as \(\bar{s}=(q+q_{1})^{2}\), \(\bar{t}=(q_{1}-p)^{2}\), \(\bar{u}=(q-p)^{2}\). The two equalities in Eq. (15) hold due to the Ward-Takahashi-like6 identity \(q_{1}^{\mu}\mathcal{M}_{\mu\nu}=q_{1}^{\nu}\mathcal{M}_{\mu\nu}=0\) which holds for the tensor \(\mathcal{M}_{\mu\nu}\) of the process Eq. (6) even with \(q_{1}^{2}<0\). The first equality in Eq. (15) uses the simplest coupling between a Reggeised gluon and a "QCD" gluon from the infinite tower of such couplings existing in Lipatov's EFT for multi-Regge processes in QCD [50]. The second equality is referred to as the Gribov's trick and shows that a smooth \(\mathbf{q}_{T1}^{2}\to 0\) limit of Eq. (15) should exist. Footnote 6: As opposed to Slavnov–Taylor identities usually applicable in non-Abelian theories like QCD. All what is needed to perform LLA computations is the coefficient function \(H\) at LO in \(\alpha_{s}\) for the process Eq. (14) first derived in 2004 [51]. In our calculation, we will use a more compact expression [52]: \[H(\bar{s},\bar{t},\bar{u},\mathbf{q}_{T1}\cdot\mathbf{p}_{T},\mathbf{q }_{T1}^{2})=\pi^{3}\alpha\alpha_{s}^{2}e_{Q}^{2}\frac{\langle Q^{1}S_{1}^{[1]} \rangle}{M^{3}}\frac{2048M^{2}}{27(M^{2}-\bar{s})^{2}(M^{2}-\bar{u})^{2}( \mathbf{q}_{T1}^{2}+M^{2}-\bar{t})^{2}}\] \[\times\Big{[}(\mathbf{q}_{T1}^{2})^{4}M^{2}+M^{2}(\bar{s}^{2}+ \bar{s}\bar{u}+\bar{u}^{2}-M^{2}(\bar{s}+\bar{u}))^{2}+(\mathbf{q}_{T1}^{2})^{ 3}(M^{2}(5\bar{s}+3\bar{u})-7M^{4}-\bar{s}\bar{u})\] \[+(\mathbf{q}_{T1}^{2})^{2}(\bar{s}\bar{u}(\bar{u}-\bar{s})+M^{4}( 3\bar{u}-11\bar{s})+M^{2}(7\bar{s}^{2}+2\bar{s}\bar{u}-3\bar{u}^{2}))+\mathbf{ q}_{T1}^{2}\bar{s}(\bar{s}\bar{u}^{2}+M^{4}(\bar{u}-6\bar{s})\] \[+M^{2}(4\bar{s}^{2}+\bar{s}\bar{u}-\bar{u}^{2}))-2(\mathbf{q}_{ T1}\cdot\mathbf{p}_{T})((\mathbf{q}_{T1}^{2})^{3}M^{2}+(\mathbf{q}_{T1}^{2})^{2}(-7M ^{4}-\bar{s}\bar{u}+M^{2}(3\bar{s}+4\bar{u})) \tag{16}\] \[+\mathbf{q}_{T1}^{2}(M^{4}(-7\bar{s}+2\bar{u})-\bar{s}^{2}\bar{u} +M^{2}(2\bar{s}^{2}+\bar{s}\bar{u}-2\bar{u}^{2}))-M^{2}(2M^{4}(\bar{s}+\bar{u })-2M^{2}\bar{u}(3\bar{s}+2\bar{u})\] \[+\bar{u}(3\bar{s}^{2}+4\bar{s}\bar{u}+2\bar{u}^{2})))-2M^{2}( \mathbf{q}_{T1}\cdot\mathbf{p}_{T})^{2}}{\mathbf{q}_{T1}^{2}}((\mathbf{q}_{T 1}^{2})^{3}+M^{2}\bar{s}^{2}+(\mathbf{q}_{T1}^{2})^{2}(M^{2}+2\bar{s})\] \[+\mathbf{q}_{T1}^{2}(2M^{2}\bar{s}+\bar{s}^{2}-2\bar{t}^{2})) \Big{]},\] where \(e_{Q}\) is the electric charge of the heavy quark in units of positron charge. The HEF coefficient function satisfies the following on-shell limit property: \[\int\limits_{0}^{2\pi}\frac{d\phi}{2\pi}\lim\limits_{\mathbf{q}_{T1}^{2}\to 0 }H(\bar{s},\bar{t},\bar{u},\mathbf{q}_{T1}\cdot\mathbf{p}_{T},\mathbf{q}_{T1}^ {2})=\frac{(-g^{\mu\nu})}{2}\mathcal{M}_{\mu\nu}, \tag{17}\] where \(\phi\) is the azimuthal angle of \(\mathbf{q}_{T1}\) relative to \(\mathbf{p}_{T}\), which remains on the l.h.s. even after taking the limit \(\mathbf{q}_{T1}^{2}\to 0\). On the r.h.s. of Eq. (17), the usual on-shell kinematics (\(q_{1}^{2}=0\)) for the tensor \(\mathcal{M}_{\mu\nu}\) of the process Eq. (6) is to be used as \(\mathbf{q}_{T1}^{2}\to 0\). As mentioned before, HEF allows one to resum a class of logarithmically-enhanced QCD corrections to CF. As such, the HEF partonic coefficient function can be used as a CF partonic coefficient function and convoluted with usual PDFs but this should in principle only be done in the region where the high-energy leading-power approximation and LLA is applicable, i.e. \(\eta\to\infty\). The general HEF formula for the CF partonic coefficient function for the process Eq. (13) has the form [26; 27; 28; 29]: \[\frac{d\hat{\sigma}_{\gamma i}^{\text{(HEF)}}}{d\Pi_{f}}=\int\limits_{\xi_{ \text{min}}}^{1}\frac{d\xi}{\xi}\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}C_{gi}(\xi, \mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R}^{2})\frac{H(\bar{s},\bar{t},\bar{u},( \mathbf{q}_{T1}\cdot\mathbf{p}_{T}),\mathbf{q}_{T1}^{2})}{2M^{2}\xi(1+\eta)} \delta^{(4)}(q+q_{1}-p-k), \tag{18}\] where \(\xi_{\text{min}}=1/z/(1+\eta)\) due to kinematic reasons, so that \(z\geq 1/(1+\eta)\). \(d\Pi_{f}\) is the usual Lorentz-invariant phase-space volume element for the final-state particles with momenta \(p\) and \(k\). Integrating out the momentum conserving \(\delta\) functions, one can derive the following master formula for the \(z\) and \(p_{T}\)-differential partonic cross section: \[\frac{d\hat{\sigma}_{\gamma i}^{\text{(HEF, $\ln 1/\xi$)}}}{dzd^{2}\mathbf{p}_{T}}= \frac{1}{2M^{2}}\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}\int\limits_{\xi_{\text{ min}}}^{1}\frac{d\xi}{\xi}\ C_{gi}\left(\xi,\mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R}^{2} \right)\frac{d\mathcal{H}(\xi(1+\eta),z,\mathbf{q}_{T1},\mathbf{p}_{T})}{dzd^{2 }\mathbf{p}_{T}}, \tag{19}\] where \(z\geq 1/(1+\eta)\), \[\frac{d\mathcal{H}(y,z,\mathbf{q}_{T1},\mathbf{p}_{T})}{dzd^{2}\mathbf{p}_{T}}= \frac{H(\bar{s},\bar{t},\bar{u},(\mathbf{q}_{T1}\cdot\mathbf{p}_{T}),\mathbf{q}_ {T1}^{2})}{2(2\pi)^{2}yz}\delta\left((1-z)\left(M^{2}y-\frac{M^{2}+\mathbf{p} _{T}^{2}}{z}\right)-(\mathbf{q}_{T1}-\mathbf{p}_{T})^{2}\right), \tag{20}\] with \(y=\xi(1+\eta)\) and Mandelstam variables can be expressed in terms of \(y\), \(z\), \(\mathbf{q}_{T1}^{2}\) and \(\mathbf{p}_{T}^{2}\) as: \[\bar{s} =M^{2}y-\mathbf{q}_{T1}^{2}, \tag{21}\] \[\bar{t} =-\frac{1}{z}\left[M^{2}(yz-1)-\mathbf{p}_{T}^{2}\right],\] \[\bar{u} =-\frac{1}{z}\left[M^{2}(1-z)+\mathbf{p}_{T}^{2}\right].\] Due to the remaining \(\delta\) function in Eq. (20), one has the identity: \[\mathbf{q}_{T1}\cdot\mathbf{p}_{T}=\frac{1}{2\varepsilon}\left[\mathbf{p}_{T}^{2 }+z\mathbf{q}_{T1}^{2}-M^{2}(z\xi(1+\eta)-1)(1-z)\right], \tag{22}\] which, together with Eq. (21), allows one to remove all explicit scalar products of transverse momenta from Eq. (16) and express these purely as functions of \(\mathbf{p}_{T}^{2}\), \(\mathbf{q}_{T1}^{2}\), \(\eta\), \(z\) and \(\xi\). To compute the HEF contribution to the total \(p_{T}\)-integrated quarkonium photoproduction cross section, one simply has to integrate Eq. (19) and thus only Eq. (20) over \(\mathbf{p}_{T}\) as it appears nowhere else. It turns out that the integration over \(\mathbf{p}_{T}\) can be carried out in a closed analytic form at fixed \(z\), which is very useful for numerical calculations. We explain the integration technique and give the corresponding analytic result for the \(\mathbf{p}_{T}\)-integrated function Eq. (20) in A. ### Strict LLA in \(\ln(1+\eta)\) The resummation of \(\ln(1/\xi)\) large logarithms to all orders in \(\alpha_{s}\) in Eq. (19) is provided by the resummation functions \(\mathcal{C}_{gi}\) which, in the LLA in \(\ln(1/\xi)\), have the following expansions in powers of \(\alpha_{s}\ln\frac{1}{\xi}\): \[\mathcal{C}_{gi}(\xi,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2})=\sum_{n=0}^{ \infty}\left[\alpha_{s}(\mu_{R})\ln\frac{1}{\xi}\right]^{n}\mathcal{C}_{gi}^{ (n)}(\mathbf{q}_{T}^{2},\mu_{F}^{2}). \tag{23}\] Substituting this expansion into Eq. (19) and making the change of integration variable \(\xi=y/(1+\eta)\) one obtains: \[\frac{d\hat{\sigma}_{\gamma i}^{\rm(HEF,\,\ln 1/\xi)}}{dzd^{2} \mathbf{p}_{T}}=\frac{1}{2M^{2}}\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}\sum_{n=0 }^{\infty}\mathcal{C}_{gi}^{(n)}\left(\mathbf{q}_{T1}^{2},\mu_{F}^{2}\right) \int\limits_{1/z}^{1+\eta}\frac{dy}{y}\left[\alpha_{s}(\mu_{R})\left(\ln(1+ \eta)-\ln y\right)\right]^{n}\frac{d\mathcal{H}(y,z,\mathbf{q}_{T1},\mathbf{p}_ {T})}{dzd^{2}\mathbf{p}_{T}}\] \[=\frac{1}{2M^{2}}\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}\sum_{n=0} ^{\infty}\mathcal{C}_{gi}^{(n)}\left(\mathbf{q}_{T1}^{2},\mu_{F}^{2}\right) \left[\alpha_{s}(\mu_{R})\ln(1+\eta)\right]^{n}\int\limits_{1/z}^{1+\eta} \frac{dy}{y}\frac{d\mathcal{H}(y,z,\mathbf{q}_{T1},\mathbf{p}_{T})}{dzd^{2} \mathbf{p}_{T}} \tag{24}\] \[-(n+1)\frac{\alpha_{s}(\mu_{R})}{2M^{2}}\int\frac{d^{2}\mathbf{q} _{T1}}{\pi}\sum_{n=0}^{\infty}\mathcal{C}_{gi}^{(n+1)}\left(\mathbf{q}_{T1}^ {2},\mu_{F}^{2}\right)\left[\alpha_{s}(\mu_{R})\ln(1+\eta)\right]^{n}\int \limits_{1/z}^{1+\eta}\frac{dy}{y}\ln y\frac{d\mathcal{H}(y,z,\mathbf{q}_{T1}, \mathbf{p}_{T})}{dzd^{2}\mathbf{p}_{T}}+\ldots,\] where additional terms arising from the expansion of \((\ln(1+\eta)-\ln y)^{n}\) are denoted by the ellipsis. If the function \(\mathcal{H}\) decreases like a power law at large \(y\sim(1+\eta)\), which is true for the case at hand, then the integrals over \(y\) converge and therefore only the first term in Eq. (24) has the form of expansion in \([\alpha_{s}\ln(1+\eta)]^{n}\). In other words, it belongs to the LLA in terms of \(\ln(1+\eta)\), while other terms are further \(\alpha_{s}\)-suppressed as they contribute to N\({}^{\text{\tiny{$\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{ \mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{ \mathrm{ }}}}}}}}$}}}$}}}}\)LA with respect to \(\ln(1+\eta)\). Summing the series Eq. (23) only for this first term and extending the integration in \(y\) up to infinity, which amounts to only adding power-suppressed corrections in \(\eta\), one obtains the following resummation formula in the _strict LLA in \(\ln(1+\eta)\)_: \[\frac{d\hat{\sigma}_{\gamma i}^{\rm(HEF,\,\ln(1+\eta))}}{dzd^{2} \mathbf{p}_{T}}=\frac{1}{2M^{2}}\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}\mathcal{C} _{gi}\bigg{(}\frac{1}{1+\eta},\mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R}^{2} \bigg{)}\!\!\int\limits_{1/z}^{\infty}\frac{dy}{y}\frac{d\mathcal{H}(y,z, \mathbf{q}_{T1},\mathbf{p}_{T})}{dzd^{2}\mathbf{p}_{T}}. \tag{25}\] In this approximation, there is no longitudinal integration connecting the resummation part with the HEF coefficient function ("projectile impact-factor" in the BFKL terminology), which illustrates the connection of our formalism with small-\(x\) resummation provided by the CGC/dipole-model framework. The possibility to upgrade the latter framework by restoring the mentioned longitudinal momentum integral has been recently discussed [53]. We also note that the development of the formalism to perform the NLL computations in the original HEF framework [26; 27; 28; 29] of Eq. (19) was recently significantly advanced [54; 55] with a clarification of the mechanism of the cancellation of the divergences at NLL. In Section 4.2, we will compare the numerical results of the \(\ln(1/\xi)\) and \(\ln(1+\eta)\) LLA formalisms matched to the NLO CF computation and will assess the phenomenological relevance of the differences between them for the total cross section of inclusive quarkonium photoproduction. ### Resummation functions in the DLA In the framework of CF at leading twist, there are two kinds of large perturbative corrections which enter the cross sections at large \(\sqrt{s_{\gamma p}}\). First, there are the corrections enhanced by logarithms of \(1+\eta\), which we have discussed in Section 3.1 and 3.2. These lead to the perturbative instability of the quarkonium-production cross section. The second type of large logarithms enters the DGLAP evolution of PDFs as corrections enhanced by \(\ln 1/z\) (\(z\) being the parton light-cone momentum fraction) to the DGLAP splitting functions [56; 29; 57] which are functions of \(z\). Most of the existing fits of collinear PDFs do not take into account these corrections to the DGLAP splitting functions to all orders in \(\alpha_{s}\) because evolution of these PDFs is governed by the fixed-order NLO or NNLO DGLAP splitting functions. The resummation of \(\ln(1/z)\)-enhanced corrections in the PDF evolution has proven to be a complicated task, requiring a nontrivial matching of the DGLAP to the BFKL series [57] which only relatively recently has led to significant improvements in the quality of PDF fits [58; 59]. In the case of \(p_{T}\)-integrated quarkonium-production cross sections, the perturbative instability of the cross section sets in at relatively modest collision energies, see e.g. Fig. 2. The fixed-order PDF evolution is still valid at these values of \(x\), the PDF uncertainties are relatively small and usage of high-energy improved PDFs [58] does not resolve the instability of NLO cross section [43]. Instead, the corrections leading to the perturbative instability come from the high-energy behaviour of the coefficient function. Our goal is thus to take them into account consistently with the fixed-order NLO and NNLO PDF evolutions. As we recently discussed [30], in order to achieve the goal stated in the previous paragraph, one cannot use the full LLA of HEF, but the resummation functions \(\mathcal{C}_{gg}\) and \(\mathcal{C}_{gg}\) of the HEF formalism in the DLA, which resums terms scaling like \(\left(\alpha_{s}\ln(1/x)\ln(\mu_{F}^{2}/\mathbf{q}_{T}^{2})\right)^{n}\) to all orders in perturbation theory via the Blumlein-Collins-Ellis formula [60]: \[\mathcal{C}_{gg}^{\rm(DL)}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2})=\frac {\hat{\alpha}_{s}}{\mathbf{q}_{T}^{2}}\begin{cases}J_{0}\left(2\sqrt{\hat{ \alpha}_{s}\ln\left(\frac{1}{x}\right)\ln\left(\frac{\mu_{F}^{2}}{\mathbf{q}_{ T}^{2}}\right)}\right)&\text{if }\mathbf{q}_{T}^{2}<\mu_{F}^{2},\\ I_{0}\left(2\sqrt{\hat{\alpha}_{s}\ln\left(\frac{1}{x}\right)\ln\left(\frac{ \mathbf{q}_{T}^{2}}{\mu_{F}^{2}}\right)}\right)&\text{if }\mathbf{q}_{T}^{2}>\mu_{F}^{2}.\end{cases} \tag{26}\] where \(\hat{\alpha}_{s}=\alpha_{s}(\mu_{R})\mathcal{C}_{A}/\pi\), and \(J_{0}\) (\(I_{0}\)) are the Bessel functions of the first (second) kind7. Any corrections in the LLA beyond this approximation will be inconsistent either with the \(\mu_{F}\) dependence or with the factorisation scheme on which most of the existing NLO and NNLO PDFs are based, except those coming from Refs. [58; 59]. Footnote 7: Despite an apparent non-smoothness at \(\mathbf{q}_{T}^{2}=\mu_{F}^{2}\), both functions actually have the same series expansion in \(\hat{\alpha}_{s}\ln\mu_{F}^{2}/\mathbf{q}_{T}^{2}\), which is convergent for all \(\mathbf{q}_{T}^{2}\). For the case of quark-induced channel, the resummation factor in the LLA (and DLA) has to be modified as: \[\mathcal{C}_{gg}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2})=\frac{C_{F}}{C _{A}}\left[\mathcal{C}_{gg}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2})- \delta(1-x)\delta(\mathbf{q}_{T}^{2})\right], \tag{27}\] which corresponds to the leading in \(k^{+}\) gluon emission8 being replaced by the quark one, while in the \(t\)-channel only gluons still propagate in the leading power (Eikonal) approximation with respect to \(x\). The DLA resummation factors Eq. (26) and Eq. (27) have the following remarkable properties: \[\int\limits_{0}^{\mu_{F}^{2}}d\mathbf{q}_{T}^{2}\ \mathcal{C}_{gg}^{ \text{(DL)}}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2}) = \delta(1-x), \tag{28}\] \[\int\limits_{0}^{\mu_{F}^{2}}d\mathbf{q}_{T}^{2}\ \mathcal{C}_{gg}^{ \text{(DL)}}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2}) = 0, \tag{29}\] which can be most easily proven using their Mellin-space representations. We will often rely on these properties in the calculations below. ### Numerical implementation of HEF and cross checks To implement numerically the resummation formulae Eq. (19) and Eq. (25), we have to deal with the strong oscillatory behaviour of the resummation function Eq. (26) at \(\mathbf{q}_{T}^{2}\to 0\). Fortunately, the convergence properties of the integrals to be computed can be significantly improved using the properties of Eq. (28) and Eq. (29). To this end, we add and then subtract the small-\(\mathbf{q}_{T}\) limit of the HEF coefficient function \(\mathcal{H}\) multiplied by \(\theta(\mu_{F}^{2}-\mathbf{q}_{T1}^{2})\) in Eq. (19) to get: \[\frac{d\hat{\sigma}_{\gamma i}^{\text{(HEF, in 1/\xi)}}}{dzd^{2} \mathbf{p}_{T}}=\frac{1}{2M^{2}}\int\limits_{\xi_{\text{min}}}^{1}\frac{d\xi} {\xi}\ \delta_{ig}\delta(\xi-1)\int\limits_{0}^{2\pi}\frac{d\phi}{2\pi}\frac{d\mathcal{ H}(\xi(1+\eta),z,\mathbf{q}_{T1}^{2}=0)}{dzd^{2}\mathbf{p}_{T}}+\frac{d\hat{ \sigma}_{\gamma i}^{\text{(HEF, in 1/\xi)}}}{dzd^{2}\mathbf{p}_{T}}, \tag{30}\] \[\frac{d\hat{\sigma}_{\gamma i}^{\text{(HEF, in 1/\xi)}}}{dzd^{2} \mathbf{p}_{T}}=\frac{1}{2M^{2}}\int\limits_{\xi_{\text{min}}}^{1}\frac{d\xi} {\xi}\] (31) \[\times\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}\mathcal{C}_{gi}^{ \text{(DL)}}(\xi,\mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R}^{2})\left[\frac{d \mathcal{H}(\xi(1+\eta),z,\mathbf{q}_{T1}^{2})}{dzd^{2}\mathbf{p}_{T}}- \frac{d\mathcal{H}(\xi(1+\eta),z,\mathbf{q}_{T1}^{2}=0)}{dzd^{2}\mathbf{p}_{T} }\theta(\mu_{F}^{2}-\mathbf{q}_{T1}^{2})\right].\] The first integral over \(\xi\) in Eq. (30) is trivially removed due to the \(\delta\) function and this term just reproduces the known LO CF result, with the LO coefficient function equal to \[c_{0}(\eta,z_{\text{max}})=\frac{1}{2M^{2}F_{\text{LO}}(1+\eta)}\int\limits_{ 0}^{z_{\text{max}}}dz\frac{d\mathcal{H}(1+\eta,z,\mathbf{q}_{T1}^{2}=0)}{dz}. \tag{32}\] Due to the on-shell-limit property of the HEF coefficient function of Eq. (17), Eq. (32) exactly reproduces the well-known LO CF scaling function \(c_{0}\)[42]. The expression in square brackets in the _LO-subtracted_ HEF result, Eq. (31), tends to zero when \(\mathbf{q}_{T1}^{2}\to 0\). As such, it damps the rapid oscillations of the resummation function, Eq. (26), and facilitates the numerical evaluation of the integral Eq. (31). The same procedure should be performed also with the LLA \(\ln(1+\eta)\) resummation formula Eq. (25), yielding the result: \[\frac{d\hat{\sigma}_{\gamma i}^{\text{(HEF, in 1+\eta)}}}{dzd^{2} \mathbf{p}_{T}}=\frac{\delta(\eta)}{2M^{2}}\int\limits_{0}^{2\pi}\frac{d\phi }{2\pi}\int\limits_{1/z}^{\infty}\frac{dy}{y}\frac{d\mathcal{H}(y,z,\mathbf{ q}_{T1}^{2}=0,\mathbf{p}_{T})}{dzd^{2}\mathbf{p}_{T}}+\frac{d\hat{\sigma}_{ \gamma i}^{\text{(HEF, in 1+\eta)}}}{dzd^{2}\mathbf{p}_{T}}, \tag{33}\] where \[\frac{d\hat{\sigma}_{\gamma i}^{\text{(HEF, in 1+\eta)}}}{dzd^{2} \mathbf{p}_{T}} = \frac{1}{2M^{2}}\int\frac{d^{2}\mathbf{q}_{T1}}{\pi}\mathcal{C}_{gi }^{\text{(DL)}}\left(\frac{1}{1+\eta},\mathbf{q}_{T1}^{2},\mu_{F}^{2},\mu_{R} ^{2}\right) \tag{34}\] \[\times\int\limits_{1/z}^{\infty}\frac{dy}{y}\left[\frac{d \mathcal{H}(y,z,\mathbf{q}_{T1},\mathbf{p}_{T})}{dzd^{2}\mathbf{p}_{T}}- \frac{d\mathcal{H}(y,z,\mathbf{q}_{T1}^{2}=0,\mathbf{p}_{T})}{dzd^{2} \mathbf{p}_{T}}\theta(\mu_{F}^{2}-\mathbf{q}_{T1}^{2})\right].\] The first term in Eq. (33) is just a crude approximation to the LO CF coefficient function, which corresponds to the LLA \(\ln(1+\eta)\). It does not contribute at \(\eta\gg 1\) and we will discard it through the matching procedure described in Section 4.1. \(d\hat{\sigma}_{\gamma i}^{\text{(HEF, }\ln(1+\eta))}/dzd^{2}\mathbf{p}_{T}\) in Eq. (34) is the _LO-subtracted HEF result for the strict \(\ln(1+\eta)\) resummation,_ which we will use in the matching procedure below. Let us emphasise now an important cross check of the resummation formalism that we have employed. The resummation formulae Eq. (31) or Eq. (34), when expanded up to \(O(\alpha_{s})\), should reproduce the \(\eta\gg 1\) asymptotics of the NLO scaling functions in CF which we have already mentioned in Section 2. We were able to perform this expansion of the scaling functions differential in \(z\) and \(\mathbf{p}_{T}^{2}\) within the HEF formalism in an analytic form. We have found an excellent agreement with numerical NLO CF results. This agreement constitutes another non-trivial cross check of the HEF formalism at NLO, supplementing other such cross checks at NLO for different processes [61; 62; 63; 30] and at NNLO [64; 65; 66; 67; 68; 69]. Moreover, since the DLA for the resummation factors holds up to \(O(\alpha_{s}^{3})\), we were also able to numerically compute the \(\eta\gg 1\) asymptotics of the \(\alpha_{s}^{2}\ln(1+\eta)\) term in the NNLO CF coefficient function for \(J/\psi\) inclusive photoproduction. We describe all these computations in detail in B. ## 4 Matching HEF to NLO CF ### Inverse-Error-Weighting matching In order to combine the NLO CF estimate of \(\hat{\sigma}_{\gamma i}(\eta)\) -which is the best source information we have on the behaviour of this quantity for \(\eta\) not far from unity- to the corresponding HEF estimate at \(\eta\gg 1\), we use the _Inverse-Error-Weighting (InEW)_ matching prescription. It has been first introduced in Ref. [34] and later improved in our previous paper [30] where we have provided a self-consistent scheme of estimation of errors entering the InEW weights. The InEW matching is based on a weighted sum of both CF and HEF partonic cross sections such that \[\hat{\sigma}_{\gamma i}(\eta)=\hat{\sigma}_{\gamma i}^{\text{(CF, LO)}}(\eta)+\left[w_{\gamma i}^{\text{(CF)}}(\eta)\hat{\sigma}_{\gamma i}^{ \text{(CF, NLO)}}(\eta)+w_{\gamma i}^{\text{(HEF)}}(\eta)\hat{\sigma}_{\gamma i }^{\text{(HEF)}}(\eta)\right], \tag{35}\] with the following prescription for the weight functions: \[w_{\gamma i}^{\text{(CF)}}(\eta)=\frac{\left(\Delta\hat{\sigma}_{\gamma i}^{ \text{(CF)}}(\eta)\right)^{-2}}{\left(\Delta\hat{\sigma}_{\gamma i}^{\text{ (CF)}}(\eta)\right)^{-2}+\left(\Delta\hat{\sigma}_{\gamma i}^{\text{(HEF)}}( \eta)\right)^{-2}},\ \ w_{\gamma i}^{\text{(HEF)}}(\eta)=1-w_{\gamma i}^{\text{(CF)}}(\eta). \tag{36}\] Correspondingly, the local matching uncertainty in \(\eta\) is given by the matching procedure and reads: \[\Delta\hat{\sigma}_{\gamma i}^{\text{(InEW)}}(\eta)=\left[\left(\Delta\hat{ \sigma}_{\gamma i}^{\text{(CF)}}(\eta)\right)^{-2}+\left(\Delta\hat{\sigma}_{ \gamma i}^{\text{(HEF)}}(\eta)\right)^{-2}\right]^{-1/2}. \tag{37}\] An important motivation behind using InEW is to be able to include, via the error estimates, \(\Delta\hat{\sigma}_{\gamma i}^{\text{(CF)}}\) and \(\Delta\hat{\sigma}_{\gamma i}^{\text{(HEF)}}\), all the available perturbative information on the effects _missing_ in each of the contributions in the corresponding limit [30]. Doing so, we aim to reduce as much as possible the arbitrariness of the matching. The NLO CF contribution is obviously missing any kind of NNLO corrections. At \(\eta\gg 1\), the NNLO corrections principally contain the high-energy logarithmic term \(\alpha_{s}^{2}\ln(1+\eta)\) and contributions which are constant in \(\eta\). We estimate the first from our HEF resummed result and we parameterise the second, on which we know nothing but the scaling in \(\alpha_{s}\), from the NLO CF result multiplied by \(\alpha_{s}\). We then combine our _estimates_ of these higher-order contributions in quadratures: \[\Delta\hat{\sigma}_{\gamma i}^{\text{(CF)}}(\eta)=\sqrt{\left(\alpha_{s}^{2}( \mu_{R})C_{\gamma i}^{\text{(LLA)}}(\mu_{F})\ln(1+\eta)\right)^{2}+\left( \alpha_{s}(\mu_{R})\hat{\sigma}_{\gamma i}^{\text{(CF, NLO)}}(\eta)\right)^{2}}, \tag{38}\] where the coefficient of the LLA term, \(C^{\rm(LLA)}_{\gamma i}\), is obtained from the expansion of the HEF resummed result Eq. (25) for \(\hat{\sigma}_{\gamma i}\) up to NNLO: \[C^{\rm(LLA)}_{\gamma i}(\mu_{F})=\frac{F_{LO}}{(2\pi)^{2}}\left[c_{2}^{(\gamma i )}(\infty,z_{\rm max})+2C_{A}c_{1}^{(\gamma i)}(\infty,z_{\rm max})\ln\left( \frac{M^{2}}{\mu_{F}^{2}}\right)+C_{A}\hat{c}_{1}^{(\gamma i)}(\infty,z_{\rm max })\ln^{2}\left(\frac{M^{2}}{\mu_{F}^{2}}\right)\right], \tag{39}\] where the asymptotic values of scaling functions \(c_{1}\), \(c_{2}\) and \(\bar{c}_{1}\) are computed numerically, using respectively Eq. (10), Eq. (11) and Eq. (12) of B. We cannot compute the coefficient of the NNLO term which is constant at \(\eta\gg 1\) so far as it belongs to NLL HEF. The second term under the square root in Eq. (38) thus provides a generic estimate of higher-order corrections to \(\hat{\sigma}_{\gamma i}^{\rm(CF)}\) which are non-logarithmic in \(\eta\). It is simply constructed from \(\hat{\sigma}_{\gamma i}^{\rm(CF,\ NLO)}\), which is constant at \(\eta\gg 1\), multiplied by \(\alpha_{s}\) as an estimate of \(\alpha_{s}^{2}\) corrections. On the other hand, our HEF calculation is done only in the LLA. Corrections beyond LLA are thus missing. Let us also stress that any HEF computation at any logarithmic accuracy is accurate up to power corrections in \(\eta\). It is therefore natural to account for both uncertainties for the missing logarithms and for the missing power corrections in \(\eta\). Combining the estimates of uncertainties from these two sources we get: \[\Delta\hat{\sigma}_{\gamma i}^{\rm(HEF)}(\eta)=\sqrt{\left(\alpha_{s}(\mu_{R} )\hat{\sigma}_{\gamma i}^{\rm(CF,\ NLO)}(\eta)\right)^{2}+\left(C^{\rm(HEF)}_ {\gamma i}\eta^{-\alpha_{\gamma i}^{\rm(HEF)}}\right)^{2}}, \tag{40}\] where the first term under the square root weighs roughly the unknown NLLA corrections, whose perturbative expansion starts from a constant \(O(\alpha_{s}^{2})\) term at \(\eta\gg 1\), and the second term stands for the power corrections missing in the HEF. We compute the exponent \(\alpha_{\gamma i}^{\rm(HEF)}>0\) and the normalisation factor \(C^{\rm(HEF)}_{\gamma i}\) numerically from the deviation of the known NLO CF (\(O(\alpha_{s})\)) coefficient function from its high-energy limit: \(\hat{\sigma}_{\gamma i}^{\rm(CF,\ NLO)}(\eta)-\hat{\sigma}_{\gamma i}^{\rm( CF,\ NLO)}(\infty)\). The behaviour of the error estimates obtained as described above together with the resulting InEW weight functions for the HEF contributions is illustrated in the top panel of Fig. 4. The transition between CF and HEF contributions happens around the point where \(\Delta\hat{\sigma}_{\gamma i}^{\rm(CF)}\simeq\Delta\hat{\sigma}_{\gamma i}^{ \rm(HEF)}\), which occurs at \(\eta\simeq 10\) for the \(\gamma g\) channel which is dominant at high \(\sqrt{s_{\gamma p}}\) and at \(\eta\simeq 6\) for the \(\gamma q\) channel. The bulk of the matching uncertainty, Eq. (37), is concentrated in the neighbourhood of this point as can be seen in Fig. 4. The relative magnitude of the CF and HEF contributions to the final result can be seen on the bottom panel of Fig. 4. For \(\sqrt{s_{\gamma p}}<1\) TeV, the HEF contribution, albeit being very significant, is still smaller than the CF one. As a consequence, for all practically available energies, the matching of HEF contributions to CF ones is necessary and the HEF calculation alone cannot provide a reliable result. ### Numerical results and theory uncertainties. The dynamical scale-choice. For the numerical computation of cross sections, we have used the multi-threaded version of the vegas Monte-Carlo integration algorithm, with cross checks using the suave algorithm as implemented in the CUBA library [70]. The relative uncertainty of the integration for the total cross sections is below 1%. To obtain the numerical results shown in this section, we have used the CT18NLO PDF set [71] and the corresponding value of \(\alpha_{s}(M_{Z})\). The numerical results for the total cross section of inclusive \(J/\psi\) photoproduction as the function of the photon-proton collision energy are shown in the Fig. 5. We first concentrate on the left panel where the results with the default (conventional) scale-choice, \(\mu_{F}=\mu_{R}=M\), are shown. The hatched bands indicate the scale dependence about the central-scale result using the same 5-point scale-variation prescription as in Fig. 2 in Section 2. The key feature of the matched result in Fig. 5 is of course that the scale-variation band does not have the pathological high-energy behaviour of the fixed-order result shown on the left panel of Fig. 2. We also plot in Fig. 5 a separate \(\mu_{F}\)-variation band using another style of shading. The latter band shows that the \(\mu_{F}\) dependence of the matched cross section at high energy is dramatically reduced in comparison with the \(\mu_{F}\) dependence of the LO result. This comes from the partial cancellation of the \(\mu_{F}\) dependence of the resummation factor of Eq. (26) and of the PDF. The latter observation illustrates the consistency of the resummation scheme used for the present study with PDF fixed-order evolution. In Fig. 5, we compare our HEF results, obtained with the central \(\mu_{R}\) and \(\mu_{F}\) scale choices, for both the \(\ln(1+\eta)\) (Eq. (34)) and the \(\ln(1/\xi)\) (Eq. (31)) resummation matched to NLO CF. The former is depicted by the solid red lines and the latter by the dash-dotted red lines. As we have discussed in Section 3.2, the result from the \(\ln(1/\xi)\) resummation contains some NLL contributions relative to the \(\ln(1+\eta)\) resummation. It appears that the difference between both results lies well within the Figure 4: Top panel: plots of the error estimates \(\Delta\hat{\sigma}_{\gamma i}^{\rm(CF)}\) and \(\Delta\hat{\sigma}_{\gamma i}^{\rm(HEF)}\) and of the resulting InEW weights of HEF contributions, \(w_{\gamma i}^{\rm(HEF)}(\eta)\), (bottom plot) as well as of the matching uncertainties, \(\Delta\hat{\sigma}_{\gamma i}^{\rm(HEW)}\) (top plot). Bottom panel: Plots of the \(\eta\) integrand of the expression for the total cross section, Eq. (2), with the LO CF, NLO CF, HEF and matched approximations for \(\hat{\sigma}_{\gamma i}\) as well as of the corresponding matching uncertainty. The plots for \(\gamma q\) channel are multiplied by a factor \(8\) for visibility. scale-variation band of the \(\ln(1+\eta)\) resummation result (see the red hatched band of Fig. 5). This can be seen as a hint that the NLL HEF corrections are under control in our matching approach. A stronger statement would require matching a complete NLL HEF to (N)NLO CF computations. In Fig. 5, we also plot the matching uncertainty estimated with the help of Eq. (37). It turns out to be comparable to the residual \(\mu_{F}\) uncertainty while being significantly larger than the corresponding uncertainty estimated in our previous study [30] of \(\eta_{Q}\) hadroproduction where it was found to be negligible. This observation points at a stronger sensitivity of the process at hands to the details of the matching procedure due to the complicated non-monotonous shape of the functions involved (see the right panel of Fig. 4), which was less of a problem in the computation for \(\eta_{Q}\) hadroproduction. That being said, the matching uncertainty is still reasonably small and the scale uncertainty remains the main source of theoretical uncertainties. These are expected to be mitigated only by increasing the perturbative accuracy of the computation. The eye-catching drawback of the prediction in the left plot of Fig. 5 is the unphysically-looking dip in the scale-variation band for \(\sqrt{s_{\gamma p}}\) between 20 and 100 GeV. This dip arises from the large (negative) contribution of the loop corrections to \(\hat{\sigma}^{\rm(CF)}_{ij}(\eta)\) at \(\eta\simeq 3\) (Fig. 4, right panel) whose \(\mu_{R}\) dependence is not compensated by the running of \(\alpha_{s}\). This becomes clearly problematic when \(\mu_{R}<M\) as it was already discussed in [23]. In this region of relatively small \(\eta\), the leading-power approximation of HEF is certainly invalid and can not be invoked to cure this issue. In addition, the region of large \(\eta\), where HEF is valid, does not sufficiently contribute such that the \(\eta\)-integrated hadronic cross section obtained in our matched computation be different enough from the NLO CF result and thus insensitive to this feature of the loop correction at moderate values of \(\sqrt{s_{\gamma p}}\). Perhaps, the inclusion of threshold effects or high-energy resummation at subleading power in \(\eta\) could solve this problem for \(\mu_{R}<M\) which is de facto used when varying the scale \(\mu_{R}\) by a factor 2 about the "conventional" choice \(M\). Yet, as discussed in [23], the natural scale for the reaction which we discuss, whose Born contribution is a \(2\to 2\) scattering (Eq. (13)), is likely not \(M\), even for \(p_{T}\)-integrated observables. It is rather the invariant mass of the partonic system, \(\sqrt{\hat{s}}\), as the quarkonium is never produced alone. Indeed, in NRQCD, at least one non-soft gluon has to be emitted to photoproduce a heavy-quark pair in the \({}^{3}S^{[1]}_{1}\) state. Therefore, it is natural to choose the average value of \(\sqrt{\hat{s}}\) obtained from the LO CF subprocess, Figure 5: Total \(J/\psi\)-photoproduction cross section for \(z<0.9\) obtained via the matching of our HEF-resummed results matched to NLO CF ones using the default scale choice (left panel) and the dynamical scale choice (right panel). The red solid line corresponds to the \(\ln(1+\eta)\) resummation and the red dash-dotted line to the \(\ln(1/\xi)\) resummation. Eq. (6), for the central values for the scales \(\mu_{R}\) and \(\mu_{F}\). We find that our dynamical scale ranges9 for the \(J/\psi\) case from 3 GeV to 5 GeV for the highest hadronic energies, \(\sqrt{s_{\gamma p}}\), we will consider and from 9.5 to 16 GeV for the \(\Upsilon\) case. Footnote 9: The behaviour of \(\left\langle\hat{s}_{\gamma g}\right\rangle\) as a function of \(\sqrt{s_{\gamma p}}\) in the LO CF approximation of the CSM is well described by the following parametrisation: \(\left\langle\hat{s}_{\gamma g}\right\rangle(\sqrt{s_{\gamma p}})=M^{2}+ \left(\kappa_{1}L+((\hat{s})_{\alpha_{0}}-M^{2})\kappa_{2}L^{3}\right)/(1+ \kappa_{2}L^{3})\) with \(L=\ln\sqrt{s_{\gamma p}}/M\), and \(\left\langle\hat{s}\right\rangle_{\infty}=25\) GeV\({}^{2}\), \(\kappa_{1}=7\) GeV\({}^{2}\) and \(\kappa_{2}=0.03\) for \(M=3\) GeV while \(\left\langle\hat{s}\right\rangle_{\infty}=250\) GeV\({}^{2}\), \(\kappa_{1}=50\) GeV\({}^{2}\) and \(\kappa_{2}=0.1\) for \(M=9.5\) GeV. With this dynamical scale choice, \(\mu_{R}\) values close to \(M/2\) are not used at mid and large \(\sqrt{s_{\gamma p}}\). Consequently, the dip in the scale-variation band simply disappears, see the right panel of Fig. 5. Let us stress that results with both central-scale choices, \(M\) vs \(\sqrt{\hat{s}_{\gamma g}}\), are compatible within the scale uncertainty. One notable difference is indeed the disappearance of the dip, the second is that the results of \(\ln(1+\eta)\) and \(\ln(1/\xi)\) resummations get closer to each other with the dynamical scale choice. We emphasise that simply increasing the value of the scale in the NLO CF computation does not help to solve the problem of negative cross sections at high energy (Fig. 2) because the values of the NLO CF cross section become negative for \(\mu_{F}>M\). The scale choice \(\hat{\mu}_{F}\) of Eq. (12), which is optimal from the point of view of the NLO CF computation [43], is smaller than \(M\) and leads to cross sections which lie at the lower edge of the LO CF scale-uncertainty band. These are clearly below our matching predictions even with the corresponding scale uncertainty, see the dashed purple line in both panels of Fig. 5. Having discussed our parameter choices, we are now in a position to present our final matched results. In Fig. 6, we show our matched predictions with the dynamical scale choice for the total inclusive \(J/\psi\) (left) and \(\Upsilon(1S)\) (right) photoproduction cross sections. We focus on the DLA HEF \(\ln(1+\eta)\)-resummation computation matched to NLO CF as described in Section 4.1 and use the dynamical scale-fixing prescription of Section 4.2. The scale-variation envelope (see the red hatched band) in Fig. 6 is computed with the CT18NLO PDF set. We have also used the three PDF sets which we used in our \(\eta_{c}\)-hadroproduction study [30], MSHT2@nlo_as118[72], NNPDF31_nlo_as_0118[73] and NNPDF31sx_nloMLlx_as_0118[58] with the central scales. From the left panel of Fig. 6, one can see that our predictions reproduce well the shape of the \(\sqrt{s_{\gamma p}}\) dependence and the magnitude of the H1 [35], FTPS [36] and NA14 [37] experimental data shown in the plot, unlike the pure NLO CF results, shown in Fig. 2. Our improved study clearly shows that the leading-\(\nu\) NRQCD contributions from the \({}^{3}S_{1}^{[1]}\) state, or equivalently the CSM, is sufficient to account for the \(J/\psi\) data. Even though colour-octet contributions are not needed here, given the large Figure 6: Total inclusive photoproduction cross section of \(J/\psi\) (left panel) and \(\Upsilon(1S)\) (right panel) with \(z<0.9\) as a function of \(\sqrt{s_{\gamma p}}\) in the DLA HEF\(\ln(1+\eta)\)) matched to NLO CF with our dynamical scale choice together with their scale variation (hatched band) and PDF uncertainties (solid bands). uncertainties of both our computation and the experimental data, a substantial contributions from these colour-octet states, expected from NRQCD NLO fits [3] despite being NNLO in \(v^{2}\), cannot be excluded. In any case, it will be necessary to consider them through HEF DLA matched to NLO CF as they are likely plagued by the same high-energy perturbative instability [22]. It is also worth noting that the PDF uncertainty of our matched results are smaller than their scale uncertainty in the region where experimental data are available. Results from different PDF sets are roughly compatible with each other, which shows that in this region the PDFs are reasonably constrained by the small-\(x\) DIS data from HERA and the problem of the CF NLO computation (Fig. 2) really comes from the poorly controlled high-energy behaviour of the coefficient function \(\hat{\sigma}_{\gamma i}(\eta)\). The PDF uncertainty becomes comparable to the scale uncertainty only above \(\sqrt{s_{\gamma p}}\sim 1\) TeV. Future experimental data at higher energies to be collected from ultra-peripheral collisions at the LHC in the collider mode [39; 40] before possible LHeC or FCC-eh data, as well as data at low energies from EIC [74; 75] and fixed-target experiments at the LHC [76; 77], will allow for more precise theory-data comparison. In the meantime, we are hopeful that theoretical studies could be advanced to higher accuracy. ## 5 Conclusions and outlook In the present paper, we have addressed the high-energy perturbative instability of the total cross section of inclusive photoproduction of vector quarkonia. We have assumed colour-singlet dominance under NRQCD factorisation as what regards the hadronisation of the \(Q\bar{Q}\) pair into a quarkonium state. In other words, we have restricted our analysis to the leading-\(v^{2}\) NRQCD contributions; these come from the \({}^{3}S_{1}^{[1]}\) states of the \(Q\bar{Q}\) pair. The partonic cross section of the process, \(\hat{\sigma}_{\gamma i}\), has been obtained by matching the HEF partonic cross section in the DLA which resums, at all order in \(\alpha_{s}\), doubly-leading-logarithmic terms scaling like \(\alpha_{s}^{n}\ln^{n-1}(\hat{s}/M^{2})\) in the \(\hat{s}\gg M^{2}\) limit to the NLO CF partonic cross section. The HEF resummation has been performed within the DLA in order to remain strictly compatible with the DGLAP evolution of the usual collinear PDFs. The matching has been performed using the InEW matching prescription. We have found that the matching is likely not the main source of the theoretical uncertainties of our final results. Our study leads to a solid conclusion that the resummation of high-energy logarithms in the coefficient function of CF solves the perturbative instability of the NLO CF computation at \(\sqrt{s_{\gamma p}}\gg M\). It also yield an increase of the cross section up to values compatible with experimental data (within large experimental and theoretical uncertainties). In addition, it significantly reduces the \(\mu_{F}\) dependence of the cross section in this region in comparison to the LO CF prediction. Another important lesson which one should take from the present paper, as well as from our earlier study [30], is that, although the HEF contribution to the cross section becomes very significant at high \(\sqrt{s_{\gamma p}}\gg M\), the NLO CF contribution from \(\sqrt{\hat{s}_{\gamma i}}\gg M\) at \(\sqrt{s_{\gamma p}}\) as large as 1 TeV remain significant. In other words, predictions from HEF or \(k_{T}\)-factorisation taken alone [51; 52] would not be sufficient. Our results with the default scale choice \(\mu_{F}\simeq\mu_{R}\simeq M\) are now well behaved at high \(\sqrt{s_{\gamma p}}\) (Fig. 5). However, as we discussed in Section 4.2, we consider that using the invariant mass of the partonic scattering as a dynamical scale choice is better motivated since the CS \({}^{3}S_{1}^{[1]}\) vector \(S\)-wave \(Q\bar{Q}\) state is always produced in association with at least one hard gluon. As such, the partonic invariant mass is always larger than the quarkonium mass, \(M\). Our final predictions are evaluated using this scale choice (Fig. 6) and they agree well with the existing experimental data. Not only did negative cross sections disappear and did the \(\mu_{F}\) dependence decrease, but also theory now fully agrees with data. Our research program of studying quarkonium-production cross sections with matched computations of HEF DLA to NLO CF can be expediently continued in several directions, from resolving the same perturbative instabilities in the \(J/\psi\) total _inclusive_ hadroproduction [48; 18] or _exclusive_ photoproduction [78; 79; 80; 81; 82; 83] cross sections, to reconciling the behaviour of quarkonium \(p_{T}\)-distributions at moderate \(p_{T}\lesssim M\) with NRQCD. However, in order to improve the predictive power of the proposed formalism and to reduce its \(\mu_{R}\) dependence, it is desirable to go beyond the DLA on the resummation side. ## Acknowledgements We thank V. Bertone, M. Fucilla, L. Szymanowski, Y. Yedelkina for useful discussions and Y. Feng for having shared his FDC results with us. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 824093 in order to contribute to the EU Virtual Access NLOAccess and a Marie Sklodowska-Curie action "RadCor4HEF" under grant agreement No. 101065263. This project has also received funding from the Agence Nationale de la Recherche (ANR) via the grant ANR-20-CE31-0015 ("PrecisOnium") and via the IDEX Paris-Saclay "Investissements d'Avenir" (ANR-11-IDEX-0003-01) through the GLU-ODYNAMICS project funded by the "P2IO LabEx (ANR-10-LABX-0038)". This work was also partly supported by the French CNRS via the IN2P3 project GLUE@NLO, via the Franco-Chinese LIA FCPPL (Quarkonium4AFTER) and via the Franco-Polish EIA (GlueGraph). ## Appendix A Derivation of the \(\mathbf{p}_{T}\)-integrated HEF coefficient function In order to integrate Eq. (20) over \(\mathbf{p}_{T}\), we first split the integrations over the azimuthal angle \(\phi\) and \(\mathbf{p}_{T}^{2}\). Due to the identity (22), the \(\phi\) dependence can be eliminated from the factor \(H\) and, hence, the only \(\phi\)-dependent factor under the angular integral will be the \(\delta\) function: \[\int\limits_{0}^{2\pi}d\phi\ \delta\left(\frac{1-z}{z}(M^{2}(yz-1)-\mathbf{p}_{T} ^{2})-\mathbf{q}_{T1}^{2}-\mathbf{p}_{T}^{2}+2|\mathbf{q}_{T1}\|\mathbf{p}_{T }|\cos\phi\right)=\frac{2\theta(D)}{\sqrt{D}}, \tag{10}\] where \[D=4\mathbf{p}_{T}^{2}\mathbf{q}_{T1}^{2}-\left(\mathbf{p}_{T}^{2}+\mathbf{q}_ {T1}^{2}+\frac{1-z}{z}(\mathbf{p}_{T}^{2}-M^{2}(yz-1))\right)^{2}.\] The requirement \(D>0\) leads to the following upper and lower limits of the \(\mathbf{p}_{T}^{2}\) integration: \[\mathbf{p}_{T\pm}^{2}=\frac{1}{\mathbf{q}_{T1}^{2}}\left(z\mathbf{q}_{T1}^{2} \pm\sqrt{D_{1}}\right)^{2},\] with \(D_{1}=\mathbf{q}_{T1}^{2}(1-z)(M^{2}(yz-1)-z\mathbf{q}_{T1}^{2})\). From the requirement \(D_{1}>0\), it follows that \(\mathbf{q}_{T1}^{2}\leq M^{2}(yz-1)/z\). Parametrising \(\mathbf{p}_{T}^{2}=\mathbf{p}_{T-}^{2}+x(\mathbf{p}_{T+}^{2}-\mathbf{p}_{T-} ^{2})\) with \(0\leq x\leq 1\), one finds that the factor \(\sqrt{D}\) from Eq. (10) reduces to \(4\sqrt{D_{1}x(1-x)}\) while the rest of the dependence of the integrand on \(x\) is a rational function. Via a partial-fraction decomposition of the rational dependence on \(x\), the integral is expressed as linear combination of \[j_{n}(a,b)=\int\limits_{0}^{1}\frac{dx}{\sqrt{x(1-x)}}\frac{1}{(ax+b)^{n}}= \frac{\pi}{\sqrt{b(a+b)}}\left\{\begin{array}{cc}1&\text{for }n=1\\ \frac{a+2b}{2b(a+b)}&\text{for }n=2\end{array}\right., \tag{11}\] and the \(\mathbf{p}_{T}\)-integrated coefficient function takes the form: \[\begin{split}\frac{d\mathcal{H}(y,z,\mathbf{q}_{T1}^{2})}{dz}=& \frac{\langle\mathcal{O}[^{3}S_{1}^{[1]}]\rangle}{M^{3}}\frac{64\pi\alpha \alpha_{s}^{2}e_{Q}^{2}}{27yz\left(\tau(z-1)^{2}+d_{1}^{2}\right)^{2}\left(\tau (z-1)(2\tau z+z+1)-d_{1}^{2}\right)^{3}}\\ &\times\Big{\{}f_{1}\cdot\Big{[}j_{1}\left(4d_{1}\tau(z-1),d_{1}^{2 }-2d_{1}\tau(z-1)+\tau(z-1)(\tau(z-2)-1)\right)\\ &+\frac{z}{1-z}j_{1}\left(4d_{1}\tau z,(d_{1}-\tau z)^{2}+\tau \right)\Big{]}\\ &+f_{2}\cdot j_{2}\left(4d_{1}\tau(z-1),d_{1}^{2}-2d_{1}\tau(z-1)+ \tau(z-1)(\tau(z-2)-1)\right)\\ &+f_{3}\cdot j_{2}\left(4d_{1}\tau z,(d_{1}-\tau z)^{2}+\tau \right)\Big{\}},\end{split} \tag{12}\] where \(\tau=\mathbf{q}_{T1}^{2}/M^{2}\), \(d_{1}=\sqrt{D_{1}}/M^{2}\) and \[f_{1}= 2\tau(\tau+1)(z-1)^{3}z\left(d_{1}^{2}-\tau(z-1)(\tau z+1)\right)^ {2}\] \[\times\left[d_{1}^{4}(\tau+z)-d_{1}^{2}\tau\left(\tau\left(z\left(4 z^{2}-6z+5\right)-2\right)+2\tau^{2}(z-1)z+(z-2)z\right)\right.\] \[\left.+\tau^{2}(z-1)\left(\tau\left(z\left(\tau(z(z(3z-8)+8)-2)+(z -2)^{2}\right)-1\right)-z\right)\right],\] \[f_{2}= -\tau(\tau+1)(z-1)^{2}\left(d_{1}^{2}-\tau(z-1)(\tau z+1)\right)^ {2}\] \[\times\left[d_{1}^{6}\left(-\left(z^{2}+2\tau(z-1)\right)\right)+ d_{1}^{4}\tau(z-1)\left(z^{2}+2\tau(z-1)\right)(4\tau z+z+3)\right.\] \[-\left.d_{1}^{2}\tau^{2}(z-1)^{2}\left(-6\tau+3\tau^{2}z^{4}+2(4 \tau+1)\left(\tau^{2}+1\right)z^{3}+(2\tau((7-4\tau)\tau+2)+3)z^{2}+2(1-8\tau) \tau z\right)\right.\] \[\left.-\tau^{3}(z-1)^{3}(2\tau z+z+1)\left(2\tau\left(z^{2}+z+1 \right)(z-1)^{2}-z^{2}+\tau^{2}z(z(z(z+2)-6)+4)\right)\right],\] \[f_{3}= \tau z^{2}\left(d_{1}^{2}-\tau(z-1)(\tau z+1)\right)^{2}\] \[\times\left[d_{1}^{6}\left((z-1)^{2}-\tau((z-2)z+3)\right)\right.\] \[+d_{1}^{4}\tau(z-1)\left(-(z-1)\left(z^{2}-3\right)+4\tau^{2}z((z -2)z+3)-\tau(z-3)(z(3z-2)+3)\right)\] \[-d_{1}^{2}\tau^{2}(z-1)^{2}\left(9\tau+(\tau-1)\tau(5\tau+2)z^{4} +2\tau(3-5(\tau-2)\tau)z^{3}+\right.\] \[\left.2(\tau(\tau(7\tau-12)+3)+1)z^{2}+12\tau(2\tau-1)z-3\right)\] \[\left.+\tau^{3}(z-1)^{3}(2\tau z+z+1)(\tau(z(\tau(z(\tau((z-2)z+2) -(z-6)z-10)+6)+2(z-3))+3)-1)\right]. \tag{10}\] If one computes the HEF coefficient function with a minimum \(\mathbf{p}_{T}^{2}\) cut such that \(\mathbf{p}_{T}^{2}\geq\mathbf{p}_{T\,\mathrm{min}}^{2}\), one simply has to replace the lower limit of \(x\) integration in (10) by \[x_{\mathrm{min}}=\max\left(0,\frac{\mathbf{q}_{T1}^{2}\mathbf{p}_{T\,\mathrm{ min}}^{2}-(z\mathbf{q}_{T1}^{2}-\sqrt{D_{1}})^{2}}{2z\mathbf{q}_{T1}^{2}\sqrt{D_{1}}} \right). \tag{11}\] Integrals with such a cut also can be expressed in terms of elementary functions. ## Appendix B Derivation of the high-energy asymptotics of the NLO and NNLO CF scaling functions In this Appendix, we perform the \(\alpha_{s}\) expansion of the HEF resummation formula Eq. (34). As mentioned before, the \(\alpha_{s}\) expansion of Eq. (31) is different from the expansion of Eq. (34) only by the N\({}^{k\geq 1}\)LLA terms which are outside the scope of the present study. To this end, we rewrite \(\theta(\mu_{F}^{2}-\mathbf{q}_{T1}^{2})=\theta(M^{2}-\mathbf{q}_{T1}^{2})+ \theta(\mathbf{q}_{T1}^{2}-M^{2})\theta(\mu_{F}^{2}-\mathbf{q}_{T1}^{2})\) in Eq. (34) and use the series expansions in \(\alpha_{s}\) of \[\mathcal{C}_{gg}^{\mathrm{(DL)}}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2}) = \frac{\hat{\alpha}_{s}}{\mathbf{q}_{T}^{2}}\left[1+\hat{\alpha}_ {s}\ln\left(\frac{1}{x}\right)\ln\left(\frac{\mathbf{q}_{T}^{2}}{\mu_{F}^{2}} \right)+O(\alpha_{s}^{2})\right], \tag{12}\] \[\int\limits_{M^{2}}^{\mu_{F}^{2}}d\mathbf{q}_{T}^{2}\ \mathcal{C}_{gg}^{ \mathrm{(DL)}}(x,\mathbf{q}_{T}^{2},\mu_{F}^{2},\mu_{R}^{2}) = -\hat{\alpha}_{s}\ln\left(\frac{M^{2}}{\mu_{F}^{2}}\right)-\frac{ \hat{\alpha}_{s}^{2}}{2}\ln^{2}\left(\frac{M^{2}}{\mu_{F}^{2}}\right)\ln \left(\frac{1}{x}\right)+O(\alpha_{s}^{3}), \tag{13}\] to obtain \[\frac{d\hat{\sigma}_{\gamma i}^{\mathrm{(HEF,\,\ln{(1+\eta)})}}}{dzd^{2} \mathbf{p}_{T}}= \frac{F_{\mathrm{LO}}}{\pi M^{2}}\left\{\left(\frac{\alpha_{s}(\mu_ {R})}{2\pi}\right)\left[\frac{dc_{1}^{(\gamma i)}(\infty,z,\rho)}{dzd\rho}+ \frac{d\bar{c}_{1}^{(\gamma i)}(\infty,z,\rho)}{dzd\rho}\ln\frac{M^{2}}{\mu_{F} ^{2}}\right]\right.\] \[+\left(\frac{\alpha_{s}(\mu_{R})}{2\pi}\right)^{2}\ln(1+\eta) \left[\frac{dc_{2}^{(\gamma i)}(\infty,z,\rho)}{dzd\rho}+2C_{A}\frac{dc_{1}^{( \gamma i)}(\infty,z,\rho)}{dzd\rho}\ln\left(\frac{M^{2}}{\mu_{F}^{2}}\right)\right. \tag{14}\] \[+\left.C_{A}\frac{d\bar{c}_{1}^{(\gamma i)}(\infty,z,\rho)}{dzd\rho }\ln^{2}\left(\frac{M^{2}}{\mu_{F}^{2}}\right)\right]+O(\alpha_{s}^{3})\right\},\] where \(\rho={\bf p}_{T}^{2}/M^{2}\). For the asymptotics of the scaling functions in Eq. (B.3), we obtain the following explicit formulae in terms of the HEF coefficient function Eq. (16): \[\frac{d\bar{c}_{1}^{(\gamma)}(\infty,z,\rho)}{dzd\rho} =\frac{z(1-z)C_{i}}{8\pi M^{2}F_{\rm LO}}\int\limits_{0}^{2\pi} \frac{d\phi}{2\pi}\left(\lim_{{\bf q}_{T1}^{2}\to 0}\frac{H(\bar{s},\bar{t}, \bar{u},{\bf q}_{T1}\cdot{\bf p}_{T},{\bf q}_{T1}^{2})}{(1-z+\rho)^{2}}\right),\] (B.4) \[\frac{dc_{1}^{(\gamma)}(\infty,z,\rho)}{dzd\rho} =\frac{z(1-z)C_{i}}{8\pi M^{2}F_{\rm LO}}\int\frac{d^{2}{\bf q}_{T 1}}{\pi{\bf q}_{T1}^{2}}\] (B.5) \[\times\left[\frac{M^{4}H(\bar{s},\bar{t},\bar{u},{\bf q}_{T1} \cdot{\bf p}_{T},{\bf q}_{T1}^{2})}{\left[(M^{2}+{\bf p}_{T}^{2})(1-z)+({\bf q }_{T1}-{\bf p}_{T})^{2}z\right]^{2}}-\theta(M^{2}-{\bf q}_{T1}^{2})\left(\lim_ {{\bf q}_{T1}^{2}\to 0}\frac{H(\bar{s},\bar{t},\bar{u},{\bf q}_{T1}\cdot{\bf p}_{T},{ \bf q}_{T1}^{2})}{(1-z+\rho)^{2}}\right)\right],\] \[\frac{dc_{2}^{(\gamma)}(\infty,z,\rho)}{dzd\rho} =\frac{z(1-z)C_{i}C_{A}}{4\pi M^{2}F_{\rm LO}}\int\frac{d^{2}{\bf q }_{T1}}{\pi{\bf q}_{T1}^{2}}\ln\frac{{\bf q}_{T1}^{2}}{M^{2}}\] (B.6) \[\times\left[\frac{M^{4}H(\bar{s},\bar{t},\bar{u},{\bf q}_{T1} \cdot{\bf p}_{T},{\bf q}_{T1}^{2})}{\left[(M^{2}+{\bf p}_{T}^{2})(1-z)+({\bf q }_{T1}-{\bf p}_{T})^{2}z\right]^{2}}-\theta(M^{2}-{\bf q}_{T1}^{2})\left(\lim_ {{\bf q}_{T1}^{2}\to 0}\frac{H(\bar{s},\bar{t},\bar{u},{\bf q}_{T1}\cdot{\bf p}_{T},{ \bf q}_{T1}^{2})}{(1-z+\rho)^{2}}\right)\right],\] where \(C_{i}=\delta_{ig}C_{A}+\delta_{iq}C_{F}\) and the Mandelstam variables are given by Eq. (21) with \(y=[(M^{2}+{\bf p}_{T}^{2})(1-z)+({\bf q}_{T1}-{\bf p}_{T})^{2}z]/[M^{2}z(1-z)]\). Eq. (B.4) only involves a simple averaging over the azimuthal angle and can be evaluated to (for \(C_{A}=N_{c}=3\)): \[\frac{d\bar{c}_{1}^{(\gamma)}(\infty,z,\rho)}{dzd\rho}=\frac{16\pi^{2}(1-z)z \left(\rho^{2}\left(z^{2}-z+1\right)^{2}+\rho\left(z^{2}-2z+2\right)(z-1)^{2} +(z-1)^{4}\right)}{(\rho+1)^{2}\left(\rho+(z-1)^{2}\right)^{2}(\rho-z+1)^{2}}.\] (B.7) The \({\bf q}_{T1}\) integrals in Eq. (B.5) and Eq. (B.6) are finite in two dimensions thanks to the cancellation of \(1/{\bf q}_{T1}^{2}\) singularity between both terms in the square brackets. As such, these integrals can be easily evaluated numerically. Moreover, one notices that the integrand of Eq. (B.5) is a rational function of \({\bf q}_{T1}^{2}\) and \({\bf q}_{T1}\cdot{\bf p}_{T}\), which suggests the application of standard loop-integral techniques, such as Integration-By-Parts (IBP) Reduction [84]. We use these techniques below in this Appendix to obtain closed-form analytic result of the integrals of Eq. (B.5). In order to be able to split the integration of the first and second terms in Eq. (B.5) we go to \(2-2\epsilon\) dimensions, which will regularise the collinear divergences when the terms are separated. The \({\bf q}_{T1}^{2}\) integral in the second term can be easily evaluated and Eq. (B.5) turns into \[\frac{dc_{1}^{(\gamma)}(\infty,z,\rho)}{dzd\rho} = \frac{z(1-z)C_{i}}{8\pi M^{2}F_{\rm LO}}\left[F_{1}-F_{2}\right],\] (B.8) \[F_{1} = \int\frac{d^{2}{\bf q}_{T1}}{\pi{\bf q}_{T1}^{2}}\frac{M^{4}H( \bar{s},\bar{t},\bar{u},{\bf q}_{T1}\cdot{\bf p}_{T},{\bf q}_{T1}^{2})}{ \left[(M^{2}+{\bf p}_{T}^{2})(1-z)+({\bf q}_{T1}-{\bf p}_{T})^{2}z\right]^{2}},\] (B.9) \[F_{2} = -\frac{(M^{2})^{-\epsilon}}{\epsilon}\int\frac{d\Omega_{2-2 \epsilon}}{2\pi}\left(\lim_{{\bf q}_{T1}^{2}\to 0}\frac{H(\bar{s},\bar{t},\bar{u},{\bf q}_{T1} \cdot{\bf p}_{T},{\bf q}_{T1}^{2})}{(1-z+\rho)^{2}}\right),\] (B.10) where \(\bar{s},\bar{t},\bar{u}\) are given by Eq. (21) with \(y=[(M^{2}+{\bf p}_{T}^{2})(1-z)+({\bf q}_{T1}-{\bf p}_{T})^{2}z]/[M^{2}z(1-z)]\) and \(d\Omega_{2-2\epsilon}\) is an element of the solid angle describing the direction of the vector \({\bf q}_{T1}\) in \(2-2\epsilon\) dimensions. It turns out not to be necessary to recompute the HEF coefficient function Eq. (15) in \(4-2\epsilon\) dimensions for the computation of the integrals Eq. (B.9) and Eq. (B.10) because the original finite integral (B.5) is two-dimensional. We have redone the calculation with the \(4-2\epsilon\)-dimensional version of \(H\) and have obtained the same results. One has however to be careful with the evaluation of Eq. (B.10) because the averaging over the directions of the vector \({\bf q}_{T1}\), which is done in this term after taking the limit \({\bf q}_{T1}^{2}\to 0\), has to be done in \(2-2\epsilon\) dimensions to stay consistent with the evaluation of Eq. (B.9) within dimensional regularisation. The source of the problem lies in the angular integrations of the type: \[\int\frac{d\Omega_{2-2\epsilon}}{2\pi}(\mathbf{p}_{T}\cdot\mathbf{q}_{T1})^{2}= \mathbf{p}_{T}^{2}\mathbf{q}_{T1}^{2}\frac{\pi^{-1/2-\epsilon}}{\Gamma(1/2- \epsilon)}\int\limits_{0}^{\pi}d\phi\sin^{-2\epsilon}\phi\cdot\cos^{2}\phi= \frac{\mathbf{p}_{T}^{2}\mathbf{q}_{T1}^{2}}{2(1-\epsilon)}\frac{\Omega_{2-2 \epsilon}}{2\pi},\] with \(\Omega_{2-2\epsilon}=2\pi^{1-\epsilon}/\Gamma(1-\epsilon)\) being the solid angle in \(2-2\epsilon\) dimensions. The \(\epsilon\) dependence coming from these terms leads to finite terms in \(F_{2}\), scaling like \(\epsilon^{0}\), where \(\epsilon\) has been cancelled. The dependence on \(\mathbf{q}_{T1}\) in the integrand of Eq. (B.9) is rational and one finds the following three denominators in it: \[\mathcal{D}_{1} = (2-z)\mathbf{q}_{T1}^{2}-2(\mathbf{p}_{T}\cdot\mathbf{q}_{T1})+ \mathbf{p}_{T}^{2}+M^{2}(1-z),\] \[\mathcal{D}_{2} = z^{2}\mathbf{q}_{T1}^{2}-2z(\mathbf{p}_{T}\cdot\mathbf{q}_{T1})+ \mathbf{p}_{T}^{2}+M^{2}(1-z)^{2},\] \[\mathcal{D}_{3} = \mathbf{q}_{T1}^{2},\] which are linearly dependent, since we have only two linearly-independent scalar products \(\mathbf{q}_{T1}^{2}\) and \(\mathbf{p}_{T}\cdot\mathbf{q}_{T1}\). The appearance of linearly-dependent denominators is a common feature of quarkonium-related calculations. Due to this, one has to perform a partial-fractioning decomposition of the integrand in Eq. (B.9). We can then split the integrand into three parts, depending on the combinations \((\mathcal{D}_{1},\mathcal{D}_{2})\), \((\mathcal{D}_{1},\mathcal{D}_{3})\) and \((\mathcal{D}_{2},\mathcal{D}_{3})\). In each of these integral families, the scalar products in the numerator can be uniquely expressed in terms of linear combinations of the denominators. As such, the resulting integrals have positive or negative powers of \(\mathcal{D}_{i}\) and these can then be reduced using IBP reduction codes, such as \(\mathtt{LiteRed}\)[85, 86]. The resulting master integrals are then evaluated using Feynman parameters in terms of Gaussian hypergeometric functions \({}_{2}F_{1}\) which can be expanded in \(\epsilon\) using the HypExp[87] package. We have also extensively used various features of the FeynCalc framework [88, 89, 90] on all stages of this computation. With the procedure described above, one obtains the following result for the \(c_{1}\) coefficient differential with respect to \(z\) and \(\rho=\mathbf{p}_{T}^{2}/M^{2}\) with \(C_{A}=N_{c}=3\), \[\begin{split}\frac{dc_{1}^{(\gamma g)}(\infty,z,\rho)}{dzd\rho}=& 8\pi^{2}\bigg{\{}c_{1}^{(\text{R})}(z,\rho)\\ &+c_{1}^{(1)}(z,\rho)\ln\left[\frac{z^{2}(1-z)^{2}}{(\rho+(1-z)^{ 2})^{2}}\right]+c_{1}^{(2)}(z,\rho)\ln\left[\frac{(\rho+1-z)^{2}}{(1-z)(\rho+ 2-z)}\right]\\ &+\frac{c_{1}^{(3)}(z,\rho)}{\sqrt{(1+\rho)((2-3z)^{2}+(2-z)^{2} \rho)}}\\ &\times\ln\left[\frac{\rho(2-z)-(3-2z)z+2-\sqrt{(\rho+1)\left( \rho(z-2)^{2}+(2-3z)^{2}\right)}}{\rho(2-z)-(3-2z)z+2+\sqrt{(\rho+1)\left( \rho(z-2)^{2}+(2-3z)^{2}\right)}}\right]\bigg{\}},\end{split}\] (B.11) and \(c_{1}^{(\gamma q)}(\infty,z,\rho)=(C_{F}/C_{A})c_{1}^{(\gamma q)}(\infty,z,\rho)\), while the rational part of the result is \[c_{1}^{(\mathrm{R})}(z,\rho)=\frac{-2z}{(\rho+1)^{2}\left(\rho+(z- 1)^{2}\right)^{2}(\rho-2z+1)^{2}(\rho-z+1)^{2}(\rho-z+2)\left(\rho(z-2)^{2}+(2- 3z)^{2}\right)^{2}}\] \[\times\left\{-16(\rho+1)^{6}(\rho(\rho+2)-1)+8(\rho(\rho+4)+9)z^{ 14}-4\left(\rho\left(6\rho^{2}+46\rho+135\right)+195\right)z^{13}\right.\] \[+2(\rho(\rho(5\rho(3\rho+44)+942)+1660)+2011)z^{12}\] \[-(\rho(\rho(\rho(20\rho+551)+3762)+8908)+11638)+13273)z^{11}\] \[+(\rho(\rho(\rho(\rho(\rho(7\rho+361)+4074)+15298)+23699)+28573)+ 31476)z^{10}\] \[-(\rho(\rho(\rho(\rho(\rho(\rho(\rho+112)+2289)+14320)+34205)+40 070)+55265)+56458)z^{9}\] \[+(\rho(\rho(\rho(\rho(2\rho(\rho(5\rho+292)+3394)+26973)+42734)+4 6706)+87812)+78121)z^{8}\] \[+(\rho(\rho(\rho(\rho(\rho(\rho(\rho(-32)\rho-1383)-10257)-24354 )-20442)-40831)-112901)-83721)z^{7}\] \[+(\rho(\rho(\rho(\rho(\rho(25-7\rho)\rho+1401)+4754)-5507)-24167) +30395)+113588)+69198)z^{6}\] \[+2(\rho+1)(\rho(\rho(\rho(\rho(\rho(9\rho+35)+396)+5184)+17041)+10 711)-21550)-21826)z^{5}\] \[-(\rho+1)^{2}(\rho(\rho(\rho(\rho(24\rho+355)+4025)+17018)+21830) -7645)-20631)z^{4}\] \[+4(\rho+1)^{3}(\rho(\rho(\rho(\rho(8\rho+181)+1176)+2368)+396)-1 769)z^{3}\] \[-4(\rho+1)^{4}(\rho(\rho(\rho(13\rho+179)+579)+337)-416)z^{2}\] \[+16(\rho+1)^{5}(\rho(\rho(3\rho+19)+21)-15)z\Big{]}\,,\] and the coefficients in front of the logarithmic terms read \[c_{1}^{(1)}(z,\rho)=\frac{-z^{3}}{(\rho+1)^{2}\left(\rho+(z-1)^{ 2}\right)^{2}(\rho-2z+1)^{4}}\] \[\times\left\{5(\rho+1)^{4}+4(2\rho+1)z^{6}-(\rho+1)(23\rho+31)z^{ 5}+(\rho+1)(\rho(12\rho+77)+89)z^{4}\right.\] \[\left.-2(\rho+1)(\rho+3)(\rho(\rho+18)+21)z^{3}+2(\rho+1)^{2}( \rho(3\rho+32)+47)z^{2}-(\rho+1)^{3}(11\rho+35)z\right\},\] \[c_{1}^{(2)}(z,\rho)=\frac{z}{(\rho+1)^{2}(\rho-2z+1)^{4}(\rho-z +1)^{2}}\] \[\times\left\{2(\rho+1)^{4}-4(2\rho+1)z^{6}+(7\rho(\rho+2)-9)z^{5} +(\rho((5-2\rho)\rho+48)+57)z^{4}\right.\] \[\left.+(\rho+1)(\rho((\rho-13)\rho-69)-87)z^{3}+(\rho+1)^{2}(\rho (5\rho+36)+59)z^{2}-6(\rho+1)^{3}(\rho+3)z\right\},\] \[c_{1}^{(3)}(z,\rho)=\frac{z^{3}}{(\rho+1)^{2}(\rho-2z+1)^{4} \left(\rho(z-2)^{2}+(2-3z)^{2}\right)^{2}}\] \[\times\left\{-16(\rho-10)(\rho+1)^{5}+64(\rho(\rho+4)+9)z^{9}-32( \rho(\rho(3\rho+23)+64)+96)z^{8}\right.\] \[+4(\rho(\rho(\rho(10\rho+119)+245)+461)+1093)z^{7}\] \[+(\rho(\rho(\rho(618-\rho(7\rho+135))+7730)+13133)+3461)z^{6}\] \[+(\rho(\rho(\rho(\rho(23\rho-397)-8578)-34546)-45573)-18289)z^{5}\] \[-2(\rho+1)(\rho(\rho(\rho(\rho(-7)\rho-1392)-10488)-21697)-12625)z ^{4}\] \[+2(\rho+1)^{2}(\rho(\rho(\rho(7\rho-99)-2827)-10497)-9200)z^{3}\] \[\left.-4(\rho+1)^{3}(\rho(\rho(9\rho-112)-1269)-1912)z^{2}+8(\rho+ 1)^{4}(5(\rho-11)\rho-214)z\right\}.\] The coefficients \(c_{1}^{(1)}\), \(c_{1}^{(2)}\), \(c_{1}^{(3)}\) contain the denominator \((\rho-2z+1)^{-4}\) which is singular at the physical point \(\rho=2z-1\), while \(c_{1}^{(\mathrm{R})}\) is proportional to \((\rho-2z+1)^{-2}\). Individual terms may be divergent at this point. However, combining all such terms in the complete expression of \(dc_{1}/dzd\rho\) given in Eq. (B.11), the divergences cancel against each other and the limit at this point is actually finite, \[\lim_{\rho\to 2z-1}\frac{dc_{1}^{(\gamma g)}(\infty,z,\rho)}{dzd\rho}= \frac{\pi^{2}}{z^{8}(z+1)}\] \[\left\{z^{3}\left[(z(z(z(2z(2z-9)+191)-52)-339)+513)-121)\log(1-z)\right.\] \[\left.-8(z-1)(z+1)(z(z(2z(2z-5)+13)-8)+2)\log(z)\right.\] \[\left.+(z(z(z(z(4z(2z+7)-119)+68)+251)-449)+105)\log(z+1)\right]\right.\] \[\left.+2(z(z(z(z(4(z-2)z+85)-243)+267)+40)-333)+245)-55)z\right.\] \[\left.+2(z(298z-245)+55)\tanh^{-1}(z)\right],\] where \(\tanh^{-1}(z)=\ln\left[(1+z)/(1-z)\right]/2\) is the hyperbolic arc-tangent. The numerical comparison of our result (B.11) with our calculation of \(dc_{1}^{(\gamma g)}/(dzd\rho)\) using dipole subtraction which was already mentioned in Section 2, is shown in Fig. B.7 for \(\eta=1000\). As one can see, the asymptotic result is in a good agreement with numerical data. We also provide in Table B.1 and in Fig. B.8 the high-energy asymptotic numerical values of scaling functions, obtained via direct numerical evaluation of Eq. (B.4), Eq. (B.5) and Eq. (B.6) with a relative accuracy about \(10^{-3}\) using the regular algorithm cuhre implemented in the CUBA library [70]. Eq. (B.6) provides predictions for the non-trivial NNLO scaling function \(c_{2}\) in the high-energy limit in the LLA. In principle, it is possible to derive the closed-form analytic result for the high-energy asymptotics of the NNLO scaling function from it. However, such an analytic result would be too cumbersome to present and due to Gram-determinant singularities it may be more challenging to evaluate it numerically rather than the integral (B.6) itself. Therefore, we limit ourselves to present here the numerical results for the \(z\)-differential but \(p_{T}\)-integrated scaling function \(c_{2}\) in Fig. B.8 and in Table B.1. It is instructive to look at several limits of the obtained results. The low-\(\textbf{p}_{T}^{2}\) and high-\(\textbf{p}_{T}^{2}\) asymp \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(z\) & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline \(dc_{2}/dz\) & 50.09 & 99.81 & 147.6 & 192.3 & 233.7 & 272.7 & 316.1 & 393.4 & 633.2 \\ \hline \(dc_{1}/dz\) & -0.7195 & -1.394 & -2.063 & -2.820 & -3.832 & -5.573 & -9.554 & -20.65 & -56.55 \\ \hline \(d\bar{c}_{1}/dz\) & 5.267 & 10.55 & 15.90 & 21.39 & 27.20 & 33.74 & 42.02 & 54.57 & 78.93 \\ \hline \end{tabular} \end{table} Table B.1: Numerical results for the high-energy asymptotics of the \(dc_{2}/dz\), \(dc_{1}/dz\) and \(dc_{1}/dz\) scaling functions Figure B.7: The numerical comparison of the asymptotic result (B.11) with the scaling function computed by our numerical NLO code. totics of \(\bar{c}_{1}\) are, \[\rho\to 0 : \frac{d\bar{c}_{1}(\infty,z,\rho)}{dzd\rho}=\frac{16\pi^{2}z}{1-z}+O (\rho),\] (B.12) \[\rho\rightarrow\infty : \frac{d\bar{c}_{1}(\infty,z,\rho)}{dzd\rho}=\frac{16\pi^{2}z(1-z)}{ \rho^{4}}\left(1-z(1-z)\right)^{2}+O(\rho^{-5}),\] (B.13) while for the coefficient \(c_{1}\) we have, \[\rho\to 0:\qquad\frac{dc_{1}(\infty,z,\rho)}{dzd\rho}= \frac{8\pi^{2}z}{(1-2z)^{4}}\] \[\times\left\{(64z^{7}-256z^{6}+116z^{5}+653z^{4}-1213z^{3}+898z^ {2}-308z+40)\right.\] \[\times\left.\frac{z^{2}}{|2-3z|^{3}}\ln\left[\frac{-|2-3z|+2z^{2} -3z+2}{|2-3z|+2z^{2}-3z+2}\right]\right.\] \[+\left(4z^{2}-15z+5\right)z^{2}\ln\left[\frac{(z-1)^{2}}{z^{2}}\right]\] \[+\frac{2\left(2z^{4}-3z^{3}+7z^{2}-5z+2\right)(1-2z)^{4}}{(2-3z) ^{2}(z-1)}\] \[\left.-\frac{\left(4z^{5}+13z^{4}-44z^{3}+43z^{2}-16z+2\right)}{ z-1}\ln\left[\frac{(1-z)^{2}}{z^{2}-3z+2}\right]\right\}+O(\rho),\] (B.14) \[\rho\rightarrow\infty:\qquad\frac{dc_{1}(\infty,z,\rho)}{dzd\rho}= \frac{16\pi^{2}(1-z)z}{\rho^{3}(z-2)^{2}}\] (B.15) \[\times\left\{z^{4}-2z^{3}-z^{2}\ln\left[\frac{(1-z)z^{2}}{\rho(2 -z)^{2}}\right]-4z+4\right\}+O(\rho^{-4}),\] i.e. the coefficient \(c_{1}\) drops like \(1/\mathbf{p}_{T}^{6}\), while \(\bar{c}_{1}\sim 1/\mathbf{p}_{T}^{8}\) at high-\(\mathbf{p}_{T}\) and high partonic energy. Another interesting limit is \(z\to 1\) where both the (\(\rho\)-differential) \(c_{0}\) and \(\bar{c}_{1}\) tend to zero, while \(c_{1}\) tends to a non-zero limit, \[\lim_{z\to 1}\frac{dc_{1}(\infty,z,\rho)}{dzd\rho}=\frac{16\pi^{2}((\rho-2) \rho((\rho-2)\rho+2)+1)}{(\rho-1)^{4}\rho(\rho+1)^{3}},\] (B.16) such that the vanishing of the LO \(Q\bar{Q}\left[{}^{3}S_{1}^{[1]}\right]\) photoproduction cross section at \(z\to 1\) is violated already at NLO and may actually receive \(\ln(1-z)\) corrections at higher orders. Figure B.8: High-energy asymptotics of \(z\)-differential scaling functions: \(dc_{1}/dz\) (solid line), \(d\bar{c}_{1}/dz\) (dashed line) and \(dc_{2}/dz\) (thick solid line, divided by \(2C_{A}\)).
2303.07383
Phylogenomic Models from Tree Symmetries
A model of genomic sequence evolution on a species tree should include not only a sequence substitution process, but also a coalescent process, since different sites may evolve on different gene trees due to incomplete lineage sorting. Chifman and Kubatko initiated the study of such models, leading to the development of the SVDquartets methods of species tree inference. A key observation was that symmetries in an ultrametric species tree led to symmetries in the joint distribution of bases at the taxa. In this work, we explore the implications of such symmetry more fully, defining new models incorporating only the symmetries of this distribution, regardless of the mechanism that might have produced them. The models are thus supermodels of many standard ones with mechanistic parameterizations. We study phylogenetic invariants for the models, and establish identifiability of species tree topologies using them.
Elizabeth A. Allman, Colby Long, John A. Rhodes
2023-03-13T18:01:52Z
http://arxiv.org/abs/2303.07383v1
# Phylogenomic Models from Tree Symmetries ###### Abstract A model of genomic sequence evolution on a species tree should include not only a sequence substitution process, but also a coalescent process, since different sites may evolve on different gene trees due to incomplete lineage sorting. Chifman and Kubatko initiated the study of such models, leading to the development of the SVDQuartets methods of species tree inference. A key observation was that symmetries in an ultrametric species tree led to symmetries in the joint distribution of bases at the taxa. In this work, we explore the implications of such symmetry more fully, defining new models incorporating only the symmetries of this distribution, regardless of the mechanism that might have produced them. The models are thus supermodels of many standard ones with mechanistic parameterizations. We study phylogenetic invariants for the models, and establish identifiability of species tree topologies using them. **Funding:** ESA and JAR were partially supported by NSF grant 2051760 and NIH grant P20GM103395. ## 1 Introduction The SVDQuartets method of Chifman and Kubatko [1, 1] initiated a novel framework for species tree inference from genomic-scale data. Recognizing that individual sites may evolve along different "gene trees" due to the population-genetic effect of incomplete lineage sorting, their method is designed to work with site pattern data generated by the multispecies coalescent model of this process combined with a standard model of site-substitution. However, rather than try to associate particular gene trees to sites, they regard the observed site pattern distribution as a _coalescent mixture_. This effectively integrates the individual gene trees out of the analysis and allows them to formulate statistical tests based on an algebraic understanding of the site pattern frequencies. These tests detect the unrooted species tree topology in the case of four taxa. For a larger set of taxa, species trees can be found by inferring each quartet and then applying some method of quartet amalgamation. This leads to their SVDQuartets method of species tree inference, which is implemented in PAUP* [21] and which continues to be an important tool for practical phylogenetic inference (e.g., [10, RRDV\({}^{+}\)20, CFT\({}^{+}\)22]). The inference of unrooted 4-taxon species tree topologies in the SVDQuartets approach is based on an algebraic insight that a certain flattening matrix built from the site pattern distribution should have low rank on a distribution exactly arising from the model. The mathematical arguments for this in [1] are based on the existence of a rooted cherry (i.e., a 2-clade) on an ultrametric species tree, leading to a symmetry in the site pattern distribution. Since any rooted 4-taxon tree with unrooted topology \(ab|cd\) must display at least one of the clades \(\{a,b\}\) or \(\{c,d\}\), detecting that one or both of these clades is present is equivalent to determining the unrooted tree. The SVDQuartets method tests precisely this, without determining which of the clades is present. In this work, we examine the algebraic framework underlying the work of Chifman and Kubatko and its subsequent extensions. We observe that the symmetry conditions implied by the Chifman-Kubatko model are key to their inference approach. Based on this observation, we formulate several statistical models, encompassing those of [10] as well as several more general mechanistic models, which capture the fundamental assumptions needed to justify SVDquartets. In contrast with the sorts of models generally used in empirical phylogenetics, which have a mechanistic interpretation (e.g., generation of gene trees by the coalescent process, generation of sequences by site-substitution models on the gene trees), the models here have only a descriptive interpretation, as they are defined algebraically by constraints on site pattern distributions. One consequence of defining our models in this way is that it becomes more clear that SVDquartets can give consistent species tree inference for mechanistic mixture models more general than that described in [10] (as hinted by results in [10, 11]). In fact, it is easy to formulate plausible mechanistic models with many parameters (e.g. mixtures with many different base substitution processes) for which many of the numerical parameters must be non-identifiable, but for which SVDquartets inference of the species tree topology is statistically consistent. Such generality can be viewed as a strength of SVDquartets, as model misspecification arising from assumption of a simple substitution process across the entire genome is avoided. A second consequence is that our models highlight a symmetry in the site pattern distribution that reflects the _rooted_ species tree, a symmetry that is present even for 3-taxon trees. Methods for inference of the species tree root in the same framework were proposed in [11, 12], but both of these works considered four taxa at a time, which is the smallest _unrooted_ tree size in which topologies may differ. Since rooted trees are determined by their rooted triples, focusing on the 3-taxon case offers clear advantages for developing new inference methods. Unfortunately, in doing so, we lose the ability to naturally base statistical inference on rank conditions on matrices of the sort that underlie SVDquartets. Indeed, the possible flattening matrices for DNA site pattern data from the Chifman-Kubatko model in the 3-taxon case are all \(4\times 16\) with full rank, so rank alone cannot distinguish them. As a consequence, the matrix Singular Value Decomposition (SVD) of the flattening matrix, which is used to determine approximate rank in the SVDquartets method, has no obvious role. However, we present an alternative matrix that must satisfy certain rank conditions in the 3-taxon case, which suggests it may be possible to develop a 3-taxon method analagous to SVDquartets. Our work here is theoretical, dealing primarily with model definitions and algebraic consequences of those models. We suggest its implications for data analysis, but do not explore possible methods based on these results in depth. We begin the next section with a review of the model of [10] and use it to motivate the introduction of our first model, the _ultrametric exchangeable model_. We then discuss a number of its submodels on ultrametric trees, and show in Section 3 that the species tree parameter of these models is generically identifiable and species tree inference by SVDquartets is justified for all. In Section 4, we give a recursive formula for computing the dimension of the ultrametric exchangeable model, in terms of the dimensions of its subtree models joined at the root. This indicates that the dimension depends on the topology of the tree, which has implications for inference methods. In Section 5, we drop the assumption of an ultrametric species tree, reviewing the model of [10] in this setting and using it to motivate our second model, the _extended exchangeable model_. In Section 6, we explore the extended exchangeable model in more depth by restricting to 3-taxon trees and determining several algebraic invariants of this model. Finally, in Section 7, we show that the species tree parameter of the extended exchangeable model, as well as those of several mechanistic models that it contains, are generically identifiable. A genomic model of site patterns on ultrametric trees We begin by reviewing the simplest mechanistic model of Chifman and Kubatko [1]. For emphasis, we call this model (and others) _mechanistic_ since it incorporates models of both incomplete lineage sorting and of site substitution (e.g., GTR) in its formulation. Many mechanistic models, including that of [1], will be included as submodels of the more general non-mechanistic models we define below and for which the theory underlying SVDquartets applies more broadly. Specifically, let \(\sigma^{+}=(\psi^{+},\lambda)\) be an ultrametric rooted species tree on a set of taxa \(X\), with rooted leaf-labelled topology \(\psi^{+}\) and edge lengths \(\lambda\) in number of generations. Let \(N\) be a single constant population size for all populations (i.e., edges) in the species tree, and \(\mu\) a single scalar mutation rate for all populations. For a DNA substitution model fix some GTR rate matrix \(Q\) with associated stable base distribution \(\pi\). These parameters determine a DNA site pattern distribution as follows: a site is first assigned a leaf-labelled ultrametric gene tree \(T\) sampled under the multispecies coalescent model on \(\sigma^{+}\) with populations size \(N\), with one gene lineage sampled per taxon. Then a site evolves on \(T\) according to the base substitution model with root distribution \(\pi\) and rate matrix \(\mu Q\). Site patterns thus have a distribution which is a _coalescent independent mixture_ of site pattern distributions arising from the same GTR model on individual gene trees. We denote this model by \(\text{CK}=\text{CK}(\sigma^{+},N,\mu,Q,\pi)\). (While CK has a mild non-identifiability issue in that \(\lambda\), \(N\), and \(\mu\) are not separately identifiable, this will not be of concern in this work since our focus is on inferring the topology \(\psi^{+}\).) A key feature of the CK model is an exchangeability property that it inherits from the multispecies coalescent, due to the nature of the substitution model. Specifically, suppose \(\{a,b\}\subseteq X\) is a 2-clade displayed on \(\sigma^{+}\). Then for any metric gene tree \(T\), let \(T^{\prime}\) be the gene tree obtained from \(T\) by switching the labels \(a\) and \(b\). Then the ultrametricity of \(\sigma^{+}\) together with exchangeability of lineages under the coalescent model implies \(T\) and \(T^{\prime}\) are equiprobable. Now consider any site pattern \(z=(z_{1},z_{2},\ldots,z_{n})\) for \(X\), where \(z_{i}\in\{A,G,C,T\}\) is the base for taxon \(x_{i}\in X\), and let \(z^{\prime}\) be the site pattern with the \(a\) and \(b\) entries interchanged. Then under the base substitution model the probability of \(z\) on \(T\) equals the probability of \(z^{\prime}\) on \(T^{\prime}\). Thus, with \(\mathcal{T}\) denoting the space of all metric gene trees \(T\) on \(X\), \[\mathbb{P}(z\mid\sigma^{+},N,\mu,Q,\pi) =\int_{\mathcal{T}}\mathbb{P}(z\mid T,\mu,Q,\pi)\mathbb{P}(T\mid \sigma^{+},N)\,dT \tag{1}\] \[=\int_{\mathcal{T}}\mathbb{P}(z^{\prime}\mid T^{\prime},\mu,Q, \pi)\mathbb{P}(T^{\prime}\mid\sigma^{+},N)\,dT^{\prime}\] \[=\mathbb{P}(z^{\prime}\mid\sigma^{+},N,\mu,Q,\pi).\] Thus any 2-clade on the species tree produces symmetry in the site pattern frequency distribution. Moreover, since both the multispecies coalescent model and the sequence substitution model are well behaved with respect to marginalizing over taxa, it immediately follows that 2-clades on the induced subtrees \(\sigma^{+}|_{Y}\) on subsets \(Y\subset X\) will produce symmetries in the marginalizations of the site pattern distribution to \(Y\). This motivates the following definition of an algebraic model of site pattern probabilities. In this definition and in what follows, it will be convenient to regard a site pattern probability distribution \(P\) from a \(\kappa\)-state model on an \(n\)-leaf tree as an \(n\)_-way site pattern probability tensor_. That is, we regard \(P=(p_{i_{1}\ldots i_{n}})\) as a \(\kappa\times\cdots\times\kappa\) array with non-negative entries adding to \(1\), where \(p_{i_{1}\ldots i_{n}}\) denotes the probability that the \(n\) (ordered) taxa are in state \((i_{1},\ldots,i_{n})\). **Definition 2.1**.: Let \(\psi^{+}\) be a rooted binary topological species tree on \(X\), and \(\kappa\geq 2\). Then the \(\kappa\)-state _ultrametric exchangeable model_, \(\text{UE}_{\kappa}(\psi^{+})\), is the set of all \(|X|\)-way site pattern probability tensors \(P\), such that for every \(Y\subseteq X\), and every 2-clade \(\{a,b\}\) on \(\psi^{+}|_{Y}\), the marginal distribution \(P_{Y}\) of site patterns on \(Y\) is invariant under exchanging the \(a\) and indices. The collection of all distributions as \(\psi^{+}\) ranges over rooted binary topological trees on \(X\) is the UE model (or the \(\mathrm{UE}_{\kappa}\) model to avoid ambiguity). Although this model has 'ultrametric' in its name, note that the tree \(\psi^{+}\) is a topological rooted tree, with no edge lengths. 'Ultrametric' here refers to the motivation for the model, generalizing the CK model on an ultrametric species tree discussed above. While one can contrive mechanistic models on non-ultrametric trees that lead to distributions in the \(\mathrm{UE}_{\kappa}(\psi^{+})\) model, we do not find them very natural, and prefer to highlight the ultrametricity that a plausible mechanistic model is likely to require to lie within \(\mathrm{UE}_{\kappa}(\psi^{+})\). It is important to note that unlike most models in phylogenetics, including the CK model above, the UE model is not defined through mechanistically-interpretable parameters. Rather it has a descriptive form relating entries of the model's joint distributions, chosen to reflect certain implicit features of the CK model. The UE model then can be viewed as a relaxation, or supermodel, of that more restrictive model. **Example 2.2**.: Let \(\psi^{+}\) be the rooted 3-taxon tree \((a,(b,c))\) and consider a 2-state substitution model with states \(\{0,1\}\). A probability distribution for the \(\mathrm{UE}\big{(}\left(a,(b,c)\right)\big{)}\) model is \(P=(p_{ijk})\), a \(2\times 2\times 2\) array with entries the joint probabilities for assignments of states to the taxa, \(p_{ijk}=\mathbb{P}(a=\text{`i'},b=\text{`j'},c=\text{`k'})\). Since the constraints on the model arise only from subsets \(Y\subseteq\{a,b,c\}\) that contain at least two taxa, there are four subsets of interest: \[\{a,b,c\},\ \{b,c\},\ \{a,b\},\ \{a,c\}.\] Then \(\mathrm{UE}_{2}(\psi^{+})\) is a subset of the probability simplex \(\Delta^{7}\subset\mathbb{R}^{8}\) defined by the following linear equations. \[\{a,b,c\}:\ \begin{cases}p_{010}=p_{001}\\ p_{101}=p_{110}\end{cases}\] \[\{b,c\}:\ p_{001}+p_{101}=p_{010}+p_{110}\] \[\{a,b\}:\ p_{010}+p_{011}=p_{100}+p_{101}\] \[\{a,c\}:\ p_{001}+p_{011}=p_{100}+p_{110}\] The first two constraints, for \(\{a,b,c\}\), express that slices on the first index of probability tensors in \(\mathrm{UE}_{2}(\psi^{+})\) are symmetric. Specifically, if \(P_{z}\). denotes the conditional distribution of \(b,c\) when \(a\) is in state \(z\), then the \(2\times 2\) matrix \(P_{z}\). is symmetric for each \(z\in\{0,1\}\). These imply the third equation, for \(\{b,c\}\), expressing that marginalizing over the first index gives a symmetric matrix. The fourth equation, for \(\{a,b\}\), is independent of the first three, but with them implies the fifth one, for \(\{a,c\}\). Taking into account the probabilistic requirement that \(\sum_{i,j,k\in\{0,1\}}p_{ijk}=1\), we see the model is a restriction of a 4-dimensional affine space to the simplex \(\Delta^{7}\) with \(0\leq p_{ijk}\leq 1\). It is clear that far more complicated models of site pattern evolution on a species tree than the CK model give rise to distributions which also lie within the UE model, since the only requirement is that the resulting site pattern distributions reflect the symmetries of the species tree. For instance, in [10], an extension is given to allow for \(\Gamma\)-distributed rate variation across sites. A further generalization, allowing for edge-dependent variation of the population size \(N=N_{e}\), as well as time-dependent variation in the mutation rate \(\mu\) across the species tree, can also easily be seen to produce distributions lying within UE. Since the symmetry conditions arising from the species tree are linear constraints on the site pattern probability distributions, arbitrary mixtures of models exhibiting the same symmetries will again exhibit these symmetries. Thus, the mechanistic models in [1] on ultrametric trees that allow for variation in the substitution rate matrix across sites also are submodels of UE. Similarly, it has been shown that a model of gene flow on a 3-taxon ultrametric species tree will produce site pattern probability distributions that reflect the symmetry in the 2-clade of the species tree [18, Proposition 0.8]. In focusing on the UE model we obtain results that apply to all these models, and possibly more to be formulated in the future. ## 3 Generic identifiability of trees under the UE model To use a statistical model for valid inference, it is necessary that any parameter one wishes to infer be _identifiable_; that is, a probability distribution from the model must uniquely determine the parameter. For phylogenetic models, this strict notion is generally too strong to hold, but one can often establish a similar generic result, that the set of distributions on which identifiability fails is of negligible size (measure zero) within the model. The following theorem is in this vein. **Theorem 3.1**.: _The rooted binary topological tree \(\psi^{+}\) is identifiable from a generic probability distribution in the UE model._ Proof.: Fix \(\kappa\) and a taxon set \(X\). Since for each binary species tree topology \(\psi^{+}\) the symmetry conditions are expressible by linear equations, the UE model for \(\psi^{+}\) is the intersection of a linear space with the probability simplex. We establish the result by showing that the linear model spaces for different \(\psi^{+}\) are not contained in one another, since then their intersection is of lower dimension and hence of measure zero within them. That the linear spaces are not contained in one another will follow by establishing that for each \(\psi^{+}\) there is at least one distribution in \(\mathrm{UE}_{\kappa}(\psi^{+})\) that fails to have any 'extra symmetry' required for it to be in the model for a different tree. To construct such a distribution, assign positive edge lengths to \(\psi^{+}\) so that the tree is ultrametric, and consider on it the \(\kappa\)-state analog of the (non-coalescent) Jukes-Cantor (henceforth denoted JC) model. The resulting site pattern distribution \(P\) is easily seen to have the necessary symmetries to lie in the UE model. To show \(P\) has no extra symmetries, suppose to the contrary that there is a \(Y\subset X\) containing two taxa \(a,c\) where \(P|_{Y}\) is invariant under exchanging the \(a\) and \(c\) indices, yet \(a,c\) do not form a cherry on \(\psi^{+}|_{Y}\). Then, after possibly interchanging the names of \(a,c\), there is a third taxon \(b\) such that the rooted triple \(((a,b),c)\) is displayed on \(\psi^{+}|_{Y}\). Moreover, by further marginalizing to \(Y^{\prime}=\{a,b,c\}\), we have that \(P|_{Y^{\prime}}\) arises from a Jukes-Cantor model on a 3-taxon ultrametric tree with positive edge lengths and rooted topology \(((a,b),c)\), and exhibits \(a,c\) symmetry. To see that this is impossible, note that if \(P|_{Y^{\prime}}\) has both \(a,b\) and \(a,c\) symmetry, then it also exhibits \(b,c\) symmetry. Thus, all marginalizations of \(P|_{Y^{\prime}}\) to two taxa are equal. This implies all JC distances between taxa, which can be computed from these marginalizations, are equal. This contradicts that the tree was binary. Note that the proof above did not consider a coalescent process in any way in order to show that extra symmetries do not generically hold in \(\mathrm{UE}(\psi^{+})\). However, since applications may consider submodels of the UE model, such as the CK model, it is necessary to ensure they do not lie within the exceptional set of non-generic points in the UE model where tree identifiability may fail. To address this issue, we seek an identifiability result for more general mechanistic models that have an _analytic parameterization_, by which we mean that for each topology \(\psi^{+}\) there is an analytic map from a full-dimensional connected subset of \(\mathbb{R}^{k}\), for some \(k\), to the set of probability distributions comprising the model. For example, if \(\sigma^{+}\) is a rooted metric species tree with shape \(\psi^{+}\), and site pattern frequency distributions are generated on gene trees arising under the coalescent using the GTR+I+\(\Gamma\) model, then the collection of such distributions is given by an analytic parameterization, and as such is a submodel of \(\mathrm{UE}(\psi^{+})\). **Theorem 3.2**.: _Consider any submodel of the UE model with an analytic parameterization general enough to have the JC model as a limit. Then for generic parameters the rooted topological tree \(\psi^{+}\) is identifiable._ Proof.: Let \[f:\Theta\to UE(\psi^{+})\] denote the parameterization map for the submodel on tree \(\psi^{+}\). Then \(f(\Theta)\) cannot lie entirely in \(UE(\phi^{+})\) for any \(\phi^{+}\neq\psi^{+}\), since, as shown in the previous proof, there are points from the JC model in the closure of \(f(\Theta)\) which are not in the closed set \(UE(\phi^{+})\). Thus the set \(f^{-1}(UE(\psi^{+})\cap UE(\phi^{+}))\) is a proper analytic subvariety of \(\Theta\), and hence of measure zero in it. Since there are only finitely many \(\phi^{+}\), for generic points in \(\Theta\) the resulting distribution lies in the UE model for \(\psi^{+}\) only. Note that the CK model, which is analytically parameterized, has the JC model as a limit, since after choosing a JC substitution process one can let the population size \(N\to 0^{+}\). This effectively "turns off" the coalescent process, as small population sizes result in rapid coalescence. Geometrically, the UE model on a particular tree is a convex set, since it can be expressed as the solution set for a system of linear equations and inequalities. It immediately follows that mixtures of instances of the UE model on the same tree, whether defined by integrals such as typical rates-across-sites models (e.g., the ultrametric GTR+\(\Gamma\) coalescent mixture of [1]) or as sums (e.g., an ultrametric mixture of coalescent mixtures, as in [1]), or both, are also submodels of UE on that tree. Provided the model has an analytic parameterization, as all these examples do, Theorem 3.2 then says that the tree topology is generically identifiable. Even in cases of mixtures which have so many numerical parameters that dimension arguments show they cannot all be individually identifiable, the species tree topology remains so. This is a potentially valuable observation, as a scheme designed for inference of a tree under the UE model may avoid some issues of model misspecification that might arise with a more standard approach of restricting to very simple models (e.g. constant population size) so that all numerical parameters are identifiable as well. The above theorems of course imply the weaker statement that for the UE model (and many analytic submodels of the UE model) on four or more taxa, the unrooted species tree topology is identifiable. As SVDquartets is designed to infer unrooted 4-taxon trees, this gives hope that it might also be able to infer the unrooted tree topology for distributions from the more general UE model. For this to be possible, it is necessary to prove that the specific flattening matrices considered in the SVDquartets method satisfy certain rank conditions, the content of the next theorem. Recall that if a \(\kappa\times\kappa\times\kappa\times\kappa\) array \(P\) has indices corresponding to taxa \(a,b,c,d\), then the flattening \(\operatorname{Flat}_{abcl}(P)\) is a \(\kappa^{2}\times\kappa^{2}\) matrix with row and column indices in \(\kappa\times\kappa\) and \(((i,j),(k,l))\)-entry \(P(i,j,k,l)\). **Theorem 3.3**.: _For \(P\in\text{UE}_{\kappa}(\psi^{+})\), and \(ab|cd\) any unrooted quartet induced from the tree \(\psi^{+}\), let \(\tilde{P}=P|_{\{a,b,c,d\}}\) denote the marginalization to the taxa \(a,b,c,d\). Then for all such \(P\), \(\operatorname{Flat}_{ab|cd}(\tilde{P})\) has rank at most \(\binom{\kappa+1}{2}\), while for generic \(P\), \(\operatorname{Flat}_{ac|bd}(\tilde{P})\) and \(\operatorname{Flat}_{ad|bc}(\tilde{P})\) have rank \(\kappa^{2}\)._ Proof.: Since \(\psi^{+}|_{\{a,b,c,d\}}\) has at least one cherry, assume one is formed by \(a,b\). Then symmetry under exchanging the \(a,b\) indices of \(\tilde{P}\) shows that for each \(1\leq i<j\leq\kappa\), the \((i,j)\) and \((j,i)\) rows of \(\operatorname{Flat}_{ab|cd}(\tilde{P})\) are identical. Thus that flattening has at most \(\kappa^{2}-\binom{\kappa}{2}=\binom{\kappa+1}{2}\) distinct rows, and its rank is at most \(\binom{\kappa+1}{2}\). We prove the second statement for \(\operatorname{Flat}_{ac|bd}(\tilde{P})\), noting that the argument for \(\operatorname{Flat}_{ad|bc}(\tilde{P})\) is similar. To show that for generic \(P\in\text{UE}_{\kappa}(\psi^{+})\), \(\operatorname{Flat}_{ac|bd}(\tilde{P})\) has full rank, it suffices to construct a single \(P\) for which this flattening matrix is full rank. To see that this is the case, consider the algebraic variety \[V_{ac|bd}=\{P\in\mathbb{R}^{\kappa^{|X|}}|\det(\operatorname{Flat}_{ac|bd}(\tilde {P}))=0\}.\] This variety is defined by a single degree \(\kappa^{2}\) polynomial and contains all of the points \(P\) for which \(\operatorname{Flat}_{ac|bd}(\tilde{P})\) is singular. If there is a single point \(P\in UE_{\kappa}(\psi^{+})\) for which \(\operatorname{Flat}_{ac|bd}(\tilde{P})\neq 0\), then the affine space \(UE_{\kappa}(\psi^{+})\) is not contained in \(V_{ac|bd}\). Thus, the intersection of \(UE_{\kappa}(\psi^{+})\) with \(V_{ac|bd}\) is a proper subvariety of \(UE_{\kappa}(\psi^{+})\), and hence of measure zero within it. Thus, generically, \(\operatorname{Flat}_{ac|bd}(\tilde{P})\) is full rank. To construct such a probability distribution, assign any positive lengths to the edges of \(\psi^{+}\) so that it becomes ultrametric, and consider the \(\kappa\)-state JC model on it (with no coalescent process). This leads to a distribution \(P\in\operatorname{UE}_{\kappa}(\psi^{+})\). Then \(\tilde{P}\) arises from the Jukes-Cantor model on the induced rooted 4-taxon tree. Since the JC model is time reversible, \(\tilde{P}\) is also obtained by rooting the quartet tree at the MRCA of \(a\) and \(b\), with non-identity JC Markov matrices on each of the 5 edges of this rerooted tree. Let \(M_{a},M_{b},M_{c},M_{d}\) denote the Markov matrices on the pendant edges and \(M_{int}\) on the internal edge, so that \(F=(1/\kappa)M_{int}\) is the distribution of pairs of bases at the endpoints of the internal edge. Let \(N_{ac}=M_{a}\otimes M_{c}\) and \(N_{bd}=M_{b}\otimes M_{d}\) denote the Kronkecker products. Then, following the details of [1, Section 4], the flattening matrix may be expressed as \[\operatorname{Flat}_{ac|bd}(\tilde{P})=N_{ac}^{T}DN_{bd},\] where \(D\) is a \(\kappa^{2}\times\kappa^{2}\) diagonal matrix formed from the entries of \(F\). Since \(M_{int}\) is assumed to be a non-identity JC matrix, \(F\) has no zero entries, so \(D\) has rank \(\kappa^{2}\). Similarly, the JC transition matrices \(M_{a},M_{b},M_{c},M_{d}\) are non-singular, and since the Kronecker product of non-singular matrices is non-singular, so are \(N_{ac}^{T}\) and \(N_{bd}\). Thus \(\operatorname{Flat}_{ac|bd}\) generically has full rank. The argument in this proof, that generically the ranks of "wrong" flattenings of quartet distributions are large, proceeded by constructing an element of the UE model using a parameterized model in the absence of a coalescent process. However, just as was done in Theorem 3.2, we can extend the conclusion to analytic submodels of the UE model, such as those incorporating the coalescent. For instance, since the CK model has the non-coalescent JC model as a limit, this implies that there are points in the CK model that are arbitrarily close to the point \(P\) constructed in the proof, which therefore must also have rank \(\kappa^{2}\) flattenings, as matrix rank is lower semicontinuous. We can thus obtain the following generalization of a result from [1]. **Theorem 3.4**.: _Consider any submodel of the \(UE(\psi^{+})\) model with an analytic parameterization general enough to have the JC model as a limit. If \(\psi^{+}\) displays the quartet \(ab|cd\), then for all distributions \(P\) in the model, with \(\tilde{P}=P|_{\{a,b,c,d\}}\), \(\operatorname{Flat}_{ab|cd}(\tilde{P})\) has rank at most \(\binom{\kappa+1}{2}\), while for generic \(P\), \(\operatorname{Flat}_{ac|bd}(\tilde{P})\) and \(\operatorname{Flat}_{ad|bc}(\tilde{P})\) have rank \(\kappa^{2}\)._ We note that our proof of this theorem has avoided the explicit calculations and more intricate arguments that appear in [1] while also establishing the result in a more general setting. This is possible because of our use of a tensor \(P\) in the closure of the CK model, but not in the CK model, as well as adopting the viewpoint of [1] on flattenings as matrix products. Using the two preceding theorems on identifiability, the statistical consistency of the SVDquartets method can be obtained. When Chifman and Kubatko [1] proved essentially the same result on ranks of flattenings for the CK model, they highlighted it as an identifiability result, but did not explicitly make a claim of consistency. The consistency result for SVDquartets was then unambiguously stated and proved in this setting in [20], which also gave an analysis of the convergence rate. Here we show that their argument for the consistency of SVDQuartets applies more generally to site patterns generated under the UE model, as well as many submodels. In particular, it validates the consistency of inference under models allowing mixtures of coalescent mixtures which may have different substitution processes across the genome, as described in [1]. To be precise, we must first specify some method of quartet amalgamation \(M\), which takes a collection of one quartet tree for each 4-taxon subset of \(X\) and produces an unrooted topological tree on \(X\). In order to establish consistency, we require that if all quartet trees in the collection given to the method \(M\) are displayed on a common tree \(T\) on \(X\), then \(M\) returns \(T\). Following [15], we say such a method is _exact_ while recognizing that for large sets \(X\) one generally must use a heuristic method \(M^{\prime}\) that seeks to approximate \(M\). **Theorem 3.5**.: _The SVDQuartets method, using an exact method to construct a tree from a collection of quartets, gives a statistically consistent unrooted species tree topology estimator for generic parameters under the UE model, and under any submodel with an analytic parameterization general enough to have the JC model as a limit._ Proof.: To simplify notation in the argument, let \(\operatorname{Flat}_{ac|bd}(P)\) denote the \(ac|bd\) flattening of the marginalization \(P|_{\{a,b,c,d\}}\). By Theorems 3.3 and 3.4 for generic parameters giving a probability distribution \(P\) in the model and any four taxa \(a,b,c,d\) such that \(ab|cd\) is displayed on the unrooted tree \(\psi\), \(\operatorname{Flat}_{ab|cd}(P)\) has rank at most \(\binom{k+1}{2}\), while \(\operatorname{Flat}_{ac|bd}(P)\) and \(\operatorname{Flat}_{ad|bc}(P)\) have rank \(\kappa^{2}\). This implies that \(\operatorname{Flat}_{ab|cd}(P)\) will have at least \(\binom{\kappa}{2}\) singular values of \(0\), while \(\operatorname{Flat}_{ac|bd}(P)\) and \(\operatorname{Flat}_{ad|bc}(P)\) have all positive singular values. For a finite sample of \(s\) sites from the model, denote the empirical distribution by \(\hat{P}_{s}\). Then for any \(\epsilon>0\) and any norm \[\lim_{s\to\infty}\Pr\big{(}|\hat{P}_{s}-P|<\epsilon\big{)}=1.\] Since the vector \(\sigma(M)\) of ordered singular values of a matrix \(M\) is a continuous function of the matrix, this implies that for each \(q\in\{ab|cd,ac|bd,ad|bc\}\) \[\lim_{s\to\infty}\Pr\bigg{(}\|\sigma(\operatorname{Flat}_{q}(\hat{P}_{s}))- \sigma(\operatorname{Flat}_{q}(P))\|<\epsilon\bigg{)}=1\] where \(\|\cdot\|\) denotes any vector norm. With the SVD score \(\mu(M)\) defined as the sum of the \(\binom{\kappa}{2}\) smallest singular values of a \(\kappa^{2}\times\kappa^{2}\) matrix \(M\), we know \[0=\mu\big{(}\operatorname{Flat}_{ab|cd}(P)\big{)}<\min\bigg{\{}\mu( \operatorname{Flat}_{ac|bd}(P)),\ \mu\big{(}\operatorname{Flat}_{ad|bc}(P)\big{)}\bigg{\}}.\] But it then follows that \[\lim_{s\to\infty}\Pr\bigg{(}\mu(\operatorname{Flat}_{ab|cd}(\hat{P}_{s}))<\min \bigg{\{}\mu(\operatorname{Flat}_{ac|bd}(\hat{P}_{s})),\mu(\operatorname{Flat }_{ad|bc}(\hat{P}_{s}))\bigg{\}}\bigg{)}=1.\] Thus, as the sample size \(s\) grows, the probability that choosing the quartet tree on \(a,b,c,d\) minimizing \(\mu\) gives the quartet tree displayed on \(\psi\) approaches \(1\). Since this probability approaches \(1\) for each of set of four taxa, and there are only finitely many such sets, the probability that all quartet trees inferred by minimizing \(\mu\) are displayed on the species tree approaches \(1\). Thus with probability approaching \(1\), the method \(M\) will return the correct species tree. ## 4 Dimension of UE models on large trees Although the symmetry conditions of the UE model have been expressed as linear constraint equations, these constraints are not in general independent, as was shown for a particular 3-taxon species tree in Example 2.2. In that example, it was easy to determine a basis of constraints, and thus the dimension of the model. In this section we investigate larger trees and determine the model dimension. Knowledge of dimension is important for several reasons. First, it gives us a basic insight into how restrictive the model on a particular tree topology is. Second, if one is to use these models for tree inference, the dimension is important for judging how close a data point is to fitting the model. Intuitively, data is conceptualized as coming from a true model point with 'noise' added, and if a model has high dimension the noise tends to do less to move that data from the model than if it had lower dimension. Such dimensionality considerations are made rigorous in many model selection criteria, for instance the Akaike Information Criterion and Bayesian Information Criterion. For a rooted topological tree \(\psi^{+}\) on taxa \(X\) we consider the model \(\operatorname{UE}_{\kappa}(\psi^{+})\). Let \(d_{\kappa}(\psi^{+})\) denote the dimension of the affine space \(V(\psi^{+})\subset\mathbb{R}^{\kappa^{|X|}}\) of all tensors satisfying the linear equations expressing the symmetry conditions defining the model, as well as that all entries of the distribution tensor sum to 1 (i.e., the affine, or Zariski, closure of the model). By dropping the condition that tensor entries sum to 1, we pass to the cone over the model, a linear space \(L(\psi^{+})\) of dimension \(c_{\kappa}(\psi^{+})=d_{\kappa}(\psi^{+})+1\). We now give a recursive formula for computing the dimension \(c_{\kappa}(\psi^{+})\). **Theorem 4.1**.: _For a rooted binary topological tree \(\psi^{+}\) on a taxon set \(X\), let \(\psi^{+}_{A}\) and \(\psi^{+}_{B}\) be the rooted subtrees descendant from the child nodes of the root of \(\psi^{+}\), on taxa \(A\) and \(B\) respectively, so that \(X=A\sqcup B\) and \(\psi^{+}=(\psi^{+}_{A},\psi^{+}_{B})\). Then_ \[c_{\kappa}(\psi^{+})=c_{\kappa}(\psi^{+}_{A})c_{\kappa}(\psi^{+}_{B})-{ \kappa\choose 2}.\] For a topological rooted species tree \(\psi^{+}\) on \(X\), we can construct a set of equations defining the cone \(L(\psi^{+})\) by considering every subset \(Y\subseteq X\) and every 2-clade \(\{a,b\}\) of each \(\psi^{+}_{|Y}\) as was done in Example 2.2. However, as we saw in that example, the equations we obtain in this way are not necessarily independent. As a first step towards proving Theorem 4.1, we construct a smaller (though still not necessarily independent) set of linear equations defining the cone \(L(\psi^{+})\). This set is defined by associating a set of linear equations to each vertex of the topological rooted tree \(\psi^{+}\) on \(X\). Specifically, for each internal vertex \(v\) of \(\psi^{+}\) choose two taxa \(a,b\) with \(v=\operatorname{MRCA}(a,b)\). Let \(P\) be a \(|X|\)-dimensional \(\kappa\times\cdots\times\kappa\) tensor of indeterminates, with indices corresponding to taxa in \(X\) and let \(P_{ab}\) denote the marginalization of \(P\) over all indices corresponding to taxa in \(\operatorname{desc}(v)\setminus\{a,b\}\). Each choice of the indices corresponding to taxa in \(X\setminus\operatorname{desc}(v)\) determines a matrix slice of \(P_{ab}\), with indices corresponding to \(a,b\). Expressing that each of these slices is symmetric yields a collection of linear equations. Denote this set of equations by \(\mathcal{S}_{v}=\mathcal{S}(\psi^{+},\{a,b\})\). Though the set \(\mathcal{S}_{v}\) will depend on the particular pair of taxa \((a,b)\) chosen, for our purposes the particular pair is irrelevant, so one can designate any consistent rule for selecting the pair \((a,b)\) so that the \(\mathcal{S}_{v}\) are well-defined. If \(v\) is not an internal vertex of \(\psi^{+}\), define \(S_{v}\) to be the empty set. **Lemma 4.2**.: _Let \(\psi^{+}\) be a topological rooted tree on \(X\). Then the set_ \[\mathcal{S}=\bigcup_{v\in V(\psi^{+})}\mathcal{S}_{v}\] _defines the cone \(L(\psi^{+})\)._ Proof.: It is enough to show that if \(v=\operatorname{MRCA}(a,b)=\operatorname{MRCA}(a,c)\), then the linear equations expressing symmetry of slices of \(P_{ac}\) are contained in the span of those expressing symmetry of slices of \(P_{ab}\) together with those equations in \(\mathcal{S}\) arising from nodes descended from \(v\). We show this inductively, proceeding from the leaves of the tree to the root. The base case, when \(v\) has only two leaf descendants, is trivial. Assume the result holds for the internal nodes descended from \(v\). Let the children of \(v\) be \(v_{1}\), which is ancestral to or equal to \(a\), and \(v_{2}\), which is ancestral to \(b,c\) since \(\psi^{+}\) is binary. Then \(w=\operatorname{MRCA}(b,c)\) is a descendent of \(v_{2}\). The equations arising from \(w\) express that any entry of the marginalization of \(P\) over all descendants of \(w\) except \(b,c\) is invariant under exchanging the \(b,c\) indices. Since the entries of \(P_{ab}\) arise from further marginalization, the equations expressing symmetry of the \(ab\)-slices together with those arising from \(w\) imply those expressing the \(ac\)-slices of \(P_{ac}\) are symmetric. The proof of the previous lemma explains the dependence of the equations we see in Example 2.2. The \(\{a,b,c\}\) constraints are the equations arising from \(\operatorname{MRCA}(b,c)\), which in that example, required no marginalization of \(P\). The \(\{a,b\}\) constraints are the equations arising from the root of the tree that express symmetry of \(P_{ab}\) which are obtained by marginalizing \(P\) over \(c\). Together, these constraints imply the \(\{b,c\}\) and \(\{a,c\}\) constraints, the latter of which express symmetry in the slices of \(P_{ac}\). Proof of Theorem 4.1.: Let \(n_{A}=|A|\) and \(n_{B}=|B|\). With \(U=\mathbb{R}^{\kappa^{n_{A}}}\) and \(V=\mathbb{R}^{\kappa^{n_{B}}}\), we identify \(W=U\otimes V=\mathbb{R}^{|X|}\) with the space of \(k^{n_{A}}\times k^{n_{B}}\) real matrices. In particular, we have \(L(\psi^{+}_{A})\subset U\), \(L(\psi^{+}_{B})\subset V\), and \(L(\psi^{+})\subset W\). We first claim that \(L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\) is the subspace \(Z\subset W\) defined by the subset \(\mathcal{S}^{\prime}\) of \(\mathcal{S}=\mathcal{S}(\psi^{+})\) of Lemma 4.2 containing only those equations arising from non-root internal nodes of \(\psi^{+}\). To see \(L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\subseteq Z\), consider an equation in \(\mathcal{S}^{\prime}\) associated to a non-root node \(v\) and its descendant taxa \(a,b\) as in the lemma. Without loss of generality, we may assume \(v\) is a node of \(\psi_{A}\). Then, ordering the taxa so that \(a,b\) are the first two, this equation in \(\mathcal{S}^{\prime}\) has the form \[\sum_{\alpha_{1}}x_{(i,j,\alpha_{1},\alpha_{2}),\beta}-\sum_{\alpha_{1}}x_{( j,i,\alpha_{1},\alpha_{2}),\beta}=0 \tag{2}\] where the summation over \(\alpha_{1}\in[k]^{m}\) runs through all assignments of states to taxa descended from \(v\) other than \(a,b\), \(\alpha_{2}\in[k]^{n_{A}-2-m}\) is a fixed choice of states for taxa in \(A\) not descended from \(v\), \(\beta\in[k]^{n_{B}}\) is a fixed choice of states for the taxa in \(B\), and \(i\neq j\). This equation expresses that column \(\beta\) of a matrix in \(W\) satisfies an equation associated to \(v\), \(a\), and \(b\) in the definition of \(L(\psi^{+}_{A})\). Thus it holds on all of \(L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\), and we obtain the desired inclusion. To see \(L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\supseteq Z\), note that equation (2) has shown that every column of \(z\in Z\) lies in \(L(\psi^{+}_{A})\), and likewise every row of \(z\) lies in \(L(\psi^{+}_{B})\). But from the singular value decomposition of \(z\), \[z=\sum_{i}c_{i}\otimes r_{i}\] where the \(c_{i}\) form a basis for the column space of \(z\) and the \(r_{i}\) form a basis for the row space of \(z\). Since \(c_{i}\in L(\psi^{+}_{A})\) and \(r_{i}\in L(\psi^{+}_{B})\), it follows that \(z\in L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\), establishing the stated inclusion and that \(Z=L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\). Now the space \(L(\psi^{+})\) is the subset of \(Z=L(\psi^{+}_{A})\otimes L(\psi^{+}_{B})\) defined by the equations in \(\mathcal{S}\setminus\mathcal{S}^{\prime}\), associated to the root of \(\psi\). To conclude that \[c_{\kappa}(\psi^{+})=c_{\kappa}(\psi^{+}_{A})c_{\kappa}(\psi^{+}_{B})-{ \kappa\choose 2},\] it is enough to show that we can obtain an independent set of equations defining \(L(\psi^{+})\) by taking an independent set defining \(Z\) and augmenting it by \({\kappa\choose 2}\) additional independent equations associated to the root. Let \(\mathcal{L}\) be any independent subset of equations in \(\mathcal{S}^{\prime}\) that define \(Z\), and \(\mathcal{M}=\mathcal{S}\setminus\mathcal{S}^{\prime}\) the set of \({\kappa\choose 2}\) equations associated to the root of \(\psi^{+}\) (and the choice of \(a\in A\) and \(b\in B\)). Then \(\mathcal{L}\cup\mathcal{M}\) defines \(L(\psi^{+})\). To see that \(\mathcal{L}\cup\mathcal{M}\) is independent, first order indices so that \(a\) and indices are listed first among \(A\) and \(B\). Then, using '\(+\)' in an index to denote the sum over the assignment of all states \([\kappa]=\{1,2,\ldots,\kappa\}\) in that index, for any \(1\leq i<j\leq k\), \[x_{i+\ldots+,\ j+\ldots+}-x_{j+\ldots+,\ i+\ldots+}=0\] must be the unique element of \(\mathcal{M}\) that involves the variable \(x_{ii\cdots i,\ jj\cdots j}\) (noting that each equation in \(\mathcal{L}\) involves variables that have at least two distinct entries in the indices for \(A\) or two distinct entries in the indices for \(B\)). Since \(\mathcal{L}\) is an independent set, this implies \(\mathcal{L}\cup\mathcal{M}\) is independent. The theorem gives insight into model dimensions for families of 'extreme' topologies: rooted caterpillars and fully balanced shapes. **Corollary 4.3**.: _Suppose \(\psi^{+}\) is a rooted caterpillar tree on \(n\) taxa. Then the dimension of the \(\text{UE}_{\kappa}(\psi^{+})\) model is_ \[d_{\kappa}(\psi^{+})=\frac{\kappa^{n}+\kappa}{2}-1.\] Proof.: If \(n=1\), then the model is simply a base distribution for the sole taxon, so \(d_{\kappa}(\psi^{+})=\kappa-1\), consistent with the stated formula. Now inductively assume the stated formula for the rooted caterpillar on \(n-1\) taxa. Then by Theorem 4.1, for \(n\) taxa \[c_{\kappa}(\psi^{+})=\left(\frac{\kappa^{n-1}+\kappa}{2}\right)\kappa-\binom{ \kappa}{2}=\left(\frac{\kappa^{n}+\kappa^{2}}{2}\right)-\left(\frac{\kappa^{2 }-\kappa}{2}\right)=\frac{\kappa^{n}+\kappa}{2},\] and the claim follows. Also from Theorem 4.1 we can compute that the dimension of the UE model on the 4-taxon balanced tree \(((a,b),(c,d))\) is \[d_{\kappa}=\left(\frac{\kappa^{2}+\kappa}{2}\right)^{2}-\binom{\kappa}{2}-1= \frac{\kappa(\kappa^{3}+2\kappa^{2}-\kappa+2)}{4}-1.\] By comparing the dimensions for the 4-taxon caterpillar and balanced trees, we see that \(d_{k}\) depends on the rooted tree topology, and not only on the number of taxa. More generally, for a fully balanced tree \(\psi^{+}\) on \(n=2^{\ell}\) taxa, Theorem 4.1 yields that \[d_{\kappa}(\psi^{+})=\mathcal{O}\left(\left(\frac{\kappa(\kappa+1)}{2}\right) ^{n/2}\right).\] Thus for fully balanced trees the dimension is \(o(\kappa^{n}/2)\), while for rooted caterpillars on \(n\) taxa, Corollary 4.3 shows the dimension is asymptotic to \(\kappa^{n}/2\). For a fixed number of taxa \(n=2^{\ell}\), it follows that the dimension of the balanced tree model will be smaller than that of the caterpillar. This comparison of model dimension for caterpillars and balanced trees is intuitively plausible, as cherries on the full tree lead to more symmetry requirements on a tensor than do cherries on subtrees. In general, the more balanced a tree is, the smaller one might expect the model dimension to be. This leads us to pose the following conjectures, where \(RB(n)\) denotes the set of rooted binary \(n\)-leaf trees. **Conjecture 4.4**.: _For all \(\kappa\), there exists an \(m\), such that for \(n\geq m\), \(d_{\kappa}(\psi^{+})\) is maximized over \(\psi^{+}\in RB(n)\) when \(\psi^{+}\) is the \(n\)-leaf caterpillar tree._ **Conjecture 4.5**.: _For all \(\kappa\), there exists an \(m\), such that for \(\ell\geq m\), \(d_{\kappa}(\psi^{+})\) is minimized over \(\psi^{+}\in RB(2^{\ell})\) when \(\psi^{+}\) is the \(2^{\ell}\)-leaf balanced tree._ A genomic model of site patterns on general trees In this section, we examine a generalization of the CK model to non-ultrametric trees to motivate an algebraic model that encompasses it. Marginalizations (respectively, slices) of a site pattern probability tensor will be denoted by placing a '\(+\)' (resp. \(k\)) in the index summed over (resp. conditioned on). The transpose operator will be denoted with an exponent '\(T\).' For example, we can generalize the equations derived in Example 2.2 for the UE model on the ultrametric 3-leaf rooted tree \((a,(b,c))\) for any value of \(\kappa\) using this notation as follows: \[(1)\;\;P_{k\cdot}=P_{k\cdot\cdot}^{T}, (3)\;\;P_{\cdot\cdot+}=P_{\cdot\cdot+}^{T},\] \[(2)\;\;P_{+\cdot\cdot}=P_{+\cdot\cdot}^{T}. (4)\;\;P_{+\cdot\cdot}=P_{\cdot+\cdot}^{T}.\] In the definition of the UE model, these constraints arise from the taxon subsets \((1)\left\{a,b,c\right\}\), \((2)\left\{b,c\right\}\), \((3)\left\{a,b\right\}\) and \((4)\left\{a,c\right\}\), and it is not hard to see that the equations in (1) and (3) imply those in (2) and (4), just as in Example 2.2. ### The Extended Exchangeability Model In [10], the CK model is extended to permit non-ultrametricity of the species tree. This extension allows, for instance, the modeling of relationships between species when generation times or scalar mutation rates differ across populations in the tree. In this same work, flattening matrices are used to establish the generic identifiability of the unrooted species tree topology of the extended model from which it follows that SVDquartets is still a statistically consistent method of inference of the unrooted species tree topology for these models when combined with any exact method of quartet amalgamation. In order to motivate our algebraic model, first consider a model obtained from the CK by dropping the ultrametricity requirement on the species tree. Suppose \(a\) and \(b\) are taxa in a 2-clade on \(\sigma^{+}\), and let \(v\) be their common parental node. In the special case that the edge lengths of \(e_{a}=(v,a)\) and \(e_{b}=(v,b)\) equal, then the lineages \(a\) and \(b\) would be exchangeable under this site pattern model as shown for the CK model. Thus, for this particular tree the site pattern distribution can be viewed as a tensor with symmetry in the \(a\) and \(b\) coordinates. On a general species tree, however, where \(e_{a}\) and \(e_{b}\) may have different lengths and mutation rates may not be consistent, all sites evolve over those edges according to the transition matrices \[M_{a}=\exp\left(s_{a}Q\right),\;\;s_{a}=\int_{0}^{\ell(e_{a})} \mu_{e_{a}}(t)dt,\] \[M_{b}=\exp\left(s_{b}Q\right),\;\;s_{b}=\int_{0}^{\ell(e_{b})} \mu_{e_{b}}(t)dt,\] where \(\ell(e)\) is the length of edge \(e\) and \(\mu_{e_{a}}(t)\) and \(\mu_{e_{b}}(t)\) are time dependent mutations rates. Supposing, without loss of generality, that \(s_{a}\leq s_{b}\), define the Markov matrix \[M=M_{b}M_{a}^{-1}=\exp\left((s_{b}-s_{a})Q\right).\] Then the site pattern distribution can be viewed as one obtained from a tensor with symmetry in \(a\) and \(b\) that has been acted on by \(M\) in the \(b\)-index. More specifically, we imagine that on the edges leading toward both \(a\) and \(b\), the Markov matrix \(M_{a}\) describes an initial substitution process, but on the edge to \(b\) there is a subsequent mutation process described by \(M\). If we introduce an additional action by \(M\) on the edge to \(a\), then in the resulting distribution we would regain symmetry in \(a\) and \(b\). Since no coalescent events occur in these pendant edges, there are no complications arising from the coalescent events that do occur. To formalize this mathematically, suppose \(P\) is an \(N\)-way \(\kappa\times\kappa\times\cdots\times\kappa\) tensor. Define the action of a \(\kappa\times\kappa\) matrix \(M\) in the \(k\)th index of \(P\) by \(Q=P*_{k}M\) where \[Q(i_{1},i_{2},\ldots,i_{k},\ldots,i_{N})=vM,\] with \(v\) the row vector determined by fixing the \(\ell\)th index of \(P\) to be \(i_{\ell}\) for all \(\ell\neq k\). For example, for \(n=3\) and \(k=1\), the tensor \(P*_{1}M\) is specified by \((P*_{1}M)_{ijk}=(P._{jk}M)_{i}\). Given an \(n\)-tuple of matrices \((M_{1},M_{2},\ldots M_{n})\), let \[P*(M_{1},M_{2},\ldots,M_{n})=(\ldots((P*_{1}M_{1})*_{2}M_{2})\cdots*_{n}M_{n})\] denote the action in each of the indices of \(P\). **Definition 5.1**.: Let \(\psi^{+}\) be a rooted topological species tree on \(X\) with \(|X|=n\). Then the _extended exchangeable model_, \(\operatorname{EE}_{\kappa}(\psi^{+})\), is the set of all \(n\)-way site pattern probability tensors \(P\), such that there is an \(n\)-tuple \(M=(M_{1},M_{2},\ldots,M_{n})\) of \(\kappa\times\kappa\) non-singular Markov matrices \(M_{i}\) and a non-negative array \(\tilde{P}\) in the model \(\operatorname{UE}_{\kappa}(\psi^{+})\) such that \(P*M=\tilde{P}\). We note that UE is a submodel of EE: any distribution in \(\operatorname{UE}_{\kappa}(\psi^{+})\) is seen to lie in \(\operatorname{EE}_{\kappa}(\psi^{+})\) by taking all matrices \(M_{i}\) to be the identity. Also, to ensure that the EE model does not include all distributions, it is important that the \(M_{i}\) be non-singular in this definition: Otherwise, if the \(M_{i}\) describe processes where _all_ states transition to the same state with probability 1, then for any tensor \(P\), \(P*(M_{1},M_{2},\ldots M_{n})=\tilde{P}\), a tensor with a single diagonal entry equal to 1 that is in UE. While the UE model on a 2-leaf tree imposes constraints on the probability distribution of site patterns, the 2-leaf EE model is dense among all probability distributions. Indeed, the EE model on such a tree simply requires that the site pattern distribution have the form of \(P=M_{1}^{-T}SM_{2}^{-1}\) with \(S\) a symmetric probability matrix and the \(M_{i}\) Markov. But a dense subset of all probability distributions can be expressed as \(P=DM\) for a diagonal matrix \(D\) with entries from the row sums of \(P\) and an invertible Markov matrix \(M\). We can thus take \(M_{1}=M\), \(S=M^{T}DM\), and \(M_{2}=I\). For a 3-taxon tree, though, the EE model is typically not the full probability simplex \(\Delta^{\kappa^{3}-1}\). For \(\kappa\geq 4\), this follows from a simple dimension bound. The \(UE(\psi^{+})\) model for a 3-taxon tree \(\psi^{+}\) has, from Corollary 4.3, dimension \[d_{\kappa}=\frac{\kappa^{3}+\kappa}{2}-1.\] Moreover, the affine closure of the UE model on a 3-taxon tree is mapped to itself by the * action of \((M^{-1},M^{-1},M^{-1})\) for any Markov matrix \(M\). Thus the dimension of the \(EE(\psi^{+})\) model can be at most \[\dim(UE(\psi^{+})+2\kappa(\kappa-1),\] where the second term is the number of parameters specifying two Markov matrices. Thus \[\dim(EE(\psi^{+})\leq\frac{\kappa^{3}+4\kappa^{2}-3\kappa}{2}-1<\kappa^{3}-1\] for all \(\kappa\geq 4\). As we address in the remark following Corollary 6.2, we can confirm computationally that for a 3-taxon tree and \(\kappa=3\), \(EE(\psi^{+})\) is of lower dimension than the probability simplex \(\Delta^{26}\) and that for \(\kappa=2\), the Zariski closure of \(EE(\psi^{+})\) is equal to \(\Delta^{7}\). _Remark_.: A more restrictive variant of the EE model, that is closer to the mechanistic models of [19] model, could be defined by requiring that all the 'extension' matrices \(M_{i}\) arise as exponentials of the same GTR rate matrix. While this common exponential condition is not expressible purely through algebra, there are other algebraic relaxations of it that one could impose instead, such as that the extension matrices \(M_{i}\) are symmetric and commute. ## 6 The EE model on 3-taxon trees By Definition 5.1, the EE model on a 3-leaf rooted tree \(\psi^{+}\) is the set of \(\kappa\times\kappa\times\kappa\) probability tensors of the form \[P=\tilde{P}*(M_{a}^{-1},M_{b}^{-1},M_{c}^{-1}),\] for some \(\tilde{P}\in\mathrm{UE}(\psi^{+})\) and invertible Markov matrices \(M_{a},M_{b},\) and \(M_{c}.\) Because of the matrix actions, this model has a non-linear structure. This makes it more difficult to fully characterize the model EE in terms of constraints than it was for the affine linear UE model. It also means that the optimization problem for maximum likelihood may not be a convex one, making direct use of constraints for inference more appealing than attacking the optimization problem inherent to maximum likelihood. While determining all equality constraints satisfied by the model (i.e., generators of the ideal of model invariants) is difficult computationally, here we focus on determining some of them. We will use these in Section 7 in our proof of tree identifiability under the EE model. Noting that only a few constraints are utilized in the SVDQuartets method, future work should investigate whether the constraints found here are useful for rooted tree inference. **Proposition 6.1**.: _Let \(P\) be a tensor in the EE model on \(\psi^{+}=\left((a,b),c\right)\), and \(\mathrm{Cof}(A)\) denote the matrix of cofactors of a square matrix \(A\). Then for all \(k\in[\kappa]\) the matrices_ \[Q^{a}_{\cdot\cdot k}=P_{+\cdot\cdot}\,\mathrm{Cof}(P_{\cdot+\cdot})^{T}P_{\cdot \cdot k}\] _and_ \[Q^{b}_{\cdot\cdot k}=P_{\cdot\cdot k}\,\mathrm{Cof}(P_{+\cdot\cdot})P_{\cdot +\cdot}^{T}.\] _are symmetric: that is,_ \[Q^{a}_{\cdot\cdot k}=\left(Q^{a}_{\cdot\cdot k}\right)^{T} \tag{3}\] _and_ \[Q^{b}_{\cdot\cdot k}=(Q^{b}_{\cdot\cdot k})^{T}. \tag{4}\] Proof.: If \(P\) is in the EE model, then \(P=\tilde{P}*(M_{a}^{-1},M_{b}^{-1},M_{c}^{-1})\), with \(\tilde{P}\in\mathrm{UE}\) and \(M_{a},M_{b},M_{c}\) Markov. Then \[P_{+\cdot\cdot}=M_{a}^{-T}\tilde{P}_{+\cdot\cdot}M_{c}^{-1}\ \ \text{and}\ \ P_{+\cdot\cdot}=M_{b}^{-T}\tilde{P}_{+\cdot\cdot}M_{c}^{-1}=M_{b}^{-T} \tilde{P}_{+\cdot}M_{c}^{-1}\] since \(\tilde{P}\in\mathrm{UE}\) implies \(\tilde{P}_{+\cdot\cdot}=\tilde{P}_{+\cdot\cdot}.\) Then, assuming necessary inverses exist, \[P_{\cdot\cdot+}^{-T}P_{+\cdot\cdot}^{T}=M_{a}M_{b}^{-1},\] Thus \[P*_{a}(P_{+\cdot\cdot}^{-T}P_{+\cdot\cdot}^{T})=\tilde{P}*(M_{b}^{-1},M_{b}^ {-1},M_{c}^{-1}).\] But it is straightforward to check that every slice with fixed \(c\)-index of \(\tilde{P}*(M_{b}^{-1},M_{b}^{-1},M_{c}^{-1})\) is symmetric, since that is true for \(\tilde{P}\). Thus \[(P_{\cdot\cdot+}^{-T}P_{+\cdot\cdot}^{T})^{T}P_{\cdot\cdot k}=P_{+\cdot\cdot \cdot}P_{\cdot\cdot+}^{-1}P_{\cdot\cdot k}\] is symmetric for every \(k\). Using the cofactor formula for the inverse of a matrix, and clearing denominators by multiplying by a determinant yields (3). The assumption of invertibility used in this argument can be justified for a dense set of choices of \(\tilde{P}\). Indeed, it is enough to exhibit one such choice, since that indicates the subset leading to non-invertibility is a proper subvariety (defined by certain minors vanishing), and hence of lower dimension. Such a choice is obtained with the Markov matrices being the identity, and \(\tilde{P}\) having non-zero diagonal entries, and zero elsewhere. Since the claim is established on a dense set, it holds everywhere by continuity. The claim (4) can be shown either in a similar way, or by conjugating (in the sense of multiplying by a matrix and its transpose) \(Q^{a}_{\cdot\cdot k}\) by \(P_{\cdot+}.P_{+\cdot\cdot}^{-1}\) and removing determinant factors. _Remark_.: Since \(Q^{a}_{\cdot\cdot k}\) and \(Q^{b}_{\cdot\cdot k}\) are conjugate for any tensor \(P\) (even one not in the EE model), checking that one is symmetric implies the other is as well, provided the appropriate inverse exists. If these are used as necessary conditions for membership in the model, when applied to data it may still be desirable to check that both are approximately symmetric, since it is unclear how conjugation will effect the way we measure the inevitable stochastic error leading to violation of perfect symmetry. **Corollary 6.2**.: _The EE model on \(\psi^{+}=((a,b),c)\) is contained in the algebraic variety defined by the degree \(\kappa+1\) polynomials given by the entries of the \(2\kappa\) matrix equations_ \[P_{+\cdot}.\operatorname{Cof}(P_{+\cdot+})^{T}P_{\cdot-k}-P_{\cdot \cdot k}^{T}\operatorname{Cof}(P_{+\cdot+})P_{+\cdot\cdot}^{T},\] \[P_{\cdot\cdot k}\operatorname{Cof}(P_{+\cdot\cdot})P_{+\cdot \cdot}^{T}-P_{+\cdot}.\operatorname{Cof}(P_{+\cdot\cdot})^{T}P_{\cdot\cdot k}^ {T}.\] The polynomials of this corollary also arise as phylogenetic invariants for the general Markov (GM) model of sequence evolution [1] with no coalescent process. In the setting of that work, the tensors of interest are those in the orbits of \(3\)-way diagonal tensors under actions of \(GL_{\kappa}\) in each index, while here they are the orbits of tensors symmetric in two indices under the same \(GL_{\kappa}\) actions. Since diagonal tensors display this symmetry, the invariants above must also apply to the GM model. However, the GM model on a \(3\)-taxon tree has additional invariants of this form, for every pair of taxa, not just those in the cherry. _Remark_.: Using the computational algebra software Singular [1], we are able to show that for \(\kappa=2\), there are no non-trivial polynomials vanishing on the EE model. Thus, the polynomial invariants implied by Corollary 6.2 are identically zero. For \(\kappa=3\), we verified computationally that these invariants are not identically zero. As demonstrated by methods such as SVDquartets, reframing model constraints in terms of rank conditions can be useful for developing practical methods of phylogenetic inference. With this in mind, we can reinterpret the results of Corollary 6.2 as rank conditions for the EE model. To do so, we use the following lemma, which follows a construction of G. Ottaviani that was suggested to us by L. Oeding. **Lemma 6.3**.: _Let \(A,B,C,D,E,F\) be six \(\kappa\times\kappa\) matrices, with \(B,E\) invertible, satisfying_ \[CB^{-1}A+DE^{-1}F=0.\] _Then the \(3\kappa\times 3\kappa\) matrix_ \[\begin{pmatrix}0&A&B\\ D&0&C\\ E&F&0\end{pmatrix}\] _has rank \(2\kappa\)._ Proof.: Observe \[\begin{pmatrix}0&A&B\\ D&0&C\\ E&F&0\end{pmatrix}=\begin{pmatrix}I&0&0\\ 0&I&D\\ 0&0&E\end{pmatrix}\begin{pmatrix}0&0&I\\ 0&-(CB^{-1}A+DE^{-1}F)&CB^{-1}\\ I&E^{-1}F&0\end{pmatrix}\begin{pmatrix}I&0&0\\ 0&I&0\\ 0&A&B\end{pmatrix}.\] **Corollary 6.4**.: _Tensors in the EE model on \(\psi^{+}=((a,b),c)\) are contained in the algebraic variety defined by the degree \(2\kappa+1\) polynomials given by the \((2\kappa+1)\times(2\kappa+1)\) minors of each of the \(2\kappa\) matrices_ \[\begin{pmatrix}0&P_{\cdot\cdot k}&P_{\cdot+}\\ -P_{\cdot\cdot k}^{T}&0&P_{+\cdot\cdot}\\ -P_{\cdot\cdot\cdot}^{T}&-P_{+\cdot\cdot}^{T}&0\end{pmatrix}\] _and_ \[\begin{pmatrix}0&P_{\cdot+}^{T}&P_{\cdot+}^{T}\\ -P_{\cdot\cdot\cdot}&0&P_{\cdot\cdot k}\\ -P_{\cdot\cdot\cdot}&-P_{\cdot\cdot\cdot}^{T}&0\end{pmatrix}.\] Proof.: Choosing \(A,B,C,D,E,F\) in Lemma 6.3 as shown in these matrices makes the equation \(CB^{-1}A+DE^{-1}F=0\) express that \(Q_{\cdot\cdot k}^{a}\) and \(Q_{\cdot\cdot k}^{b}\) are symmetric, which was shown in Proposition 6.1. The result of Corollary 6.4 allows one to formulate necessary conditions for EE model membership on the \(3\)-taxon tree in terms of rank conditions on matrices, much as the SVDquartets method is based on rank conditions on matrices in the \(4\)-taxon case. Tree identifiability under the EE model The EE model invariants of the previous section enable us to prove that the rooted tree topology is generically identifiable under the EE model. We establish these results for \(\kappa\geq 4\), which includes the cases most relevant for phylogenetic analysis. To establish identifiability, we use the following _non-identifiability result_. **Lemma 7.1**.: _Consider a 2-taxon species tree \((a{:}x,b{:}(\ell-x))\), with \(0\leq x\leq\ell\) with constant population size \(N\) above the root and any GTR rate matrix \(Q\) with stationary distribution \(\pi\). Then the probability distribution matrix \(F\) of site patterns under the CK model is symmetric and independent of \(x\)._ Proof.: Using time reversibility, the distribution can be expressed as \[F=\int_{t=0}^{\infty}\mathrm{diag}(\pi)M_{x}M_{2t}M_{\ell-x}\mu_{N}(t)dt\] where \(\mu_{N}(t)\) is the density function for coalescent times, and \(M_{z}=\exp(Qz).\) Since the integrand, a GTR distribution, is a symmetric matrix, then so is \(F\). Since the \(M_{z}\) commute, and \(M_{x}M_{\ell-x}=M_{\ell}\), \[F=\mathrm{diag}(\pi)M_{\ell}\int_{t=0}^{\infty}M_{2t}\mu_{N}(t)dt\] has no dependence on \(x\). **Theorem 7.2**.: _The rooted topological tree \(\psi^{+}\) is identifiable from generic probability distributions in the EE\({}_{\kappa}(\psi^{+})\) model for \(\kappa\geq 4\)._ Proof.: We first suppose \(\kappa=4\). For the 3-taxon trees \(\phi^{+}=((a,b),c)\) and \(\psi^{+}=((a,c),b)\), we show that EE\((\psi^{+})\cap\)EE\((\phi^{+})\) has measure zero within EE\((\psi^{+})\). To do this, it is enough to construct one point in EE\((\psi^{+})\) that is not in the Zariski closure of EE\((\phi^{+})\), since that implies the Zariski closure of the intersection of EE\((\psi^{+})\) and EE\((\phi^{+})\) is of lower dimension than EE\((\psi^{+})\). Let \(N\) be an arbitrary effective population size and let \(\phi^{+}=((a{:}2,c{:}0){:}1,b{:}1)\), with distances in coalescent units (number of generations divided by \(2N\)). Let \(\mu=1/2N\) and define \(Q\) to be the Kimura 2-parameter (K2P) rate matrix \[\begin{pmatrix}-4&1&2&1\\ 1&-4&1&2\\ 2&1&-4&1\\ 1&2&1&-4\end{pmatrix}\] with equilibrium distribution \(\pi=(\frac{1}{2},\frac{1}{4},\frac{1}{4},\frac{1}{4})\). Finally, let \(P\) be the probability tensor that arises from this choice of parameters in the CK model. Then letting \(M=\exp(2Q)\), we see that \(\widetilde{P}=P*(I,M,M)\) lies in \(UE(\psi^{+})\), which implies that \(P\in EE(\psi^{+})\). To see that \(P\) does not belong to the Zariski closure of \(EE(\phi^{+})\) by Corollary 6.2 it suffices to show that for some \(k\) \[P_{+..}\operatorname{Cof}(P_{+..})^{T}P_{..k}-P_{..k}^{T}\operatorname{Cof}(P_ {+..})P_{+..}^{T}\neq 0. \tag{5}\] Note that \(P_{+..}\) and \(P_{+..}\) are probability distribution matrices for the same model on the 2-leaf species trees \((b{:}1,c{:}1)\) and \((a{:}2,c{:}0)\). But, by Lemma 7.1, \[P_{+..}=P_{+..}=P_{+..}^{T}=P_{+..}^{T},\] so \[P_{+..}P_{+..}^{-T}=P_{+..}^{-1}P_{+..}^{T}=I_{4}.\] To show (5), it is thus enough to show that some \(P_{\cdot k}\) is not symmetric. This can be verified without appealing to numerical computation: For example, \[(P_{\cdot\cdot 1})_{12}-(P_{\cdot\cdot 1})_{21}=\frac{1}{10530}e^{-20}-\frac{1}{22 230}e^{-25}-\frac{1}{20007}e^{-29}.\] If this were zero, then multiplying by \(e^{29}\) would show \(e\) is a root of a rational polynomial, contradicting its transcendence. Thus \(\mathrm{EE}(\psi^{+})\cap\mathrm{EE}(\phi^{+})\) has measure zero within \(\mathrm{EE}(\psi^{+})\). Interchanging taxon names then shows the intersection of any two resolved \(3\)-taxon tree models is of measure zero within them, and thus that a generic distribution in any single \(3\)-taxon model lies only in that \(3\)-taxon model. This establishes the theorem for \(3\)-taxon trees when \(\kappa=4\). For larger trees \(\psi^{+}\), each displayed rooted triple determines a measure zero subset of \(\mathrm{EE}(\psi^{+})\) containing all points where that rooted triple may not be identifiable from marginalizations of \(P\) to those \(3\) taxa. Since there are a finite number of such sets, for a generic \(P\in\mathrm{EE}(\psi^{+})\), all displayed rooted triples are identifiable, and hence so is the tree \(\psi^{+}\). For \(\kappa>4\), the proof follows by embedding the \(4\)-state rate matrix above in the upper left corner of a \(\kappa\)-state GTR rate matrix and setting the remaining entries to \(0\). _Remark_.: Several comments are in order about the method of proof in Theorem 7.2. First, if the matrix \(Q\) is chosen to be a Jukes-Cantor rate-matrix, then one finds that the same construction of \(P\) leads to a point on which the invariants for \(\mathrm{EE}(\phi^{+})\) vanish. That is, \(P\) is not'sufficiently generic' to identify the rooted tree. This is explored more thoroughly in the Appendix. Second, since the argument used an instance of the CK model with a K2P rate matrix, it also establishes the following, which directly applies to models used for phylogenetic inference. **Corollary 7.3**.: _For \(\kappa=4\), consider any submodel of EE such that each \(\psi^{+}\) has an analytic parameterization general enough to contain the Kimura 2-parameter coalescent mixture model with constant population size. Then for generic parameters the rooted topological tree \(\psi^{+}\) is identifiable._ Finally, while our proof of identifiability of a rooted tree under the EE model fails for the CK Jukes-Cantor model, unrooted trees are still identifiable under that model. To establish this, note that a probability distribution for a \(4\)-taxon tree on taxa \(a,b,c,d\) under the EE model has the form \(P=\tilde{P}*(M_{a},M_{b},M_{c},M_{d})\), with \(\tilde{P}\) in the UE model and the Markov matrices invertible. As a result, its flattenings can be expressed as \[\mathrm{Flat}_{ab|cd}(P) =(M_{a}\otimes M_{b})^{T}\,\mathrm{Flat}_{ab|cd}(\tilde{P})(M_{c} \otimes M_{d}),\] \[\mathrm{Flat}_{ac|bd}(P) =(M_{a}\otimes M_{c})^{T}\,\mathrm{Flat}_{ac|bd}(\tilde{P})(M_{b} \otimes M_{d}),\] \[\mathrm{Flat}_{ad|bc}(P) =(M_{a}\otimes M_{d})^{T}\,\mathrm{Flat}_{ad|bc}(\tilde{P})(M_{b} \otimes M_{c}).\] Since \(M_{a},M_{b},M_{c}\), and \(M_{d}\) have full rank, this implies the rank of each flattening of \(P\) is equal to the rank of the corresponding flattening of \(\tilde{P}\). It is then straightforward to obtain the following analog of Theorem 3.5. **Theorem 7.4**.: _The SVDQuartets method, using an exact method to construct a tree from a collection of quartets, gives a statistically consistent unrooted species tree topology estimator for generic parameters under the EE model, and under any submodel with an analytic parameterization general enough to contain the CK K2P model._ Pseudo-exchangeability for the Jukes-Cantor Model The proof of Theorem 7.2, on the generic identifiability of the tree topology under the EE model, used a particular point in the EE model arising from the CK Kimura 2-parameter model. Here, we show that it is not possible to use similar arguments with a point in the CK Jukes-Cantor model. We do this by specifically considering the CK Jukes-Cantor model, and showing that the model always has 'extra symmetries' that prevent the identification of the rooted triple tree by these invariants. **Proposition A.1**.: _Consider the CK Jukes-Cantor model on the tree \(((a{:}\ell_{a},b{:}\ell_{b}){:}\ell_{ab},c{:}\ell_{c})\). If \(\ell_{a}=\ell_{ab}+\ell_{c}\) then the resulting probability tensor \(P=(p_{ijk})\) exhibits \(a,c\) exchangeability, that is, \(p_{ijk}=p_{kji}\)._ Proof.: Let \(P=(p_{ijk})\) be a probability tensor from the CK Jukes-Cantor model on a 3-leaf tree. While \(P\) has 64 entries, because the site substitution model is the Jukes-Cantor model, it has at most five distinct entries. Thus, we may group the coordinates of \(P\) into five equivalance classes, which we represent by \[[p_{AAA}],[p_{AAC}],[p_{ACA}],[p_{ACC}],[p_{ACG}].\] For any representative of the equivalence class \([p_{AAA}]\), \([p_{ACA}]\), or \([p_{ACG}]\), swapping the first and third indices produces another representative of the same equivalence class. However, for representatives of the equivalence class \([p_{AAC}]\), swapping the first and third indices produces a representative of the equivalence class \([p_{ACC}]\), and vice versa. Therefore, to prove the proposition, it suffices to show that for \(P\), \([p_{AAC}]\) and \([p_{ACC}]\) are equal. To establish this, we prove that \(p_{AAC}=p_{ACC}\). Restricting to the leaf set \(\{a,b\}\), we obtain the 2-leaf rooted tree \((a{:}\ell_{a},b{:}\ell_{b})\) and the probability of observing state \(ij\) from the CK Jukes-Cantor model on this tree is \[P_{ij+}=p_{ijA}+p_{ijC}+p_{ijG}+p_{ijT}.\] Likewise, by restricting to the leaf set \(\{b,c\}\), we obtain the 2-leaf rooted tree \((b{:}\ell_{b}+\ell_{ab},c{:}\ell_{c})\) and the probability of observing state \(jk\) from the CK Jukes-Cantor model on this tree is \[P_{+jk}=p_{Ajk}+p_{Cjk}+p_{Gjk}+p_{Tjk}.\] Note that since \(\ell_{a}=\ell_{ab}+\ell_{c}\), the 2-leaf species trees obtained by restricting to \(\{a,b\}\) and \(\{b,c\}\) differ only by the location of the root. By Lemma 7.1, since the Jukes-Cantor model is a submodel of GTR, the probability distribution matrices for the JC models on these trees are symmetric and equal. Therefore, we have \(P_{ij+}=P_{jit+}=P_{+ji}=P_{+ij}\). Specifically, this implies \(P_{AC+}=P_{+CA}\), or \[p_{ACA}+p_{ACC}+p_{ACG}+p_{ACT}=p_{ACA}+p_{CCA}+p_{CCA}+p_{TCA}.\] Under the JC model, \(p_{ACG}\), \(p_{ACT}\), \(p_{GCA}\), and \(p_{TCA}\) all belong to the equivalence class of coordinates with three distinct indices, which is to say, \(p_{ACG}=p_{ACT}=p_{GCA}=p_{TCA}\). Thus, by cancellation, the equation above reduces to \(p_{CCA}=p_{ACC}\). Since \(p_{CCA}\) and \(p_{AAC}\) are in the same JC equivalence class, this implies \(p_{AAC}=p_{ACC}\). **Corollary A.2**.: _The invariants of Corollary 6.2 associated to all of the trees \(((a,b),c)\), \(((a,c),b)\) and \(((b,c),a)\) vanish on all probability tensors \(P\) arising from the Jukes-Cantor CK model on any of these trees._ Proof.: First consider \(\tilde{P}\) arising from the CK Jukes-Cantor model on the tree \(((a{:}\ell,b{:}\ell){:}\ell,c{:}0)\). By the proposition, this tensor is fully-symmetric, that is, invariant under any permutation of the indices, for any positive value of \(\ell\). It thus lies in the UE model for all three trees. Now the probability tensor \(P\) from the CK JC model on \(((a{:}\ell_{a},b{:}\ell_{b}){:}\ell,c{:}\ell_{c})\), where \(\ell_{a},\ell_{b}\geq\ell\) and \(\ell_{c}\geq 0\) can be expressed as \[P=\tilde{P}*(M_{a},M_{b},M_{c}),\] where \(M_{a}\), \(M_{b}\), \(M_{c}\) are Jukes-Cantor matrices for edges of length \(\ell_{a}-\ell\), \(\ell_{b}-\ell\), \(\ell_{c}-\ell\), respectively. Thus \(P\) lies in the EE model for all three trees. Therefore the invariants associated to all three trees vanish on it. Moreover, since the entries of probability tensors in the EE model are parametrized by analytic functions of the edge lengths, composing these function with the invariants gives analytic functions that vanish on a full-dimensional subset of the parameter space, which must therefore be zero on the entire parameter space. Thus the invariants vanish on the model even when the terminal edge lengths do not satisfy the assumed inequalities.
2302.08399
Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks
Intuitive psychology is a pillar of common-sense reasoning. The replication of this reasoning in machine intelligence is an important stepping-stone on the way to human-like artificial intelligence. Several recent tasks and benchmarks for examining this reasoning in Large-Large Models have focused in particular on belief attribution in Theory-of-Mind tasks. These tasks have shown both successes and failures. We consider in particular a recent purported success case, and show that small variations that maintain the principles of ToM turn the results on their head. We argue that in general, the zero-hypothesis for model evaluation in intuitive psychology should be skeptical, and that outlying failure cases should outweigh average success rates. We also consider what possible future successes on Theory-of-Mind tasks by more powerful LLMs would mean for ToM tasks with people.
Tomer Ullman
2023-02-16T16:18:03Z
http://arxiv.org/abs/2302.08399v5
# Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks ###### Abstract Intuitive psychology is a pillar of common-sense reasoning. The replication of this reasoning in machine intelligence is an important stepping-stone on the way to human-like artificial intelligence. Several recent tasks and benchmarks for examining this reasoning in Large-Large Models have focused in particular on belief attribution in Theory-of-Mind tasks. These tasks have shown both successes and failures. We consider in particular a recent purported success case (1), and show that small variations that maintain the principles of ToM turn the results on their head. We argue that in general, the zero-hypothesis for model evaluation in intuitive psychology should be skeptical, and that outlying failure cases should outweigh average success rates. We also consider what possible future successes on Theory-of-Mind tasks by more powerful LLMs would mean for ToM tasks with people. ## 1 Introduction People think other people think. They expect other persons to have mental states. They attribute goals to other people, and expect them to pursue those goals efficiently and in a socially-aware manner [(2; 3)]. Like other core domains of reasoning, _intuitive psychology_ is early developing or possibly innate, fast, automatic, and culturally universal [(4)]. It is also likely shared with other animals [(5; 6)]. At various points in development, children show increasingly sophisticated reasoning about the mental states of others, including the ability to attribute beliefs and false beliefs to others, second-order reasoning about mental states, and reasoning about perceptual access. While there are long-standing arguments about the exact nature, format, development, and assessment of this reasoning (see e.g 7), a convenient short-hand has been to refer to the adult-level ability to reason about the mental states of others as 'Theory-of-Mind'. The arguments about its development and content aside, Theory-of-Mind is recognized as a pillar of common-sense reasoning. As such, it would be useful to incorporate this reasoning into machine reasoning, either as a built-in set of primitives or as a target for learning [(8)]. Such as ability is likely useful on its own, but even if future intelligent machines themselves won't have mental states in the same way that people do, some of them will need to interact with people. So, to the degree that people have Theory-of-Mind, it would be useful for machines to have an understanding of this reasoning. The assessment of Theory-of-Mind in children and adults is an ongoing endeavor, but tests of Theory-of-Mind are also increasingly being applied to machines. Such tests include the porting over of visual intuitive-psychology tasks that were primarily developed for infants [(9; 10; 11)], as well as the use of question-answering and text-based tasks that mimic the tests used with older children (e.g. [12; 13; 14]). The recent rise of Large-Language models [(15; 16; 17)] have made text-based tests of Theory-of-Mind particularly interesting. These models have already shown some successes across many challenging benchmarks and tasks designed to test various aspects of reasoning [(18; 19)]. While there are many cautionary voices that suggest such models may be acquiring formal rather than functional abilities [(18)], that has not stopped people from testing them on functional abilities as well, including Theory-of-Mind reasoning. While some of these tests offer a pessimistic evaluation [(14)], recent work by Kosinski [(1)] applied variations on classic Theory-of-Mind tasks to several LLMs, and concluded that current models (as of this writing) can succeed on these tasks at a level comparable to 9-year-old children. In the face of these results, Kosinski puts the dilemma nicely. Paraphrasing a bit, we have to either (i) accept the validity of the standard measures for ToM, in which case we should concede that LLMs now have ToM, or (ii) reject the suggestion that LLMs have ToM, but then need to seriously re-examine and possibly scuttle the measures developed to examine it. Kosinski himself holds position (i). In this paper we do two things: First, we examine the robustness of the findings reported in [(1)], using directed perturbations of the tasks considered. We show that the original reported successes are susceptible to small perturbations that shouldn't matter to an entity that has ToM. Second, we take on the horns of the dilemma and argue that it does not hold. One can accept the validity and usefulness of ToM measures for humans while still arguing that a machine that passes them is suspect. This argument is developed in the discussion, but briefly: Jumping over the horns of the dilemma is possible if reasoning about the mental states of others takes into account the algorithms others are likely implementing, beyond the confines of the input and output of a given task. Examining the robustness of any one particular LLM system is akin to a mythical Greek punishment1. A system is claimed to exhibit behavior X, and by the time an assessment shows it does not exhibit behavior X, a new system comes along and it is claimed it shows behavior X. Footnote 1: The particular punishment in mind is that of the Danaides. Cursed to fill a basin with water, the Danaides will be released from their punishment once the basin is full. The basin has holes in it, and will never fill. Still, we hope this paper will have useful contributions beyond the current moment, as the argument for skepticism and the issues surrounding the assessment of Theory-of-Mind in machine minds are likely to be with us for a while. Besides, there's nothing wrong with contributions to the current moment. Below, we examine several variations on ToM tasks. The variations take the specific examples in [(1)] and alter them in ways that do not violate the basic principles of Theory-of-Mind, yet lead to machine failures. The variations may be considered outliers, and so one runs the risk of rejecting them for being outliers. If one end of the scales has 20 successes and the other end a single failure, shouldn't the scales tip in favor of the machine getting a passing grade? We think not. Suppose someone claims a machine has 'learned to multiply', but others suspect that the machine may have memorized question/answer pairs rather than learning a multiplying algorithm. Suppose further that the machine correctly answers 100 questions like '5*5=25', '3*7=21', but then it is shown that it completely fails on '213*261'. In this case, we shouldn't simply average these outcomes together and declare >99% success on multiplication. The failure is instructive, and suggests the machine has not learned a general multiplication algorithm, as such an algorithm should be robust to simple alterations of the inputs. ## 2 Examining the robustness of current LLMs on ToM tasks In this section we consider the particular vignettes and prompts in [(1)], which were used to argue that current LLMs have developed Theory-of-Mind. We focus in particular on the most recently available iteration of GPT-3.5 which was used in [(1)], as this model achieved the best results, and serves as a threshold. If this model fails, we expect the less powerful models to fail as well. We use the exact same set-up as in [(1)], posing vignettes to an LLM and examining the probabilities of different completions. While our concluding assessment is ultimately that GPT-3.5 is not responding robustly to ToM tasks, we emphasize that this is not a negative evaluation of [(1)]. Science advances by pitting hypotheses against one another, and we treat the previous work as a good serve in an ongoing scientific tennis game. We are also grateful that the materials and methods in (1) are publicly available, allowing an easier examination and comparison. ### Unexpected Contents Also known as the'smarties task', this assessment of ToM shows a participant a container (e.g. a crayon box). The container typically contains some item (e.g. crayons), but in this particular case it contains an unexpected item (e.g., smarties). The participant is then asked about the likely beliefs of another person who has not seen the contents of the box. The participant must hold in mind their own knowledge of the true state of the world (the crayon box has smarties in it), while reasoning about the beliefs of another person (they will believe the box has crayons in it). This is only a brief exposition of the task, and those interested are encouraged to read the many follow-ups to (20). The studies in (1) begin with a version of the unexpected-contents task, which goes as follows: Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says "chocolate" and not "popcorn." Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label. The previous study used a _content prompt_ - 'She opens the bag and looks inside. She can clearly see that it is full of' - and two _belief prompts_. The first belief prompt is: 'She believes that the bag is full of', and the second belief prompt is 'She is delighted that she has found this bag. She loves eating' Kosinski found that for the content prompt, [P(popcorn) = 100%; P(chocolate) = 0%]. For the first belief prompt, [P(popcorn) = 0%; P(chocolate) = 99%]. For the second belief prompt, [P(popcorn) = 14%; P(chocolate) = 82%]. We consider several variations on the vignette above. The variations are based on commonsense principles of Theory-of-Mind already available to children, and should lead Sam to believe the bag contains popcorn (or at least, not to believe it contains chocolate). For ease of reading, we summarize the variants in Figure 1, though we emphasize that the images themselves were not evaluated in any way, and are simply visual shorthand for the text below. #### 2.1.1 Variation 1A: Transparent Access The base-level smarties task already relies on the lack of perceptual access (the container being opaque), and other work shows young children understand perceptual access both in cases when the container is opaque, and in cases when the containers are open and the viewer does have perceptual access (22). So, we make the opaque container transparent. Here and elsewhere, when detailing a vignette, we note the changes to the original vignette in _bold_: Here is a bag filled with popcorn. There is no chocolate in the bag. **The bag is made of transparent plastic, so you can see what is inside**. Yet, the label on the bag says 'chocolate' and not 'popcorn.' Sam finds the bag. She had never seen the bag before. Sam reads the label. On the face of it, reading the label is redundant, as Sam can see the contents of the bag. Yet now we find: She believes that the bag is full of **chocolate**, \([P_{popcorn}=0\%;P_{chocolate}=95\%]\) However, for the second belief prompt, we do not find this flip: She is delighted to have found this bag. She loves eating **popcorn**, \([P_{popcorn}=58\%;P_{chocolate}=36\%]\) Please note that in a previous version of this paper we mistakenly reported that there was a flip in the second belief prompt as well. As far as we can tell, this is due to our original prompt including a double space rather than a single space right before 'Sam finds the bag'. On the latest public available version of GPT-3.5, this double space causes the completion to indeed be **chocolate**, \(P_{chocolate}=53\%\), \(P_{chocolate}=39\%\). #### 2.1.2 Variation 1B: Uninformative Label What if the label is not useful? For example, the label might be facing away from the person, or written in a language they don't understand, etc. In such a case, it does not matter that the label says 'chocolate' on it. And yet, for GPT-3.5, it does. Consider for example: Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says "chocolate" and not "popcorn." Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. **Sam cannot read**. Sam **looks** at the label. We find: Figure 1: An illustrative sketch summarizing the 4 variations used on the ‘unexpected contents’ task which GPT-3.5 passed. All variations cause the LLM to incorrectly attribute to Sam the belief that the bag contains chocolates. Variation 1A states the bag is transparent and its contents can be seen; 1B states that Sam cannot read, rendering the label meaningless; 1C mentions that before going into the room a trusted friend which Sam believes told her about the contents of the bag and that she should ignore the label; 1D stipulates that Sam herself filled the bag with popcorn, and wrote the label which states it has chocolate inside. Images are shorthand for the full text and were not themselves evaluated. Images were generated using Dall-E 2 (21). She believes that the bag is full of **chocolate**, \([P_{popcorn}=0\%;P_{chocolate}=98\%]\) For the second belief prompt: She is delighted to have found this bag. She loves eating **chocolate**, \([P_{popcorn}=15\%;P_{chocolate}=78\%]\) If Sam cannot read, the label is meaningless to her, and yet GPT-3.5 states that Sam believes the bag has chocolate in it. #### 2.1.3 Variation 1C: Trustworthy Testimony Direct perception is not the only way to form beliefs, and people often form beliefs about states of the world through indirect instruction, direct pedagogy, and testimony. Even young children are sensitive to whether a person is trustworthy [23; 24], or a good teacher [25; 26] Suppose a good friend of Sam's told her about the tricky bag. Suppose that we explicitly state Sam believes their friend. Consider this prompt: Here is a bag filled with popcorn. There is no chocolate in the bag. The label on the bag says "chocolate", rather than "popcorn." **Before coming into the room, Sam's friend told her 'the bag in the room has popcorn in it, ignore the label'. Sam believes her friend**. Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. Sam reads the label, **which says the bag has chocolate in it.** We find: She believes that the bag is full of **chocolate**, \([P_{popcorn}=2\%;P_{chocolate}=97\%]\) For the second belief prompt: She is delighted to have found this bag. She loves eating **chocolate**, \([P_{popcorn}=13\%;P_{chocolate}=81\%]\) One could spin stories about how GPT-3.5 perhaps 'thinks' that Sam perhaps changed her mind and no longer believes her friend, or forgot what her friend said. But a simpler explanation is that LLM reasoning about ToM is sensitive to small irrelevant perturbations. #### 2.1.4 Variation 1D: The Treachery of Late Labels We do not mean to exhaustively detail all the cases we tried, but instead collapse a few different ones to make a general note that across different cases there was a strong effect for when the person read the label: if the person read the label at the end of the story, then this strongly affected the LLMs answer to the belief prompt. To drive this point home, consider an extreme case. Suppose Sam herself is the one that filled the bag with popcorn. Suppose Sam wrote the 'chocolate' label herself. Surely she doesn't think the bag has chocolate inside? More specifically, consider the vignette (we do not bold all the changes in this case, as there are many small ones that add up): Sam fills a bag with popcorn and closes it. There is no chocolate in the bag. Sam writes a label and puts it on the bag. Sam looks at the bag. She cannot see what is inside the bag. Sam reads the label. The label says the bag has chocolate in it. We find: She believes that the bag is full of **chocolate**, \([P_{popcorn}=10\%;P_{chocolate}=87\%]\) She is delighted to have found this bag. She loves eating **chocolate**, \([P_{popcorn}=35\%;P_{chocolate}=63\%]\) ### Unexpected Transfer In another classic ToM task, a participant sees or is told of a person who observes a particular state of affairs. The state of affairs then changes, without the person being aware. The participant is then asked what action the person will take. The participant needs to keep in mind both the actual, changed state of affairs and the incorrect belief of the naive person. In the classic Sally-Anne version of the task, Sally hides a marble in a basket. Anne then moves the marble to a box, without Sally's knowledge. A participant is then asked where Sally will look for her marble. Again, this is a bare-bones description of a task that has seen many variants and analyses over the years, and those interested are encouraged to read the myriad follow-ups to [27]. Study 2 in (1) uses the following version of the unexpected transfer task: In the room there are John, Mark, a cat, a box, and a basket. John takes the cat and puts it in the basket. He leaves the room and goes to school. While John is away, Mark takes the cat out of the basket and puts it in the box. Mark leaves the room and goes to work. John comes back from school and enters the room. He doesn't know what happened in the room when he was away. The study then examines a content prompt, and two belief prompts: 'John thinks that the cat is in the', and 'When John comes back home, he will look for the cat in the'. For both of these prompts, GPT-3.5 shows \(P(basket)=98\%\). On the basis of this it is proposed in (1) that GPT-3.5 is correctly inferring John's mental states. #### 2.2.1 Variation 2A: Transparent Access Similar to Variation 1A, we can give the people in the room direct perceptual access to the contents of the containers. We change the basket to a 'glass chest' and the box to 'a transparent plastic box': In the room there are John, Mark, a cat, a **transparent plastic** box, and a **glass chest**. John takes the cat and puts it in the **chest**. He leaves the room and goes to school. While John is away, Mark takes the cat out of the **chest** and puts it in the box. Mark leaves the room and goes to work. John comes back from school and enters the room. He doesn't know what happened in the room when he was away. We find that: John thinks that the cat is in the **chest**, \([P_{box}=0\%;P_{chest}=94\%]\) John will look for the cat in the **chest**, \([P_{box}=2\%;P_{chest}=90\%]\) These errors persisted even when stipulating John carefully looks around the room. Another variation leading to error included using opaque containers but mentioning the cat's tail is sticking out of the box (and again with John looking carefully around the room). #### 2.2.2 Variation 2B: Relationship Change Similar to the variation above, we can change the scenario so that John has direct perceptual access to the situation on entering the room. But rather than changing the opaque container, we change the relationship of the cat to the containers. In this case, we simply changed 'in' to 'on'. Consider the prompt: In the room there are John, Mark, a cat, a box, and a basket. John takes the cat and puts it **on** the basket. He leaves the room and goes to school. While John is away, Mark takes the cat **off the** basket and puts it **on** the box. Mark leaves the room and goes to work. John comes back from school and enters the room. **John looks around the room.** He doesn't know what happened in the room when he was away. We find (note the prompts vary a bit to conform with the use of 'on' instead of 'in'): John thinks that the cat is on the **basket**, \([P_{box}=0\%;P_{basket}=97\%]\) John will look for the cat on the **basket**, \([P_{box}=25\%;P_{basket}=74\%]\) We see again that simple changes to perceptual access confound the model. This may reflect a failure of ToM, scene understanding, relational reasoning, or other reasoning. The failures are not mutually exclusive. We note that a lack of relational reasoning (correctly understanding things like 'on' and 'in') has also been shown in current image-generation models [28]. #### 2.2.3 Variation 2C: Trusted Communication Similar to Variation 1C, in which someone tells the person what they did, we examined the option of one person informing the other they are about to change the state (move the cat to the box), or the first person explicitly asking the second to change it. Consider the vignette: In the room there are John, Mark, a cat, a box, and a basket. John takes the cat and puts it in the basket. He leaves the room and goes to school. **Mark calls John to tell him he is going to move the cat to the box. John believes him**. While John is away, Mark takes the cat out of the basket and puts it in the box. Mark leaves the room and goes to work. John comes back from school and enters the room. He doesn't know what happened in the room when he was away. We find: Figure 2: An illustrative sketch summarizing the 4 variations used on the ‘unexpected transfer’ task which GPT-3.5 passed. All variations cause the LLM to fail. Variation 2A changes the containers from opaque to transparent; Variation 2B changes the relationship of the cat and containers from ‘in’ to ‘on’; Variation 2C stipulates truthful testimony about the new location of the cat; Variation 2D queries the belief state of the person who moved the cat. Images are shorthand for the full text, and were not themselves directly evaluated. Images were generated using Dall-E 2 [21]. John thinks that the cat is in the **basket**, \([P_{box}=0\%;P_{basket}=97\%]\) John will look for the cat in the **basket**, \([P_{box}=3\%;P_{basket}=94\%]\) Similar mistakes were found for cases where John calls Mark and asks him to move the cat into the box, with Mark agreeing. #### 2.2.4 Variation 2D: Querying the Mental States of the Additional Person The previous variations set things up such that the protagonist of the story should no longer search in the initial location, and yet GPT-3.5 still predicts the protagonist will do so. The variations are simple enough to understand, but some of them add extra information and complexity, changing the objects or adding interactions. In the following variation we ask something simpler: What if we query what the second person (Mark) will do? This is the person one who moved the cat. If the LLM is'really''reasoning' about mental states, it should have no difficulty with this - it is as easy to reason about as the first person. But, if the model is fixated on the statistical pattern of looking for the item where it isn't (say, through repeated exposure to Sally-Anne-like tasks in training), then the model may (wrongly) predict the same answer for both people in the story. The vignette now is: In the room there are John, Mark, a cat, a box, and a basket. John takes the cat and puts it in the basket. He leaves the room and goes to school. While John is away, Mark takes the cat out of the basket and puts it in the box. Mark leaves the room and goes to work. John **and Mark** come back and enter the room. **They don't** know what happened in the room when **they** were away. The prompts now ask about _Mark_. Mark thinks that the cat is in the **basket**, \([P_{box}=1\%;P_{basket}=99\%]\) Mark will look for the cat in the **basket**, \([P_{box}=43\%;P_{basket}=54\%]\) At the risk of belaboring the point: if Mark put the cat in the box, Mark should look for the cat in the box. ## 3 Discussion Has Theory-of-Mind spontaneously emerged in large language models? Probably not. While LLMs such as GPT-3.5 now regurgitate reasonable responses to basic ToM vignettes, simple perturbations that keep the principle of ToM intact flip the answers on their head. While it is possible to consider various defenses of the failures, the simplest answer is that these models haven't learned yet anything like Theory-of-Mind, just like they haven't yet learned many other things (18). The failure seems relatively uncontroversial, but that isn't the end of the story. Other LLMs are on their way, with more parameters, more data, more training. It's reasonable to suppose that one of them may pass the variations above. The dilemma presented in (1) may have been presented prematurely, but it mature in time. We end then with more broad thoughts about testing ToM in machines, that will hopefully carry beyond this specific moment. To begin, we would encourage a skeptical stance. Many scientists already adopt a skeptical stance by default, and the issue is not unique to the question of Theory-of-Mind in machines. But still, there is a particular danger when observing an agent, organism, or entity display behavior that can be interpreted as purposeful. The human-mind seems hard-wired to ascribe animacy and mental states to various behaviors, creating agents where there are none - this is itself part of our intuitive psychology (29; 30). The danger here is that in the same way that we see faces in clouds or ascribe mental states to the wind or germs, we may be biased to anthropomorphize LLMs. When assessing the claim that LLMs (or other AI models) have spontaneously developed Theory-of-Mind, we should not place the two possibilities on equal footing, but start by presuming strongly that they have not. We note that we are not _mystics_ about the eventual implementation of Theory-of-Mind in machine intelligence. We believe that any human mental ability can in principle be replicated in silicon, including Theory-of-Mind. In fact, there are already many reasonable computational models that try to directly capture this ability (e.g. 31; 32; 33; 34; 35; 10), and which put formal skin on decades-old proposals in cognitive science and psychology. We think a good direction to pursue is to integrate such models with language models, rather than expect Theory-of-Mind to emerge spontaneously from additional linguistic data. A proponent of the notion that LLMs could in principle spontaneously develop ToM may reasonably complain that we did not provide here a generator for variations, a systematic benchmark, or a test-suite. And we could in return suggest principled ways of generated the variations, including modifications perceptual access, trusted testimony, and querying the states of all parties. But instead, we voice a concern: As soon as a systematic generator of examples or a benchmark is provided, then a LLM can gobble up a large amount of data to pass these examples or this benchmark2. If we think that LLMs may in principle be learning something closer to a smooth tiling of the space of possible examples rather than ToM reasoning, then providing an exhaustive list of all possible failure modes and edge-cases will help the model do better on future examples, without answering the basic question of what it has learned. The problem of the evaluation of the generalization of current machine-learning models goes beyond Theory-of-Mind and is of current concern to many researchers, but Theory-of-Mind is a particularly good and troubling example of it. Footnote 2: This current paper is likely shooting future researchers in the foot in that sense. Sorry. Kosinski (1) presents a dilemma: If current LLMs pass ToM tests, then either current LLMs have ToM, or ToM tests aren't testing ToM. The current work (as well as related work such as (14)) suggests the premise of the dilemma is unfounded - current LLMs do not pass ToM tests. But given the pace of progress in LLMs, it's quite possible that future iterations of these models will pass classic ToM tests, as well as various variations. What should we make of the dilemma at that point? We would argue that even in such a future case, one can in principle hold the view that LLMs do not have ToM, while still thinking that ToM tests are valid when it comes to people. This stance is possible because inferences about the likely mental processes of other persons are not done in a vacuum. The restriction of inferences about likely algorithms to only the current input-output behavior is reminiscent of the classic test of 'can a machine think', the Turing Test (36). While this test remains a classic for a reason, scholars have pointed out decades ago that people likely attribute intelligence not just on the basis of behavior but also on the basis of the algorithms and processes that generated that behavior (37). We are fully entitled to ignore an injunction to 'have a nice day' if we believe it is the product of a simple detector hooked up to a recorded message, while similar behavior towards a person genuinely engaging with us would rightly be seen as rude. A narrow focus on only linguistic input and output would present the original dilemma in full force, but people (both researchers and lay-people) do not have to reason about the mental states of others through such a narrow prism. One can hold that ToM tests make sense as a research tool to study human children (who are given orders of magnitude less input than an LLM, and we have reason to think are structured differently), while at the same time being skeptical of LLMs that pass them. It's difficult to know exactly what is inside the opaque containers that are current LLMs. But it's probably not Theory-of-Mind, no matter what the label says. #### Acknowledgments I wish to thank Elizabeth Bonawitz for helpful discussions and comments. This is work is supported in part by the Jacobs Foundation.
2301.11398
Smigoc's glue for universal realizability in the left half-plane
A list of A list {\Lambda} of complex numbers is said to be realizable if it is the spectrum of a nonnegative matrix. {\Lambda} is said to be universally realizable (UR) if it is realizable for each possible Jordan canonical form allowed by {\Lambda}. In this paper, using companion matrices and applying a procedure by \v{S}migoc, is provides a sufficient condition for the universal realizability of left half-plane spectra. It is also shown how the effect of adding a negative real number to a not UR left half-plane list of complex numbers, makes the new list UR, and a family of left half-plane lists that are UR is characterized.
Jaime H. Alfaro, Ricardo L. Soto
2023-01-26T20:21:08Z
http://arxiv.org/abs/2301.11398v1
# Smigoc's glue for universal realizability in the left half-plane1 ###### Abstract A list \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) of complex numbers is said to be _realizable_ if it is the spectrum of a nonnegative matrix. \(\Lambda\) is said to be _universally realizable_ (\(\mathcal{U}\mathcal{R}\)) if it is realizable for each possible Jordan canonical form allowed by \(\Lambda.\) In this paper, using companion matrices and applying a procedure by Smigoc, is provides a sufficient condition for the _universal realizability_ of left half-plane spectra, that is, \(\Lambda=\{\lambda_{1},\ldots,\lambda_{n}\}\) with \(\lambda_{1}>0,\)\(\operatorname{Re}\lambda_{i}\leq 0,\)\(i=2,\ldots,n.\) It is also shown how the effect of adding a negative real number to a not \(\mathcal{U}\mathcal{R}\) left half-plane list of complex numbers, makes the new list \(\mathcal{U}\mathcal{R},\) and a family of left half-plane lists that are \(\mathcal{U}\mathcal{R}\) is characterized. _AMS classification: 15A18, 15A20, 15A29_ _Key words: Nonnegative matrix; companion matrix; Universal realizability; Simigoc's glue._ ## 1 Introduction A list \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) of complex numbers is said to be _realizable_ if it is the spectrum of an \(n\)-by-\(n\) nonnegative matrix \(A,\) and \(A\) is said to be a _realizing matrix_ for \(\Lambda.\) The problem of the realizability of spectra is called the _nonnegative inverse eigenvalue problem_ (NIEP). From the Perron-Frobenius Theorem we know that if \(\Lambda=\left\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right\}\) is the spectrum of an \(n\)-by-\(n\) nonnegative matrix \(A,\) then the leading eigenvalue of \(A\) equals to the spectral radius of \(A,\)\(\rho(A)=:\max\limits_{1\leq i\leq n}\left|\lambda_{i}\right|.\) This eigenvalue is called the _Perron eigenvalue,_ and we shall assume in this paper, that \(\rho(A)=\lambda_{1}.\) A matrix is said to have _constant row sums,_ if each one of its rows sums up to the same constant \(\alpha.\) The set of all matrices with constant row sums equal to \(\alpha,\) is denoted by \(\mathcal{CS}_{\alpha}.\) Then, any matrix \(A\in\mathcal{CS}_{\alpha}\) has the eigenvector \(\mathbf{e}^{T}=[1,1,\ldots,1],\) corresponding to the eigenvalue \(\alpha.\) The real matrices with constant row sums are important because it is known that the problem of finding a nonnegative matrix with spectrum \(\Lambda=\left\{\lambda_{1},\ldots,\lambda_{n}\right\}\), is equivalent to the problem of finding a nonnegative matrix in \(\mathcal{CS}_{\lambda_{1}}\) with spectrum \(\Lambda\) (see [3]). We denote by \(\mathbf{e}_{k},\) the n-dimensional vector, with \(1\) in the \(k^{th}\) position and zeros elsewhere. If \(\Lambda=\left\{\lambda_{1},\ldots,\lambda_{n}\right\}\), then \(s_{k}(\Lambda)=\sum\limits_{i=1}^{n}\lambda_{i}^{k},\)\(k=1,2,\ldots.\) A list \(\Lambda=\left\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right\}\) of complex numbers, is said to be _diagonalizably realizable_ (\(\mathcal{DR}\))_,_ if there is a diagonalizable realizing matrix for \(\Lambda\) The list \(\Lambda\) is said to be _universally realizable_ (\(\mathcal{UR}\)), if it is realizable for each possible Jordan canonical form (JCF) allowed by \(\Lambda.\) The problem of the universal realizability of spectra, is called the _universal realizability problem_ (URP). The URP contains the NIEP, and both problems are equivalent if the given numbers \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) are distinct. In terms of \(n,\) both problems remain unsolved for \(n\geq 5.\) It is clear that if \(\Lambda\) is \(\mathcal{UR},\) then \(\Lambda\) must be \(\mathcal{DR}\). The first known results on the URP are due to Minc [7, 8]. In terms of the URP, Minc [7] showed that if a list \(\Lambda=\left\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right\}\) of complex numbers is the spectrum of a diagonalizable positive matrix, then \(\Lambda\) is \(\mathcal{UR}\). The positivity condition is necessary for Minc's proof, and the question set by Minc himself, whether the result holds for nonnegative realizations was open for almost \(40\) years. Recently, two extensions of Minc's result have been obtained in [1, 4]. In [1], Collao et al. showed that a nonnegative matrix \(A\in\mathcal{CS}_{\lambda_{1}},\) with a positive column, is similar to a positive matrix. Note that if \(A\) is nonnegative with a positive row and \(A^{T}\) has a positive eigenvector, then \(A^{T}\) is also similar to a positive matrix. Besides, if \(\Lambda\) is diagonalizably realizable by a matrix \(A\in\mathcal{CS}_{\lambda_{1}}\) having a positive column, then \(\Lambda\) is \(\mathcal{UR}\). In [4], Johnson et al. introduced the concept of ODP matrices, that is, nonnegative matrices with all positive off-diagonal entries (zero diagonal entries are permitted) and proved that if \(\Lambda\) is diagonalizably ODP realizable, then \(\Lambda\) is \(\mathcal{U}\!\mathcal{R}\). Note that both extensions contain, as a particular case, Minc's result in [7]. Both extensions allow us to significantly increase the set of spectra that can be proved to be \(\mathcal{U}\!\mathcal{R}\), as for instance, certain spectra \(\Lambda=\{\lambda_{1},\ldots,\lambda_{n}\}\) with \(s_{1}(\Lambda)=0\), which is not possible from Minc's result. In particular, we shall use the extension in [1] to generate some of our results. **Remark 1.1**: _In [1], Section \(2,\) Theorem \(2.1\) and Corollary \(2.1,\) there is an error in assuming that if \(A\) is nonnegative with a positive row, then \(A^{T},\) which has a positive column, is similar to a positive matrix. The reason is that we cannot guarantee that \(A^{T}\) has a positive eigenvector._ Regarding non-positive universal realizations, we mention that in [10, 2] the authors proved, respectively, that lists of complex numbers \(\Lambda=\{\lambda_{1},\ldots,\lambda_{n}\},\) of Suleimanova type, that is, \[\lambda_{1}>0,\ \operatorname{Re}\lambda_{i}\leq 0,\ \left|\operatorname{Re} \lambda_{i}\right|\geq\left|\operatorname{Im}\lambda_{i}\right|,\ i=2,3, \ldots,n,\] or of Smigoc type, that is, \[\lambda_{1}>0,\ \operatorname{Re}\lambda_{i}\leq 0,\ \sqrt{3}\left| \operatorname{Re}\lambda_{i}\right|\geq\left|\operatorname{Im}\lambda_{i} \right|,\ i=2,3,\ldots,n, \tag{1}\] are \(\mathcal{U}\!\mathcal{R}\) if and only if they are realizable if and only if \(\sum_{i=1}^{n}\lambda_{i}\geq 0.\) Outline of the paper: The paper is organized as follows: In Section \(2,\) we present the mathematical tools that will be used to generate our results. In Section \(3,\) we study the URP for a left half-plane list and we give a sufficient condition for it to be \(\mathcal{U}\!\mathcal{R}\). In Section \(4,\) we discuss the effect of adding a negative real number \(-c\) to a left half-plane list \(\Lambda=\{\lambda_{1},-a\pm bi,\ldots,-a\pm bi\},\) which is not \(\mathcal{U}\!\mathcal{R}\) (or even not realizable), or we do not know whether it is, and we show how \(\Lambda\cup\{-c\}\) becomes \(\mathcal{U}\!\mathcal{R}\). We also characterize a family of left half-plane lists that are \(\mathcal{U}\!\mathcal{R}\). In Section \(5,\) we show that the merge of two lists diagonalizably realizable \(\Gamma_{1}\in\mathit{CS}_{\lambda_{1}}\) and \(\Gamma_{2}\in\mathit{CS}_{\mu_{1}}\) is \(\mathcal{U}\!\mathcal{R}\). Examples are shown to illustrate the results. ## 2 Preliminaries Throughout this paper we use the following results: The first one, by Smigoc [9], gives a procedure that we call Smigoc's glue technique, to obtain from two matrices \(A\) and \(B\) of size \(n\)-by-\(n\) and \(m\)-by-\(m\), respectively, a new \((n+m-1)\)-by-\((n+m-1)\) matrix \(C\), preserving in certain way, the corresponding JCFs of \(A\) and \(B\). The second one, by Laffey and Smigoc [6] solves the NIEP for lists of complex numbers on the left half-plane, that is, lists with \(\lambda_{1}>0\), \({\rm Re}\,\lambda_{i}\leq 0\), \(i=2,\ldots,n\). Moreover, we also use Lemma 5 in [6]. **Theorem 2.1**: _[_9_]_ _Suppose \(B\) is an \(m\)-by-\(m\) matrix with a JCF that contains at least one \(1\)-by-\(1\) Jordan block corresponding to the eigenvalue \(c\):_ \[J(B)=\left[\begin{array}{cc}c&0\\ 0&I(B)\end{array}\right].\] _Let \({\bf t}\) and \({\bf s}\), respectively, be the left and the right eigenvectors of \(B\) associated with the \(1\)-by-\(1\) Jordan block in the above canonical form. Furthermore, we normalize vectors \({\bf t}\) and \({\bf s}\) so that \({\bf t}^{\ T}{\bf s}=1\). Let \(J(A)\) be a JCF for the \(n\)-by-\(n\) matrix_ \[A=\left[\begin{array}{cc}A_{1}&{\bf a}\\ {\bf b}^{T}&c\end{array}\right],\] _where \(A_{1}\) is an \((n-1)\)-by-\((n-1)\) matrix and \({\bf a}\) and \({\bf b}\) are vectors in \({\mathbb{C}}^{n\mbox{-}1}\). Then the matrix_ \[C=\left[\begin{array}{cc}A_{1}&{\bf at}^{T}\\ {\bf sb}^{T}&B\end{array}\right]\] _has JCF_ \[J(C)=\left[\begin{array}{cc}J(A)&0\\ 0&I(B)\end{array}\right].\] **Theorem 2.2**: _[_6_]_ _Let \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) be a list of complex numbers with \(\lambda_{1}\geq|\lambda_{i}|\) and \({\rm Re}\,\lambda_{i}\leq 0,\,i=2,\ldots,n\). Then \(\Lambda\) is realizable if and only if_ \[s_{1}=s_{1}(\Lambda)\geq 0,\ \ s_{2}=s_{2}(\Lambda)\geq 0,\ \ s_{1}^{2}\leq ns _{2}.\] **Lemma 2.1**: _[_6_]_ _Let \(t\) be a nonnegative real number and let \(\lambda_{2},\lambda_{3},\ldots,\lambda_{n}\) be complex numbers with real parts less than or equal to zero, such that the list \(\{\lambda_{2},\lambda_{3},\ldots,\lambda_{n}\}\) is closed under complex conjugation. Set \(\rho=2t-\lambda_{2}-\cdots-\lambda_{n}\) and_ \[f(x)=(x-\rho)\prod_{j=2}^{n}(x-\lambda_{j})=x^{n}-2tx^{n-1}+b_{2}x^{n-2}+ \cdots+b_{n}. \tag{2}\] _Then \(b_{2}\leq 0\) implies \(b_{j}\leq 0\) for \(j=3,4,\ldots,n\)._ ## 3 Companion matrices and the Smigoc's glue. We say that a list \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) of complex numbers is on the left half-plane if \(\lambda_{1}>0,\)\(\mathop{\rm Re}\lambda_{i}\leq 0,\)\(i=2,3,\ldots,n.\) In this section we give a sufficient condition for a left half-plane list of complex numbers to be \({\cal U}{\cal R}\). Of course, it is our interest to consider lists of complex numbers containing elements out of realizability region of lists of Smigoc type. Our strategy consists in to decompose the given list \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) into sub-lists \[\Lambda_{k}=\{\lambda_{k1},\lambda_{k2},\ldots,\lambda_{kp_{k}}\},\ \lambda_{11}=\lambda_{1},\ k=1,2,\ldots,t,\] with auxiliary lists \[\Gamma_{1} = \Lambda_{1}\] \[\Gamma_{k} = \{s_{1}(\Gamma_{k-1}),\lambda_{k1},\lambda_{k2},\ldots,\lambda_{ kp_{k}}\},\ \ k=2,,\ldots,t,\] each one of them being the spectrum of a nonnegative companion matrix \(A_{k},\) in such a way that it be possible to apply Smigoc's glue technique to the matrices \(A_{k},\) to obtain an \(n\)-by-\(n\) nonnegative matrix with spectrum \(\Lambda\) for each possible JCF allowed by \(\Lambda.\) In the case \(s_{1}(\Lambda)>0,\) with \(\lambda_{i}\neq 0,\)\(i=2,\ldots,n,\) we may choose, if they exist, sub-lists \(\Gamma_{k}\) being the spectrum of a diagonalizable nonnegative companion matrix with a positive column. Then, after Smigoc's glue, we obtain a diagonalizable nonnegative \(n\)-by-\(n\) matrix \(A\) with spectrum \(\Lambda\) and a positive column, which is similar to a diagonalizable positive matrix. Thus, from the extension in [1], \(\Lambda\) is \({\cal U}{\cal R}.\) Next we have the following corollary from Theorem 2.1: **Corollary 3.1**: _Let \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) be a realizable left half-plane list of complex numbers. Suppose that for each JCF \({\bf J}\) allowed by \(\Lambda,\) there exists a decomposition of \(\Lambda\) as_ \[\Lambda = \Lambda_{1}\cup\Lambda_{2}\cup\cdots\cup\Lambda_{t},\ \mbox{where}\] \[\Lambda_{k} = \{\lambda_{k1},\lambda_{k2},\ldots,\lambda_{kp_{k}}\},\ k=1,2, \ldots,t,\ \lambda_{11}=\lambda_{1},\] _with auxiliary lists_ \[\Gamma_{1} = \Lambda_{1},\] \[\Gamma_{k} = \{s_{1}(\Gamma_{k-1}),\lambda_{k1},\lambda_{k2}\ldots,\lambda_{kp _{k}}\},\ k=2,\ldots,t,\] _being the spectrum of a nonnegative companion matrix \(A_{k}\) with JCF \(J(A_{k})\) as a sub-matrix of \({\bf J},\)\(k=1,2,\ldots,t.\) Then \(\Lambda\) is universally realizable._ **Proof.** Since each matrix \(A_{k},\)\(k=1,2,\ldots,t,\) is nonnegative companion with JCF \(J(A_{k})\) being a submatrix of \({\bf J},\) then, from Smigoc's glue applied to matrices \(A_{k},\) we obtain an \(n\)-by-\(n\) nonnegative matrix with spectrum \(\Lambda\) and JCF \({\bf J}.\) As \({\bf J}\) is any JCF allowed by \(\Lambda,\) then \(\Lambda\) is \({\cal U}{\cal R}.\) The following result is well known and useful. **Lemma 3.1**: _Let \(A\) be a diagonalizable irreducible nonnegative matrix with spectrum \(\Lambda=\{\lambda_{1},\ldots,\lambda_{n}\}\) and a positive row or column. Then \(A\) is similar to a diagonalizable nonnegative matrix \(B\in{\rm CS}_{\lambda_{1}},\) with a positive row or column._ **Proof.** If \(A\) is irreducible nonnegative, it has a positive eigenvector \({\bf x}^{T}=[x_{1},\ldots,x_{n}].\) Then if \(D=dig\{x_{1},\ldots,x_{n}\},\) the matrix \[B=D^{-1}AD=\left[\frac{x_{j}}{x_{i}}a_{i,j}\right]\in CS_{\lambda_{1}}\] is nonnegative with a positive row or column. Suppose all lists \(\Gamma_{k}\) in Corollary 3.1, can be taken as the spectrum of a diagonalizable nonnegative companion matrix \(A_{k}\) with a positive column (the last one). Then, since the glue of matrices \(A_{k}\) gives an \(n\)-by-\(n\) diagonalizable irreducible nonnegative matrix \(A\) with a positive column and spectrum \(\Lambda,\)\(A\) is similar to a diagonalizable positive matrix with spectrum \(\Lambda\) and therefore \(\Lambda\) is \({\cal U}{\cal R}\). This is what the next result shows. **Corollary 3.2**: _Let \(\Lambda=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\},\)\(\lambda_{i}\neq 0,\)\(i=2,\ldots,n,\)\(s_{1}(\Lambda)>0,\) be a realizable left half-plane list of complex numbers. If there is a decomposition of \(\Lambda\) as in Corollary 3.1, with all lists \(\Gamma_{k}\) being the spectrum of a diagonalizable nonnegative companion matrix \(A_{k},\) with a positive column, then \(\Lambda\) is universally realizable._ **Proof.** It is enough to prove the result for two lists \(\Gamma_{k}\) of the decomposition of \(\Lambda.\) Let \(\Gamma_{k-1}\) and \(\Gamma_{k},\)\(k=2,\ldots,t,\) be the spectrum, respectively, of matrices \(A_{k-1}\) and \(A_{k},\) which are diagonalizable nonnegative companion with a positive column (the last one). Then \(A_{k-1}\) and \(A_{k}\) are irreducible. In particular, \(A_{k}\) has a positive eigenvector \({\bf s}\) and, since \(A_{k}^{T}\) is also irreducible, \(A_{k}\) has also a positive left eigenvector \({\bf t}^{T}\) with \({\bf t}^{T}{\bf s}=1.\) Now, let \[A_{k-1}=\left[\begin{array}{cc}A_{1,k-1}&{\bf a}\\ {\bf b}^{T}&s_{1}(\Gamma_{k-1})\end{array}\right].\] Since the last column of \(A_{k-1}\) is positive, the vector \({\bf a}\) is also positive and \({\bf at}^{T}\) is a positive submatrix. Therefore, the glue of \(A_{k-1}\) with \(A_{k}\), \[C_{k}=\left[\begin{array}{cc}A_{1,k-1}&{\bf at}^{T}\\ {\bf sb}^{T}&A_{k}\end{array}\right],\] is a diagonalizable nonnegative matrix with its last column being positive. Note that \(C_{k}\) is also irreducible. Then \(C_{k}\) has, besides, a positive eigenvector, and from Lemma 3.1\(C_{k}\) is similar to a matrix with constant row sums and with its last column being positive. Thus, \(C_{k}\) is similar to a diagonalizable positive matrix. Then, Smigoc's glue applied to all matrices \(A_{k}\) gives an \(n\)-by-\(n\) diagonalizable irreducible nonnegative matrix \(A\) with a positive column and spectrum \(\Lambda\). Therefore, \(A\) is similar to a diagonalizable positive matrix with spectrum \(\Lambda\) and from the extension in [1]\(\Lambda\) is \({\cal U}{\cal R}\). Observe that if \(\lambda_{i}\neq 0\), \(i=2,\ldots,n\); \(s_{1}(\Lambda)>0\); \(b_{2}(A_{k})>0\) in Corollary 3.2, then we can guarantee the existence of an \(n\)-by-\(n\) diagonalizable nonnegative irreducible matrix \(A\) with spectrum \(\Lambda\) and a positive column. Thus, this is enough to show the universal realizability of \(\Lambda\). **Example 3.1**: _Consider the list_ \[\Lambda = \{23,-2,-2,-1\pm 5i,-1\pm 5i,-1\pm 5i,-2\pm 7i,-2\pm 7i\},\mbox{ with}\] \[\Gamma_{1} = \{23,-1\pm 5i\},\ \Gamma_{2}=\{21,-2,-1\pm 5i,-2\pm 7i\},\] \[\Gamma_{3} = \{13,-2,-1\pm 5i,-2\pm 7i\}.\] _The diagonalizable companion matrices_ \[A_{1} = \left[\begin{array}{cccc}0&0&598\\ 1&0&20\\ 0&1&21\end{array}\right],\ A_{2}=\left[\begin{array}{cccccc}0&0&0&0&0&57\,87 6\\ 1&0&0&0&0&35\,002\\ 0&1&0&0&0&6266\\ 0&0&1&0&0&1695\\ 0&0&0&1&0&69\\ 0&0&0&0&1&13\end{array}\right],\] \[A_{3} = \left[\begin{array}{cccccc}0&0&0&0&0&35\,828\\ 1&0&0&0&0&20\,618\\ 0&1&0&0&0&3194\\ 0&0&1&0&0&903\\ 0&0&0&1&0&5\\ 0&0&0&0&1&5\end{array}\right]\] realize lists \(\Gamma_{1},\Gamma_{2}\) and \(\Gamma_{3},\) respectively. Smigoc's glue technique applied to matrices \(A_{1},A_{2}\) and \(A_{3}\) gives a \(13\)-by-\(13\) diagonalizable irreducible nonnegative matrix with a positive column and spectrum \(\Lambda.\) Therefore, from Lemma 3.1 and [1], \(\Lambda\) is UR._ ## 4 The effect of adding a negative real number to a not UR list In this section we show how to add a negative real number \(-c\) to a list of complex numbers \[\Lambda=\{\lambda,-a\pm bi,\ldots,-a\pm bi\},\ \lambda,a,b>0,\ \mbox{with}\ s_{1}( \Lambda)>0,\] which is not \({\cal U}{\cal R}\) or we do not know whether it is, makes \[\Lambda_{c}=\{\lambda,-c,\underbrace{-a\pm bi,\ldots,-a\pm bi}_{(n-2)\ \mbox{ complex numbers}}\}\] \({\cal U}{\cal R}\). For instance, the list \(\Lambda_{1}=\{6,-1\pm 3i,-1\pm 3i\}\) is realizable, but we do not know whether it is \({\cal U}{\cal R},\) while \(\Lambda_{2}=\{17,-3\pm 9i,-3\pm 9i\}\) is not realizable. However, both lists become \({\cal U}{\cal R}\) if we add an appropriate negative real number \(-c\) to each of them. We start this section with a lemma which gives a formula to compute the coefficient \(b_{2}\) in (2), Lemma 2.1, for lists \(\Lambda_{c}\) **Lemma 4.1**: _Let_ \[\Lambda_{c}=\{\lambda,-c,\underbrace{-a\pm bi,\ldots,-a\pm bi}_{(n-2)\ \mbox{ complex numbers}}\}\] _be a realizable left half-plane lists of complex numbers and let \(\Lambda_{c}=\Lambda_{1}\cup\Lambda_{2}\cup\cdots\cup\Lambda_{t}\) be a decomposition of \(\Lambda_{c},\)\(-c\in\Lambda_{t},\) with auxiliary lists \(\Gamma_{k}\) with realizing companion matrices \(A_{k},\)\(k=1,2,\ldots,t,\) as in Corollary 3.1, associated with a desired JCF allowed by \(\Lambda_{c}.\) Then the entry in position \((n-1,n)\) of a matrix \(A_{k},\)\(k=1,2,\ldots,t,\) is_ \[b_{2}=p(2a\lambda-2a^{2}n+(4k-2p+1)a^{2}-b^{2})+c(\lambda-(n-2)a), \tag{3}\] _where \((k-1)\) is the number of pairs \(-a\pm bi\) of the last list \(\Gamma_{t}\) of the diagonalizable decomposition of \(\Lambda_{c},\) plus the number of pairs \(-a\pm bi\) of each previous list \(\Gamma_{k},\)\(k=1,\ldots,t-1,\) of the decomposition, and \(p\) is the number of pairs \(-a\pm bi\) of the corresponding list \(\Gamma_{k}.\) Moreover, \(b_{2}\) increases if \(k\) increases._ **Proof.** It is well known that \(b_{2}=\sum_{1\leq j_{1}<j_{2}\leq n}\lambda_{j_{1}}\lambda_{j_{2}},\) with \(\lambda_{ji}\in\Gamma_{k},\) from which \(b_{2}\) in (3) is obtained. Moreover it is clear that \(b_{2}\) increases when \(k\) increases. **Example 4.1**: _Consider_ \[\Lambda_{c}=\{\frac{77}{4},-3,\underbrace{-2\pm 5i,\ldots,-2\pm 5i}_{8\text{ complex numbers}}\}.\] _The last diagonalizable list from the diagonalizable decomposition of \(\Lambda_{c}\) is_ \[\Gamma_{4}:(x-\frac{29}{4})(x+3)(x+2-5i)(x+2+5i)\] _with realizing matrix_ \[A_{4} = \left[\begin{array}{cccc}0&0&0&\frac{2523}{4}\\ 1&0&0&\frac{841}{4}\\ 0&1&0&\frac{\mathbf{39}}{4}\\ 0&0&1&\frac{1}{4}\end{array}\right]\longrightarrow b_{2}(A_{4})=\frac{39}{4}\] \[b_{2} = p(2a\lambda-2a^{2}n+(4k-2p+1)a^{2}-b^{2})+c(\lambda-(n-2)a)\] \[b_{2}(A_{4}) = (4\frac{77}{4}-80+(8-2+1)4-25)+3(\frac{77}{4}-16)=\frac{39}{4}.\] _Suppose we want to obtain a nonnegative matrix with JCF_ \[\mathbf{J}=diag\{J_{1}(\frac{77}{4}),J_{1}(-3),J_{2}(-2+5i),(J_{2}(-2-5i)\}.\] _Then,_ \[\Gamma^{\prime}_{1} = \{\frac{77}{4},-2\pm 5i,-2\pm 5i\}\] \[\Gamma^{\prime}_{2} = \{\frac{45}{4},-3,-2\pm 5i,-2\pm 5i\}.\] _If \(A^{\prime}_{1},A^{\prime}_{2}\) are companion realizing matrices for \(\Gamma^{\prime}_{1}\) and \(\Gamma^{\prime}_{2},\) respectively, then from Lemma 4.1, \(b_{2}(A^{\prime}_{2})=\frac{103}{4},\)\(b_{2}(A^{\prime}_{1})=80\) guarantee that \(A^{\prime}_{1}\) and \(A^{\prime}_{2}\) are nonnegative. Next, the glue of \(A^{\prime}_{1}\) with \(A^{\prime}_{2}\) gives a nonnegative matrix with JCF \(\bf J\)._ **Theorem 4.1**: _Let \(\Lambda=\{\lambda,-a\pm bi,\ldots,-a\pm bi\},\) fixed \(\lambda,\)\(a,b>0,\) be a list of complex numbers with \(s_{1}(\Lambda)>0.\) If_ \[\frac{(2n-11)a^{2}+b^{2}}{2a}\leq\lambda, \tag{4}\] _and there is a real number \(c>0\) such that_ \[\frac{2a(na-\lambda)+b^{2}-7a^{2}}{\lambda-(n-2)a}\leq c\leq\lambda-(n-2)a, \tag{5}\] _then_ \[\Lambda_{c}=\{\lambda,-c,\underbrace{-a\pm bi,\ldots,-a\pm bi}_{(n-2)\text{ complex numbers}}\}\] _becomes universally realizable._ **Proof.** Consider the decomposition \(\Lambda_{c}=\Lambda_{1}\cup\Lambda_{2}\cup\cdots\cup\Lambda_{\frac{n-2}{2}},\) with \[\Lambda_{1} = \{\lambda,-a\pm bi\},\] \[\Lambda_{k} = \{-a\pm bi\},\ k=2,\ldots,\frac{n-4}{2},\] \[\Lambda_{\frac{n-2}{2}} = \{-c,-a\pm bi\}.\] We take the auxiliary sub-lists \[\Gamma_{1} = \Lambda_{1}=\{\lambda,-a\pm bi\}\] \[\Gamma_{2} = \{\lambda-2a,-a\pm bi\}\] \[\Gamma_{3} = \{\lambda-4a,-a\pm bi\}\] \[\vdots\] \[\Gamma_{\frac{n-4}{2}} = \{\lambda-(n-6)a,-a\pm bi\},\] \[\Gamma_{\frac{n-2}{2}} = \{\lambda-(n-4)a,-c,-a\pm bi\},\] where \(\Gamma_{\frac{n-4}{2}}\) and \(\Gamma_{\frac{n-2}{2}}\) are the spectrum of the diagonalizable companion matrices \[A_{\frac{n-4}{2}}=\left[\begin{array}{cccc}0&0&(a^{2}+b^{2})(\lambda-(n-6)a) \\ 1&0&2a\lambda-a^{2}(2n-11)-b^{2}\\ 0&1&\lambda-(n-4)a\end{array}\right]\] and \[A_{\frac{n-2}{2}}=\left[\begin{array}{cccc}0&0&0&(a^{2}+b^{2})(\lambda-(n-4) a)c\\ 1&0&0&(a^{2}+b^{2})(\lambda-(n-4)a)+(7a^{2}-b^{2}+2a\lambda-2a^{2}n)c\\ 0&1&0&(\lambda-(n-2)a)c+(7a^{2}-b^{2}+2a\lambda-2a^{2}n)\\ 0&0&1&\lambda-(n-2)a-c\end{array}\right],\] respectively. Observe that sub-lists \(\Gamma_{\frac{n-6}{2}},\ldots,\Gamma_{2},\Gamma_{1}\) have the same pair of complex numbers that the list \(\Gamma_{\frac{n-4}{2}},\) but with a bigger Perron eigenvalue. Then, if \(\Gamma_{\frac{n-4}{2}}\) is diagonalizably companion realizable, \(\Gamma_{\frac{n-6}{2}},\ldots,\Gamma_{2},\Gamma_{1}\) also are. Thus, from Lemma 2.1 we only need to consider the entries in position \((2,3)\) in \(A_{\frac{n-4}{2}}\) and in position \((3,4)\) in \(A_{\frac{n-2}{2}}.\) From (4) and (5) these entries are nonnegative and therefore \(A_{\frac{n-4}{2}}\) and \(A_{\frac{n-2}{2}}\) are diagonalizable companion realizing matrices. Thus, after applying \(\frac{n-4}{2}\) times Smigoc's glue to the matrices \(A_{1},\ldots,A_{\frac{n-2}{2}},\) we obtain an \(n\)-by-\(n\) diagonalizable nonnegative matrix \(A\) with spectrum \(\Lambda_{c}.\) Thus \(\Lambda_{c}\) is \(\mathcal{DR}.\) To obtain an \(n\)-by-\(n\) nonnegative matrix \(A\) with spectrum \(\Lambda_{c}\) and a nondiagonal JCF \(\mathbf{J},\) we take \(\Lambda_{c}=\Lambda_{1}\cup\cdots\cup\Lambda_{t}\) with auxiliary lists \(\Gamma_{k}\) being the spectrum of a companion matrix \(A_{k}\) with JCF as a sub-matrix of \(\mathbf{J}.\) Next we need to prove that all \(A_{k}\) are nonnegative. To do that, we compute \(b_{2}(A_{t})\) from the formula in (3), where \(A_{t}\) (with \(\Gamma_{t}\) containing \(-c\)) is the last diagonalizable matrix in the diagonalizable decomposition of \(\Lambda_{c}.\) From (4) and (5) \(b_{2}(A_{t})\geq 0.\) From Lemma 4.1 all \(b_{2}(A_{k}),\)\(k=1,\ldots,t-1,\) are nonnegative. Therefore the glue of matrices \(A_{k}\) gives an \(n\)-by-\(n\) nonnegative matrix \(A\) with the desired JCF \(\mathbf{J}.\) **Example 4.2**: \(i)\)_\(\Lambda=\{6,-1\pm 3i,-1\pm 3i\}\) is realizable by the companion matrix_ \[C=\left[\begin{array}{cccc}0&0&0&0&600\\ 1&0&0&0&140\\ 0&1&0&0&104\\ 0&0&1&0&0\\ 0&0&0&1&2\end{array}\right],\] _with a non-diagonal JCF. We do not know whether \(\Lambda\) has a diagonalizable realization. Then, consider the list_ \[\Lambda_{c}=\{6,-c,-1\pm 3i,-1\pm 3i\}.\] _Condition (4) is satisfied and from (5) we have \(1\leq c\leq 2.\) Then for \(c=2,\) we have that_ \[\Gamma_{1}=\{6,-1\pm 3i\},\ \ \Gamma_{2}=\{4,-2,-1\pm 3i\}\] _are the spectrum of diagonalizable nonnegative companion matrices_ \[A_{1}=\left[\begin{array}{ccc}0&0&60\\ 1&0&2\\ 0&1&4\end{array}\right],\ \mbox{and}\ \ A_{2}=\left[\begin{array}{ccc}0&0&0&80 \\ 1&0&0&36\\ 0&1&0&2\\ 0&0&1&0\end{array}\right],\] _respectively. Then, from Smigoc's glue we obtain a diagonalizable nonnegative matrix with spectrum \(\Lambda_{c}.\) It is clear that, from the characteristic polynomial associated to \(\Lambda_{c},\)\(\Lambda_{c}\) has also a companion realization \(A_{3},\)_ \[A_{3}=\left[\begin{array}{cccccc}0&0&0&0&0&1200\\ 1&0&0&0&0&880\\ 0&1&0&0&0&348\\ 0&0&1&0&0&104\\ 0&0&0&1&0&4\\ 0&0&0&0&1&0\end{array}\right],\] _with a JCF with blocks of maximum size. Note that the formula in (3) gives \((k=2,\)\(p=1,\)\(t=2)\)\(b_{2}(A_{2})=2,\) while \((k=3,\)\(p=2)\) gives \(b_{2}(A_{3})=4.\) Therefore \(\Lambda_{c}\) is \(\mathcal{UR}\). Observe that if \(1\leq c\leq 2,\) then_ \[\Lambda_{c}=\{6,-c,-1\pm 3i,-1\pm 3i\}\] _is also \(\mathcal{UR}.\)_ \(ii)\)_Consider the list \(\Lambda=\{17,-3\pm 9i,-3\pm 9i\}.\) Since \(s_{1}(\Lambda)=5\) and \(s_{2}(\Lambda)=1,\)\(\Lambda\) is not realizable. From condition (5), \(\frac{24}{5}\leq c\leq 5.\) Then for \(c=5,\)_ \[\Lambda_{c}=\{17,-5,-3\pm 9i,-3\pm 9i\}\] _is \(\mathcal{U}\!\mathcal{R}\). In fact,_ \[\Gamma_{1}=\{17,-3\pm 9i\}\text{ and }\Gamma_{2}=\{11,-5,-3\pm 9i\}\] _are the spectrum of diagonalizable nonnegative companion matrices, which from Smigoc's glue give rise to a diagonalizable nonnegative matrix with spectrum \(\Lambda_{c}.\) From the characteristic polynomial associated to \(\Lambda_{c}\) we obtain a nonnegative companion matrix with spectrum \(\Lambda_{c}\) and non-diagonal JCF. Therefore, \(\Lambda_{c}\) is \(\mathcal{U}\!\mathcal{R}.\)_ Observe that in Theorem 4.1, in spite that \(s_{1}(\Lambda)>0,\) if \(s_{1}(\Lambda)\) is small enough, there are lists \(\Lambda_{c},\) which are not \(\mathcal{U}\!\mathcal{R}\) or we cannot to prove they are from our procedure. However, from Theorem 4.1 we may compute a Perron eigenvalue \(\lambda,\) which guarantees that for a family of lists \(\Lambda_{c},\) with \(c>0\) and \(n\geq 6,\)\(\Lambda_{c}\) will be \(\mathcal{U}\!\mathcal{R}\). Then, the following result characterizes a family of left half-plane lists, which are \(\mathcal{U}\!\mathcal{R}\). **Corollary 4.1**: _The left half-plane lists of the family_ \[\Lambda_{c}=\{\frac{1}{2a}((2n-7)a^{2}+b^{2}),-c,\underbrace{-a\pm bi,\ldots, -a\pm bi}_{(n-2)\text{ complex numbers}}\},\] _with \(0<\sqrt{3}a<b,\)\(0<c\leq\frac{b^{2}-3a^{2}}{2a},\) are universally realizable.._ **Proof.** It is clear that for \(\lambda=\frac{1}{2a}\left((2n-7)a^{2}+b^{2}\right),\) conditions (4) and (5) in Theorem 4.1 are satisfied. Moreover, from \(0<\sqrt{3}a<b,\)\(\lambda-(n-2)a=\frac{b^{2}-3a^{2}}{2a}>0.\) Then, from Corollary 4.1 some left half-plane lists that are UR are: \[i)\ \Lambda_{c} = \{\frac{2n-3}{2}a,-c,\underbrace{-a\pm 2ai,\ldots,-a\pm 2ai}_{(n-2) \text{ complex numbers}}\},\text{ with }0<c\leq\frac{a}{2}\] \[ii)\ \Lambda_{c} = \{(n+1)a,-c,\underbrace{-a\pm 3ai,\ldots,-a\pm 3ai}_{(n-2) \text{ complex numbers}}\},\text{ with }0<c\leq 3a\] \[\cdot\] \[iii)\ \Lambda_{c} = \{\frac{2n+9}{2}a,-c,\underbrace{-a\pm 4ai,\ldots,-a\pm 4ai}_{(n-2) \text{ complex numbers}}\},\text{ with }0<c\leq\frac{13}{2}a\] \[iv)\ \Lambda_{c} = \{\frac{8n-3}{8}a,-c,\underbrace{-a\pm\frac{5}{2}ai,\ldots,-a\pm \frac{5}{2}ai}_{(n-2)\text{ complex numbers}}\},\text{ with }0<c\leq\frac{13}{8}a,\] and so on. Observe that in Corollary 4.1, if \(c\) is strictly less than its upper bound, then \(\Lambda_{c},\) as we have seen, can be realized by a diagonalizable matrix with its last column being positive. Then, from the extension in [1], \(\Lambda_{c}\) is \({\cal U}{\cal R}.\) ## 5 The merge of spectra Let \(\Gamma_{1}=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\) and \(\Gamma_{2}=\{\mu_{1},\mu_{2},\ldots,\mu_{m}\}\) be lists of complex numbers. In [5] the authors define the concept of the _merge of the spectra_\(\Gamma_{1}\) with \(\Gamma_{2}\) as \[\Gamma=\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n},\mu_{2},\ldots, \mu_{m}\},\] and prove that if \(\Gamma_{1}\) and \(\Gamma_{2}\) are diagonalizably ODP realizable, then the merge \(\Gamma_{1}\)_with_\(\Gamma_{2},\) is also diagonalizably ODP realizable, and therefore from the extension in [4], \(\Gamma\) is \({\cal U}{\cal R}\). Here we set a similar result as follows: **Theorem 5.1**: _Let \(\Gamma_{1}=\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\},\)\(\lambda_{1}>\left|\lambda_{i}\right|,\)\(i=2,\ldots,n,\) be the spectrum of a diagonalizable nonnegative \(n\)-by-\(n\) matrix \(A\in{\cal C}{\cal S}_{\lambda_{1}}\) with its last column being positive. Let \(\Gamma_{2}=\{\mu_{1},\mu_{2},\ldots,\mu_{m}\},\)\(\mu_{1}>\left|\mu_{i}\right|,\)\(i=2,\ldots,m,\) be the spectrum of a diagonalizable nonnegative \(m\)-by-\(m\) matrix \(B\in{\cal C}{\cal S}_{\mu_{1}}\) with its last column being positive. Then_ \[\Gamma=\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n},\mu_{2},\ldots, \mu_{m}\}\] _is universally realizable.._ **Proof.** Let \(A\in{\cal C}{\cal S}_{\lambda_{1}}\) be a diagonalizable nonnegative matrix with spectrum \(\Gamma_{1}\) and with its last column being positive. Then \(A\) is similar to a diagonalizable positive matrix \(A^{\prime}.\) If \(\alpha_{1},\ldots,\alpha_{n}\) are the diagonal entires of \(A^{\prime},\) then \[A_{1}=A^{\prime}+{\bf e}[0,0,\ldots,\mu_{1}]=\left[\begin{array}{cc}A^{ \prime}_{11}&{\bf a}\\ {\bf b}^{T}&\alpha_{n}+\mu_{1}\end{array}\right]\in{\cal C}{\cal S}_{\lambda_ {1}+\mu_{1}}\] is diagonalizable positive with spectrum \(\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n}\}\) and diagonal entries \(\alpha_{1},\alpha_{2},\ldots,\alpha_{n}+\mu_{1}.\) Let \(B\in{\cal C}{\cal S}_{\mu_{1}}\) be a diagonalizable nonnegative matrix with spectrum \(\Gamma_{2}\) and with its last column being positive. Then \(B\) is similar to a diagonalizable positive matrix \(B^{\prime}\) and \[B_{1}=B^{\prime}+{\bf e}[\alpha_{n},0,\ldots,0]\] is diagonalizable positive with spectrum \(\{\mu_{1}+\alpha_{n},\mu_{2},\ldots,\mu_{m}\}.\) Now, by applying the Smigoc's glue to matrices \(A_{1}\) and \(B_{1}\), we obtain a diagonalizable positive matrix \(C\) with spectrum \(\Gamma.\) Hence, \(\Gamma\) is \(\mathcal{UR}\) Theorem 5.1 is useful to decide, in many cases, about the universal realizability of left half-plane list of complex numbers, as for instance: **Example 5.1**: _Is the list_ \[\Gamma=\{30,-1,-5,-1\pm 3i,-1\pm 3i,-1\pm 3i,-3\pm 9i,-3\pm 9i\}\ \mathcal{UR}\text{?}\] _Observe that from the results in Section \(4,\)_ \[\Gamma_{1} = \{21,-5,-3\pm 9i,-3\pm 9i\}.\] \[\Gamma_{2} = \{9,-1,-1\pm 3i,-1\pm 3i,-1\pm 3i\}\] _are the spectrum of a diagonalizably nonnegative matrix with constant row sums and a positive column (the last one). Then, they are similar to diagonalizable positive matrices and from Theorem 5.1, the merge \(\Gamma\) is also the spectrum of a diagonalizable positive matrix. Therefore, \(\Gamma\) is \(\mathcal{UR}.\)_
2305.04504
The Unified Effect of Data Encoding, Ansatz Expressibility and Entanglement on the Trainability of HQNNs
In this paper, we propose a framework to study the combined effect of several factors that contribute to the barren plateau problem in quantum neural networks (QNNs), which is a critical challenge in quantum machine learning (QML). These factors include data encoding, qubit entanglement, and ansatz expressibility. To investigate this joint effect in a real-world context, we focus on hybrid quantum neural networks (HQNNs) for multi-class classification. Our proposed framework aims to analyze the impact of these factors on the training landscape of HQNNs. Our findings show that the barren plateau problem in HQNNs is dependent on the expressibility of the underlying ansatz and the type of data encoding. Furthermore, we observe that entanglement also plays a role in the barren plateau problem. By evaluating the performance of HQNNs with various evaluation metrics for classification tasks, we provide recommendations for different constraint scenarios, highlighting the significance of our framework for the practical success of QNNs.
Muhammad Kashif, Saif Al-Kuwari
2023-05-08T06:57:42Z
http://arxiv.org/abs/2305.04504v1
The Unified Effect of Data Encoding, Ansatz Expressibility and Entanglement on the Trainability of HQNNs ###### Abstract In this paper, we propose a framework to study the combined effect of several factors that contribute to the barren plateau problem in quantum neural networks (QNNs), which is a critical challenge in quantum machine learning (QML). These factors include data encoding, qubit entanglement, and ansatz expressibility. To investigate this joint effect in a real-world context, we focus on hybrid quantum neural networks (HQNNs) for multi-class classification. Our proposed framework aims to analyze the impact of these factors on the training landscape of HQNNs. Our findings show that the barren plateau problem in HQNNs is dependent on the expressibility of the underlying ansatz and the type of data encoding. Furthermore, we observe that entanglement also plays a role in the barren plateau problem. By evaluating the performance of HQNNs with various evaluation metrics for classification tasks, we provide recommendations for different constraint scenarios, highlighting the significance of our framework for the practical success of QNNs. quantum machine learning; entanglement; data encoding; quantum neural networks; trainability + Footnote †: journal: ## 1 Introduction The quest to build practical quantum computers has intensified over the last few years. Several quantum devices, with around one hundred qubits, have already been developed. These devices are known as noisy intermediate-scale quantum (NISQ) devices [1]. Although these devices are limited and susceptible to errors, they demonstrate clear advantage for specific applications as compared to the best existing classical computers [2, 3, 4, 5]. As the NISQ devices require quantum routines of shallow depth and robustness against noise, a hybrid design space integrating classical and quantum processing has become a leading approach to realize the potential of quantum computing for a wide range of applications [6]. In a hybrid design space context, variational quantum algorithms (VQAs) are the most popular class of algorithms. These algorithms utilize NISQ devices for evaluating the objective function through parameterized quantum circuits (PQCs) and classical devices for function optimization with respect to the target application. The VQAs have been studied for a wide range of applications, including quantum chemistry [7], state diagonalization [8], factorization [9], quantum optimization [10], and quantum field theory simulation [11, 12]. Furthermore, these algorithms have also been studied in the context of noise resilience [13], trainability [14, 15, 16] and computational complexity [17, 18]. In other words, VQAs closely resemble machine learning (ML) algorithms as they also train a computer to learn patterns [19]. Therefore, VQAs have been proposed as a quantum analog of various ML algorithms [10, 20, 21, 22, 23, 24]. Consequently, the new field of quantum machine learning (QML) has emerged by merging quantum computation and ML. In recent years, a number of PQCs have been proposed for QML applications [25]. Amongst them are the quantum neural networks (QNNs), which have extensively been explored [26, 27, 28, 29, 30, 31, 32, 33] as quantum extensions of classical deep neural networks (DNNs). The basic building block in QNNs is quantum perceptron, which has been proposed in different ways [34, 35, 36, 37, 38, 39, 40]. To this end, PQCs became a promising quantum analog of artificial neurons [41, 42, 43]. Given the NISQ era limitations, hybrid quantum neural networks (HQNNs) are commonly used to analyze the potential quantum advantage in QNNs. HQNNs replicate the QNN's architecture by enclosing a typical QNN in some classical input pre- and output post-processing. The input preprocessing typically aims to downsize the input to cope with the limitation of NISQ devices (mainly in number of qubits), and is done via a classical neuron layer with fewer neurons or some dimensionality reduction algorithm. The output postprocessing is performed to interpret the output of enclosed QNN output in a meaningful way. The postprocessing is usually done via a classical neuron layer at the end, which also allows to apply the familiar non-linear activation functions to get the final output/prediction. Although QNNs are being extensively explored for various applications, the literature still lacks solid and concrete statements about their quantum advantage [44, 45, 46]. ### Barren Plateaus One of the most challenging problems in HQNNs is the phenomenon of barren plateaus (BP) [14, 15, 16, 47]. In BP, the cost function landscapes during HQNN's training become exponentially flat with an increase in the system size. In other words, the gradients of parameters subject to optimization vanish exponentially as a function of the number of qubits. This implies that the existence of BP in HQNN's training landscapes affects their _trainability_1, resulting in a significant performance degradation and consequently limiting HQNN's applicability in practice. Footnote 1: The trainability essentially ensures that the objective function is optimized after every training iteration until the model convergence. It has been demonstrated in [14] that a sufficiently random ansatz (a quantum subroutine consisting of a sequence of gates applied to specific wires) will experience the BP if its uniform unitaries distribution matches up to the second moment, i.e., it forms a unitary 2-design. Therefore, the choice of ansatz is central to the success of hybrid quantum-classical algorithms. Some frequently explored ansatzes in this regard are the Hamiltonian variational ansatz [48], coupled cluster ansatz [49, 50, 51], quantum alternating operator ansatz [10, 52] and hardware-efficient ansatz [53]. Ideally, the ansatz is required to be both trainable and expressible to reach an optimal solution. The _expressibility_ of the ansatz is specifically desired so that it can provide an accurate approximation to the solution. Simultaneously, the training landscapes also need to have accessible-enough features to find the solution. The _expressibility_ of a quantum ansatz implies how uniformly the given ansatz can explore the unitary space [54]. The authors in [54] extend the original work reporting BP [14] to ansatz expressibility. They propose the idea of problem-inspired ansatz rather than the hardware-efficient ansatz, which are of significant importance in the NISQ era. The expressibility of ansatz plays a significant role in overcoming BP to a certain extent. Ansatz with greater expressibility are more susceptible to BP and vice versa. Similarly, _deep ansatz_ are considered to be more expressible [14]. Moreover, ansatz expressibility can directly be derived from the nature of the target problem (more expressible ansatz for complex problems and vice versa). In addition to ansatz expressibility, the problem of BP in HQNNs may also arise due to the type of entanglement used in PQCs [55], the noise levels in high-depth quantum circuits [1, 47, 56, 57, 58] and how the data is being encoded into the PQCs [43]. Entanglement is a fundamental property of quantum mechanics and is a key to constructing expressible quantum circuits. However, it can also be a potential source of BP, as discussed in [55]. Data encoding is also a crucial step in HQNNs and is often considered to be the performance bottleneck, and it can also affect the trainability and expressive of HQNNs [59]. ### Research Gap As discussed in Section 1.1, data encoding, ansatz expressibility, and the entanglement are simultaneously used in HQNNs. While the BP dependence on all these concepts has separately been explored in different studies [43, 54, 55, 59], to the best of our knowledge, their joint (holistic) effect (with respect to each other) on the trainability of QNNs from the aspect of BP has not been investigated yet. Furthermore, existing work focuses on the standalone mathematical implementation (formulation or modeling) of QNNs, whereas the HQNNs are more relevant for practical applications on NISQ devices, which allows us to experiment with real-world datasets. Therefore, a framework for HQNNs that allows to simultaneously analyze the effect of data encoding, ansatz expressibility, and the entanglement between qubits on the trainability of HQNNs is needed. ### Proposed Framework In this paper, we propose a framework to perform an empirical analysis (based on data obtained from experiments) of the joint effect of data encoding, ansatz expressibility, and entanglement in HQNNs with respect to each another. We typically investigate the effects of aforementioned concepts in feed-forward HQNNs for a hardware-efficient periodic ansatz structure for a real-world application (multi-class classification). An abstract illustration of our analysis is depicted in Figure 1. The classical to quantum feature mapping is achieved via the two frequently used data encoding strategies, called amplitude encoding and angle encoding. For the PQC, we consider two similar ansatz structures (to be used as hidden quantum layers in HQNN) differentiated only by the inclusion/removal of entanglement, which we name as entangled ansatz (shown as Nearest Neighbour Entanglement in Figure 1) and unentangled ansatz (shown as No Entanglement in Figure 1). Both the ansatz structures are separately experimented for each of the encoding scheme. We consider different width (\(n\)) of underlying quantum layers for the analysis of BP in HQNNs. Moreover, different depth (\(n\)) of quantum layers are considered for all the widths to analyze how the ansatz expressibility plays a role in trainability of HQNNs for both the ansatzes. We benchmark accuracy and loss convergence as evaluation metrics for the models used in this article. The HQNNs (with certain \(n\) and \(m\)) achieving higher accuracy with faster and smoother convergence to the optimal solution are considered better and hence yields better performance. In addition to the accuracy and loss convergence, we also evaluate the HQNNs for some additional performance parameters. These additional performance parameters are precision, recall score and and F1-score. ### Contribution We perform an extensive list of experiments to study the combined effect of data encoding, ansatz expressibility and entanglement between qubits on the trainability of HQNNs on a real world dataset. We typically use handwritten digit dataset from _sklearn_[60]. The reason behind selecting this particular dataset over other famous similar datasets like MNIST [61], is because of the smaller image size which is more suitable for NISQ devices. Based on the achieved results, we observe that the problem of BP arises in HQNNs also, as the number of qubits increase, resulting in performance degradation. Furthermore, the occurrence of BP is dependent on the ansatz depth (expressibility) irrespective of the data encoding strategy. Consequently, we perform the trainability and expressibility analysis for both the data encodings first for entangled and then unentangled ansatz. This analysis provides an idea about the appropriate depth of quantum layers for the given width, when working with real-world applications. Moreover, the obtained results also provide an idea about which encoding strategy is slightly more advantageous (from BP and trainability aspect) than the other. In addition, we also observe that the entanglement between the qubits in quantum layers also plays a role in the trainability of HQNNs. However, it's impact (positive or negative) on the overall performance of underlying model is dependent on how the data is being encoded. Moreover, we also evaluate the HQNNs in terms of different evaluation metrics for classification applications (precision, recall and F1-score), which signifies the relevance of HQNNs in real-world applications. Lastly, we illustrate the significance of our proposed framework by considering different constraint scenarios on the primary components of HQNNs (data encoding, ansatz expressibility and entanglement inclusion/removal),and provide recommendations for the optimal set of parameters. It is important to note that the mathematical analysis of individual performance parameters (such as data encoding, ansatz expressibility, and the entanglement) exist in literature [43, 54, 55, 59]. Therefore, the mathematical formulation of various perfor Figure 1: An Overview of Proposed Methodology mance parameters in terms of trainability is out of the scope of this paper. However, a combined effect of all these non-trivial components (performance parameters) of HQNNs for a practical application has not yet been explored. One possible reason behind the lack of such a unified analysis is that it is challenging to theoretically analyze the effect of all these concepts (with respect to each other) simultaneously in a single framework. However, experimental investigation can provide the leverage of such unified analysis. Therefore, we attempt to experimentally/empirically analyze their joint (holistic or simultaneous) effect from practical viewpoint. ### Organization The rest of the paper is organized as follows: Section 2 provides the necessary background of HQNNs along with its main components. The state-of-the-art on potential solutions and analysis of the BP problem in HQNNs is discussed Section 3. The detailed methodology for the framework development of HQNNs analysis is presented in Section 4. The experimentation details including the list of experiments performed is discussed in Section 5. The analysis of proposed framework based on the obtained results is discussed in Section 6. Section 7 illustrates the significance of our proposed framework for various design constraints of HQNNs. Finally, Section 8 concludes the paper. ## 2 Hybrid Quantum Neural Networks In this section, HQNNs are discussed in analogy with classical deep neural networks. Furthermore, the main components of HQNNs in correspondence with what is used in this paper, i.e., data encoding, observable measurement and cost function details are discussed in Section 2.1 and Section 2.2 respectively. Lastly, the ansatz trianability and expressibility are discussed in section 2.3. In a typical setting of classical DNNs (Figure 1(a)), the first step is to map the input data to the feature space via feature embedding layers \(F_{x}(.)\). The embedded data is then trained through fully connected neuron layers \(\prod_{l}W_{l}(.)\) to learn the inherent relationships between the input and output of a particular dataset. The number of layers and neurons in each layer can be customized according to the complexity of the target application. The quantum counterpart of DNNs (i.e. QNNs) has recently attracted attention due to the tremendous success of DNNs, and improved computation power QNNs may offer Figure 2: Schematic Illustration of DNN and QNN [62]. QNNs have a structure similar to DNNs, as shown in Figure 1(b). Analogous to other QML algorithms, QNNs also exploit PQCs, which are parameters (classically optimizable) dependent quantum circuits. Combining both architectures results in HQNN. The HQNNs work in five steps [63]: (1) Input Downscaling: The first step in HQNNs working is input downscaling to cope with the limitations NISQ devices mainly in number of qubits. (2) Feature mapping: Classical data points \(x\) are mapped to \(n-\)qubit quantum state \(\ket{\psi}\) represented by Equation 1, where \(S(x)\) is the mapping function. (3) Training: The prepared quantum state is processed and trained using an ansatz \(U\) via a series of single and multi-qubit unitaries as shown in Equation 2. Rotation angle in the ansatz is parameterized by vector \(\theta\). (4) Measurement: The quantum state is then measured and the corresponding eigenvalue of the measurement observable \(O\) is obtained as shown in Equation 3. (5) Classical postprocessing of the observable: the quantum state measurement results in a classical value and hence can be processed by a classical device. The postprocessing of measurement results is commonly done via a classical neuron layer. It also allows to apply a familiar non-linear activation functions like SoftMax, and an optimization routine to minimize the cost function. The steps \(2-4\) mentioned above forms a typical architecture of QNN as shown in Figure 1(b). Hence in a typical HQNN architecture, the QNN is completely replicated, making the analysis of quantum part (step \(2-5\) above) in HQNNs valid for QNN also. \[\ket{\psi(x)}=S(x)\ket{0}^{\otimes n} \tag{1}\] \[\ket{\phi(x,\theta)}=U(\theta)\ket{\psi(x)} \tag{2}\] \[f(x,\theta)=\bra{\phi(x,\theta)}O\ket{\phi(x,\theta)} \tag{3}\] Generally, in QML, the underlying PQCs are iteratively executed for an input \(x\) and a parameter vector \(\theta\) to approximate its expectation value because of the probabilistic nature of quantum computation. In QNNs, this expectation value is usually considered as the output of the network [63]. In the following subsections, we provide a brief overview of different parts of HQNNs in correspondence with the approaches we use in this work. ### Data Encoding In QNNs, inputting the data in a way that quantum circuit can process has been a pressing challenge, and is often termed data encoding. The data encoding can be considered as a feature map that maps an input feature \(x\) to the quantum system's Hilbert space, thus creating a quantum state \(\ket{\psi_{x}}\)[64]. The encoding process is a crucial step while designing quantum algorithms and can significantly effects their performance[64, 65, 66, 27]. In practice, the transformation (\(x\longrightarrow\ket{\psi_{x}}\)) is typically achieved through a unitary transformation (\(S_{x}\)), implemented using a variational circuit, whose parameters are dependent on the input data being encoded [64, 27]. The circuit (\(S_{x}\)) then acts on the initial state \(\ket{\psi}\), which is usually a ground state, i.e., \(\ket{\psi}=0^{\otimes n}\). The encoding is then realized as in Equation 4. \[x\mapsto E(x)=S_{x}\ket{\phi}\bra{\phi}S_{x}^{\dagger}=\ket{x}\bra{x}=:\rho_{x} \tag{4}\] However, the transformation circuit \(S_{x}\) is required to be hardware-efficient to accommodate the limitations imposed by the NISQ regime. Several encoding strategies have recently been proposed [64, 66, 67]. However, in HQNNs, the frequently used ones are amplitude and angle encoding. #### 2.1.1 Amplitude Encoding In amplitude encoding, data is encoded into quantum state amplitudes. For \(x\in\mathbb{R}^{n}\), the amplitude encoding maps \(x\longrightarrow E(x)\) into the amplitudes of an \(n-\)qubit quantum state as shown in the Equation below: \[\ket{\psi_{x}}=\sum_{i=1}^{N}x_{i}\ket{i} \tag{5}\] where \(N=2^{n}\), \(x_{i}\) is the \(i^{th}\) element of \(x\) and \(\ket{i}\) is the \(i^{th}\) computational basis state. For a classical dataset \(\mathcal{D}\) with \(M\) examples and \(N\) features as shown in Equation below: \[\mathcal{D}=\{x^{(1)},\ldots x^{(m)},\ldots x^{(M)}\} \tag{6}\] where \(x^{(m)}\) is an \(N-\)dimensional feature vector for \(m=1,\ldots M\). The amplitude can then easily be understood by concatenating \(x^{(m)}\) in a single vector as follows: \[\alpha=\mathcal{C}_{norm}=\{x_{1}^{(1)},\ldots,x_{N}^{(1)},x_{1}^{(2)},\ldots, x_{N}^{(2)},x_{1}^{(M)},\ldots,x_{N}^{(M)}\} \tag{7}\] The factor \(\mathcal{C}_{norm}\) is the normalization factor and must be normalized such that \(|\alpha|^{2}=1\). Then the input can be represented in the computational basis using the following Equation: \[\ket{\mathcal{D}}=\sum_{i=1}^{2^{n}}\alpha_{i}\ket{i} \tag{8}\] where, \(\alpha_{i}\) represents amplitude vector elements (\(\alpha\)) and \(\ket{i}\) are the computational basis states. The total number of amplitudes being encoded are \(N\times M\). A system of \(n-\)qubits can encode \(2^{n}\) features. Therefore, amplitude encoding requires \(n\geq log_{2}(NM)\). The constants can be padded if the total number of features being encoded are greater than \(2^{n}\)[65]. #### 2.1.2 Angle Encoding Angle encoding, also called qubit encoding, encodes the input data features into rotation angle of qubits and has been used in various QML algorithms [65, 68, 69]. For a feature vector \(x=[x_{1},x_{2},\ldots x_{N}]^{T}\in\mathcal{X}^{N}\), the following Equation typically represents the angle encoding. \[\ket{x}=\bigotimes_{i=1}^{N}cos(x_{i})\ket{0}+sin(x_{i})\ket{1} \tag{9}\] The angle encoding encodes \(N\) features into an \(n-\)qubit system. The state preparation unitary for a single feature per qubit in qubit encoding is represented by the following Equation: \[S_{x_{j}}=\bigotimes_{i=1}^{N}U_{i}\;\;where\;\;U_{i}:=\begin{bmatrix}\cos(x_{ j}^{(i)})&-\sin(x_{j}^{(i)})\\ \sin(x_{j}^{(i)})&\cos(x_{j}^{(i)})\end{bmatrix} \tag{10}\] The angle encoding approach can also be generalized to encode two features in a single qubit; this is called dense qubit encoding [67]. However, in this work, we adopt simple qubit encoding. ### Observable Measurement and Cost Function The qubits can be measured in different measurement bases. In this work, we use the eigenbasis of \(\sigma^{z}\) for the expectation value of our PQC. For an \(n-\)qubit system, the observable measurement in \(\sigma^{z}\) basis can be described as the tensor product of \(n\) Pauli-Z matrices i.e., \(O=\sigma^{\otimes n}\). The \(\sigma^{z}\) observable returns \(-1\) for odd parity quantum state and \(1\) for even parity, keeping the overall expectation value of PQC in the range \([-1,1]\). The training of quantum layers HQNNs is subjected to finding the parameter vector \(\theta\) that minimizes the loss after every training iteration. Since we consider a multi-class classification problem, we use sparse categorical cross entropy as a cost function, as shown in the Equation below: \[Cost=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}log(\hat{y_{i}})+(1-y_{i})log(1-\hat{y_{i }})] \tag{11}\] where, \(y_{i}\) are true labels and \(\hat{y_{i}}\) are predictions. Classical optimization techniques, like gradient descent, can be used for the optimization of the cost function, which simply takes the partial derivatives of each parameter and decide the next minimum direction. However, the output of PQC, i.e. expectation value of measurement observable, needs to be differentiated for every parameter in the variational ansatz. The expectation value of a PQC can be differentiated with respect to each parameter using the parameter shift rule (Equation 12), first introduced for QML algorithms in [31] and extended in [70]. \[\frac{df}{d\theta_{i}}=\frac{f(\theta_{i}+s)-f(\theta_{i}-s)}{2} \tag{12}\] where, \(s\) is the macroscopic shift and is determined by the corresponding gate's eigenvalue, which is parameterized by \(\theta_{i}\). ### Ansatz Trainability and Expressibility The ansatz can be thought of as a PQC consisting of single-qubit parameterized gates. It may or may not contain multi-qubit (entangling gates) depending on the target problem. These parameterized gates are dependent on adjustable parameters. In the context of HQNNs (or QML in general), these parameters are trained in data-driven tasks, which is analogous to the case of classical NNs. The network is said to be trainable as long as the optimization algorithm is able to minimize the loss (as per the defined cost function) in every training iteration. The trainability of the network is then compromised when the gradients of parameters are not accessible for further optimization, and the optimization algorithm can not reach the optimal solution. PQC can be considered expressive, if it can be exploited to uniformly explore the unitary group \(\mathcal{U}(d)\). Thus, the expressiblity of PQC can be defined in terms of the following super-operator [54]. \[\mathcal{A}_{\mathbb{U}}{}^{(t)}(.):=\int_{\mathcal{U}(d)}d\mu(V)V^{\otimes t} (.)(V^{\dagger})^{\otimes t}-\int_{\mathbb{U}}dUU^{\otimes t}(.)(U^{\dagger}) ^{\otimes t} \tag{13}\] where \(d\mu(V)\) is the volume element of Haar measure and \(dU\) is the volume element corresponding to to uniform distribution over \(\mathbb{U}\). If \(\mathcal{A}_{\mathbb{U}}{}^{(t)}(X)=0\) for all operators \(X\) then averaging over elements of \(\mathbb{U}\) agrees with averaging over Haar distribution up to the \(t\)-th moment. In this case, \(\mathbb{U}\) forms a _t-design_. ## 3 Related Work In this section, we provide details of recent state-of-the-art solutions (mainly inspired by the BP problem in classical NNs) to potentially overcome the issue of BP in QNNs aiming to enhance their trainability. Additionally, some potential sources of BP in QNNs training landscapes are also discussed, where the BP problem is analyzed from the aspect of underlying ansatz. The BP phenomenon in QNNs was first studied in [14], where the random PQCs were initialized, and then the variance of partial derivatives was calculated i.e., \(var[\partial C]=(\partial C^{2})-(\partial C)^{2}\). The authors then show that for deep circuits of order \(poly(n)\), the variance exponentially vanishes with the number of qubits i.e., \(var[\partial C]=\frac{1}{2^{n}}\), due to the fact that the circuit forms 2-design (sample all the unitaries in Hilbert space). Unlike the classical case, where a gradient-based backpropagation algorithm solves this trainability problem, its quantum analogous solution is challenging to implement [71]. In HQNNs, the PQC is run on a quantum device, whereas its optimization is performed classically. The composition of two fundamentally different computation approaches makes it challenging to implement the backpropagation algorithms in QNNs. Recently, the problem of BP in QNNs has caught significant attention. While some proposed solutions to potentially overcome the BP problem, others analyze the fundamental structure of QNNs to determine the parameters which give rise BP problem. In the following sections, we present some recent state-of-the-art focusing on tackling and analyzing the issue of BP in QNNs. ### Potential Solutions of BP Several solutions were recently proposed to potentially overcome the BP problem in QNNs. In [68], a small portion of the quantum circuit is initialized randomly while the remaining parameters are carefully chosen to implement the identity operation as a whole. This approach avoids the initialization on a plateau only for the first training step, and the learnability is still affected during the subsequent training iterations. Similarly, in [72], a strategy to avoid BP was introduced by enforcing the assignment of multiple parameters in the circuit to reduce the total number of parameters subject to training. However, this approach limits the optimization process to a particular set of parameters and eventually increases the circuit depth for convergence. Inspired by the layer-wise training of classical NNs, which were shown to potentially prevent the BP caused by random initialization in classical NNs [73], a similar approach has also recently been used in QNNs [71]. The layer-wise training approach in QNNs focuses on training a small subset of parameters in each training iteration by incrementally increasing the circuit depth, which results in larger gradients magnitude because of a smaller number of parameters as compared to training the complete circuit. A novel approach for mitigating BP in QNN is recently proposed in [74], where the residual approach from classical NNs is exploited in QNNs. The study demonstrates that incorporating the residual approach in QNNs leads to a significant enhancement in their training performance. Since BP is fundamentally the problem of vanishing gradients, it may seem that gradient-free optimization approaches can help overcome this issue. However, it has recently been proved that even gradient-free optimization cannot escape the BP issue in QNNs [75]. In fact, it was shown that the cost function differences (deciding factor to make optimzation decisions in gradient-free approaches) are exponentially suppressed in BP. ### Analysis of BP BP is a problem corresponding to the cost function landscapes, where the partial derivatives of parameters become exponentially flat with system size. As mentioned earlier, the initial study discusses the occurrence of BP for _deep ansatz_, which usually is believed to be more expressible. A recent study investigating the existence of BP for shallow circuits in comparison with deep circuits is in [76]. The authors show that by making BP dependent on the cost function, it can be extended to shallow circuits. To this end they study two cost functions namely: local cost function (measuring single qubit in a multi-qubit systems) and global cost function (measuring all qubits in a multi-qubit systems). The authors then conclude that for the global cost function, the QNNs will experience BP irrespective of underlying PQC's depth (\((\mathcal{O}(1),\mathcal{O}(log(n)),\mathcal{O}(poly(log(n))),\mathcal{O}(poly (n)))\)). On the other hand, in the case of local cost function, the gradients vanish at worst polynomially and are therefore trainable up to a depth of order \((\mathcal{O}(log(n)))\). The BP starts appearing for depth of order \((\mathcal{O}(poly(n)))\) for the local cost function and in between these regions, there is a transition region where gradients decay from polynomial to exponential. In a recent study conducted by [77], a follow-up investigation was conducted on the globality and locality of cost functions in real-world applications. The study posits that, in the context of multi-class classification, the use of a global cost function results in significantly improved performance as compared to local cost functions. However, for binary classification, both global and local cost functions demonstrate similar levels of effectiveness. These findings suggest that the choice of cost function must be carefully considered when designing classification models for multi-class scenarios, whereas in the case of binary classification, either approach may be equally viable. Recent work [54] analyzed the ansatz expressibility to gradient magnitudes and showed that the more expressible the ansatz is, the more likely it is to have a BP. On the other hand, ansatz with relatively lower expressibility would have a delayed BP up to a certain depth, but in principle, expressible ansatz is still favorable because they might provide solutions for multiple problems as compared to less expressible ansatz, which would be problem specific. Therefore, the greater expressibility in deep ansatzes makes them more susceptible to experiencing BP during training. Excess entanglement in the hidden layers of QNNs can also cause BP and hinder the learning process [55]. Exploiting the volume law from quantum thermodynamics in [55], the authors show the existence of BP in the cost function landscape for both gradient-based and gradient-free optimization approaches. This observation was made for both feedforward QNNs and quantum Boltzmann machines. Moreover, the way of encoding data into the QNN, can also give rise to BP [43]. In this paper, we do not intend to propose a solution for BP problem. Instead, we focus on the analysis of various components of HQNNs inline with section 3.2. ## 4 Methodology This section presents the detailed methodology to obtain the relevant results for the proposed analysis. We start with providing the details of dataset used for training the HQNNs and how is it preprocessed to cope up with the limitations of NISQ devices. Afterwards, the details of QNNs construction right from qubit initialization to final measurement are presented. Finally, the discussion on methodology is concluded by providing the details on the classical postprocessing of QNN results. The typical architecture of HQNN for the unified analysis of joint effect of Data Encoding, Ansatz Expressibility, and Entanglement on the Trainability, used in this paper, is depicted in Figure 3. The HQNN architecture completely replicates QNN's architecture from Figure 1(b). We call the architecture (used in this paper) hybrid Figure 3: Proposed Methodology for the unified analysis of HQNN because of two reasons; 1) the optimization is classically performed, which is also the case in general standalone QNNs (in NISQ regime). Secondly, we add a classical neuron layer at the end of the quantum layer(s). The advantage of the classical layer is that it allows the application of familiar non-linear activation functions from traditional ML. Furthermore, it enables to experiment on real-world datasets. The HQNN used in this paper have three primary ingredients namely; data preparation, QNN construction, and classical post-processing. Here, we present the typical workflow of HQNN used in this paper. In particular we present the details of every step of HQNN, from input to the output. ### Data Preparation The first step in the workflow of HQNN used in this paper is to prepare the data such that it can be efficiently encoded into quantum states for further processing. The data preparation here is in context of the limitations of NISQ devices (mainly in number of qubits). Depending upon the size of input data and choice of encoding scheme, it is sometimes required to reduce the input features dimmension. In such a case, the input (image(s) in our case) is first passed to the dimensionality reduction algorithm before encoding. We also use dimensionality reduction for one of the encoding schemes to cope up with the restrictions of NISQ era, details of which are presented in the following section, where we explain data encoding used in this paper as part of QNN construction. ### QNN Construction Once the data is prepared, the next important step in HQNN workflow is the construction of QNN, which again has four ingredients namely; qubit initialization, data encoding, unitary evolution and qubit measurement. #### 4.2.1 Qubit Initialization It is the first step of QNN construction, which typically defines the total number of qubits or in other words, the width of quantum layers. A qubit can be initialized in any random state. However, we initialize all the qubits in the default ground state i.e., \(|\psi\rangle^{\otimes n}=|0\rangle^{\otimes n}\), which is a relatively common practice. #### 4.2.2 Data Encoding Once the qubits are defined, the next step is the data encoding (classical to quantum feature mapping), such that the subsequent quantum layer(s) 2 can process the input. We use two frequently used data encoding strategies in QNNs, i.e., amplitude and angle encoding. As discussed above the limited number of qubits in NISQ devices enforces to reduce the input feature dimension, which depends on the input feature size and the way data is being encoded in to quantum states. The input feature dimension for the dataset we have used is 64 (details in section 5). In context of the data encoding used in this paper, we reduce the input feature dimensions every time the angle encoding is used because angle encoding needs \(n\) qubits to encode \(n\) features (see section 2.1). For amplitude encoding, the input features dimensionality reduction is not performed because the amplitude encoding can encode \(2^{n}\) features in \(n\) qubits (see section 2.1). #### 4.2.3 Unitary Evolution The unitary evolution primarily consists of single-qubit parameterized (parameter dependent) gates and multi-qubit gates (for qubit entanglement). There is no hard-and-fast rule to insert a combination of gates for unitary evolution and different gate combinations are heuristically used for a given problem. We use two similar ansatz structures (entangled and unentangled ansatz) for unitary evolution differing only in entanglement inclusion/removal for unitary evolution. The complete details, from data encoding to unitary evolution to measurement for the quantum layers we have used in this paper, are presented below: Entangled AnsatzThe entangled ansatz structure consists of a single-qubit parameterized unitaries (\(R_{y}(\theta)\)) and two-qubit unitaries (CNOT) for qubit entanglement. Taking into account the limitations of NISQ devices, in this paper we only consider the nearest-neighbor entanglement, where the last qubit in the underlying system is considered neighbor to the first qubit, as shown in Figure 4. Unentangled AnsatzEntanglement is an important property in quantum mechanics with an anticipated potential to enhance various applications including quantum machine learning. We now update our ansatz structure containing only the single-qubit parameterized unitaries with no entanglement as shown in Figure 5. The motivation of entanglement exclusion comes from the fact that in QNNs trainable parameters are only the single-qubit unitaries, which are optimized during the training. Figure 4: Quantum layer ansatz structure with entanglement. The light pink highlighted region is the data encoding part and either the QubitStateVector or \(R_{y}(\theta)\) rotation gates are used depending on the encoding technique used. The former is used in case of amplitude encoding and the later in case of angle encoding. The green shaded area is the actual quantum ansatz used in training. The parameterized rotation unitaries in yellow boxes are trainable quantum parameters and the vertical blue bars represents two-qubit unitaries We attempt to answer the question: does the unentangled ansatz can potentially delay the issue of BP to enhance the trainability of HQNNs because if it is also identified as a potential source of BP? We show that whether or not entanglement is significant in HQNNs and is dependent on the type of data encoding. For angle encoding, the entanglement does not aid towards better performance, in fact without any entanglement the overall performance improves. On the other hand, for amplitude encoding, the result shows the opposite case. #### 4.2.4 Qubit Measurement For qubit measurements, different measurement bases can be used; however, the most frequently used are the default computational basis, i.e., the eigenbasis of \(\sigma^{z}\). We also use the computational basis to get the output of QNN. ### Classical Post Processing The last step in HQNN workflow is classical post-processing of QNN because the result of qubit measurement is a classical value rather than a quantum state. To post-process the QNN results, we use a classical dense neurons layer (every neuron is connected to every qubit's measurement output from PQC). The cost function is then defined based on which the optimizer tends to find the solution for the given problem. The optimizer updates the trainable parameters in every training iteration and retrains the PQC on updated parameters. The process is repeated until an optimal solution is found. In this paper, the classical specifications (input, neuron layer at the end with non linear function, and classical optimization) of HQNN are fixed, and the primary focus is to experiment with enclosed QNNs (quantum layers). Until here, all the ingredients to construct the HQNN (used in this paper) are presented. The following section present the experiment details including the classical components used in HQNN. Figure 5: Quantum layer structure without entanglement. The light pink and gray highlighted regions represents the same as in Figure 4 ## 5 Experimental Setup The architectural details of HQNN used in this paper are presented in previous section (section 4). This section provides the experimentation details about how that proposed HQNN architecture is trained and evaluated. In particular, the details about the dataset used for training, details of classical hyperparameters of HQNN and list of experiments performed to obtain the results for proposed analysis is also presented. ### Data Preprocessing We consider a multi-class (image) classification problem. The sklearn digit dataset is used for training and evaluating the HQNN, which contains images of handwritten digits [78]. The reason of using this particular dataset over other similar yet popular datasets such as MNIST, is the smaller input feature size, which is more suitable in NISQ era. This dataset has a total 10 classes with 180 samples per class, resulting in a total of approximately \(\cong\) 1797 samples. Furthermore, each data point is an image of \(8\times 8\) resulting in feature dimension of 64. Moreover, 75% of the samples are used for training and the remaining 25% for testing the HQNN. The data is encoded using both amplitude and angle encoding separately for each experiment. As discussed in section 4, we reduce the input feature dimension before passing it to PQC, in case of angle encoding, where input features must be equal to that of total number of qubits. For input dimensionality reduction, we use principal component analysis (_PCA_) from sklearn library (a popular ML library for the Python programming language), which reduces large datasets to smaller ones without losing the most important information in the dataset. ### Hyperparameters Specifications Given the nature of target problem (multi-class classification), we use _categorical cross entropy_ as the cost function. For classical optimization of the cost function, we use _Adam Optimizer_[79], with an initial learning rate of 0.01. Since the dataset we used consists of a total 10 classes, therefore the last classical neuron layer in HQNN has a total of 10 neurons. Furthermore, the non-linear activation function used is _SoftMax_, which is general case in multi-class classification problems.. The models used in this paper are set to train for the maximum of 100 training iterations (epochs). However, to avoid overfitting, we schedule the learning rate as the training progresses using the _early stopping_ method from keras library (an open-source software library that provides a Python interface for artificial neural networks). The early stopping method monitors the validation loss for three consecutive iterations and if there is no improvement, the learning rate is reduced by a factor of 0.1, setting the new learning rate as: \(newlearningrate=previouslearningrate\times 0.1\). If the validation loss is not improved for four consecutive training iterations, the early stopping method forcefully stops the training to avid overfitting. The input training data for all the experiments is passed in the batches of 16. Finally, we use pennylane (a cross-platform Python library for differentiable programming of quantum computers) [6] for training the HQNNs. ### List of Experiments The list of training experiments for our proposed analysis is shown in Figure 6. Since the structure and size of the quantum layers are the primary focus of our analysis, we experiment with different widths (\(n\)) and depths (\(m\)) of quantum layers. \(n\) denotes the number of qubits, and \(m\) denotes the periodic repetition of quantum layers before the measurement. We restricted to use the maximum width of \(n=14\) and the maximum depth of \(m=10\) because of the two reason: 1) the overall accuracy is starting to decline for bigger \(n\) and \(m\) which we speculate would further decline by increasing \(n\) and \(m\) because of the so-called phenomenon of BP, and 2) Since we use classical machines to simulate qubit systems, and even for a simple dataset (considered in this paper), it takes around \(70-80\) hours to train for the maximum width (\(n=14\)) with maximum depth (\(m-10\)). Stemming from the fact that BP exists in standalone QNNs for sufficiently random and expressible PQCs [14], we first empirically analyze the existence of BP in HQNNs. We typically analyze whether or not, the addition of classical neurons layer can mitigate BP to any extent. For that purpose, we train the HQNN for different widths (\(n\)) and depths (\(m\)) of the underlying PQC, Bigger values of \(n\) and \(m\) results in greater number of trainable parameters and generally considered as more expressible ansatz and vice versa. Changing the \(n\) and \(m\) of PQC helps to analyze the existence of BP in HQNNs. The issue of BP is analyzed for both the encodings separately with both ansatz structures (entangled and unentangled). This provides a general idea of which encoding strategy works well with which ansatz. Upon observing the BP dependence on ansatz depth (expressibility) we then perform trainability vs. expressibility analysis, again for both encodings individually for each ansatz structure. This analysis provided an overall idea about how the ansatz expressibility is playing the role in the occurrence of BP and how deep an ansatz can be for a given width before the performance starts declining. Further, it can also help to identify if there is any relationship between \(n\) and \(m\) to achieve a relatively better overall performance. We then compare the performance of both the anstaz structures for both the encoding to understand the role of entanglement in HQNNs. We conclude our analysis by providing a brief application mapping of HQNNs with other important evaluation metrics for classification tasks, i.e., precision, recall and F1-score. Figure 6: List of experiments performed in this work for different \(n\) and \(m\). ## 6 Results and Discussion In this section, we present our experimental analysis. We start by demonstrating the existence of BP in HQNNs for both the ansatz structures individually with both the encodings used in this paper. Based on our analysis, we concur that BP depends on expressibility of quantum layers in HQNNs. We then perform the the trainability vs. expressibility analysis of HQNNs i.e., how the quantum layer(s) depth and width are related to BP and possibly effect the network's overall performance. The trainability vs. expressibility analysis reveals that the entanglement in underlying ansatz also plays a role in the overall performance of HQNNs. We then briefly compare both ansatz structures with both encodings used in this paper, to understand the role of entanglement. Finally, to diversify our analysis, we evaluate the HQNNs in terms of other application-oriented evaluation metrics for classification tasks. ### Demonstration of BP Existence in HQNNs In this section, we analyze the existence of BP in the training landscapes of HQNNs. We benchmark mean accuracy of HQNNs for the analysis of BP. The BP analysis is performed for both the encodings individually for both entangled and unentangled ansatz structures. The existence of BP hinders the learning process of HQNNs eventually resulting in lower accuracy. Consequently, for our analysis the higher accuracy would essentially entail that the model is not yet fully exposed to BP and vice versa. #### 6.1.1 BP Demonstration for Entangled Ansatz with Amplitude Encoding The entangled ansatz contains single qubit unitaries and nearest neighbor qubit entanglement, as shown in Figure 4. The original input feature size to be encoded into quantum states is 64, as discussed in section 5. Hence, with amplitude encoding, the minimum number of qubits required to encode these features is \(n=6\), which is a reasonable width for quantum layer(s) considering NISQ era. Therefore, while using amplitude encoding we do not apply PCA for feature reduction and all the input features are directly encoded. We then vary \(n\) and \(m\) according to Figure 6, to analyze the occurrence of BP. The mean accuracy for all the experiments are shown in Figure 7. To analyze the effect of width (\(n\)) irrespective of depth (\(m\)), the mean accuracy of different training experiments is individually plotted for fixed \(m\) and variable \(n\) in Figure 6(a). The _general_ accuracy trend in Figure 6(a) is declining. This decline in performance, as the number of qubits (\(n\)) increase, indicates the possible presence of BP in training landscapes. On the other hand, to observe the effect of depth (\(m\)) irrespective of the width (\(n\)), the mean accuracy is individually plotted for fixed \(n\) and variable \(m\) as shown in Figure 6(b). we observe that there is a _trade-off_ between quantum layer's depth and width to achieve a relatively better performance. The term _trade-off_ here essentially means that for smaller \(n\) the _allowable_\(m\) is relatively greater and vice versa. One possible reason behind the trade-off is the phenomenon of overparameterization, which in HQNNs (used in this paper), is a result of an increase in \(n\) and \(m\). Overparameterization is vital in classical deep learning because of the complexity of modern day applications, and is often helpful in learning most intricate relationships in the input data. However, HQNNs are not much in favor of overparameterizing the network, and an optimal number of parameters (optimal \(n\) and \(m\)) must be determined to achieve a relatively better performance. #### 6.1.2 BP Demonstration for Entangled Ansatz with Angle Encoding In angle encoding, the number of qubits or quantum layer's depth (\(n\)) must be equal to that of input feature dimension for successful classical-to-quantum feature mapping, as discussed in section 2.1. Since the input feature size of our data is 64, we need at least 64 qubits to directly encode the data into the quantum states. Such a big number of qubits is not suitable for NISQ era. Therefore, we apply PCA to reduce the input feature dimension to experiment with angle encoding approach. The reduction conforms with the width of quantum layers (\(n\)) from Figure 6. However, considering both the NISQ limitations and not losing much information while dimensionality reduction, the minimum width in angle encoding is \(n=8\). Analogous to amplitude encoding, various experiments are performed for different \(n\) and \(m\). The mean accuracy for all the experiments is shown in Figure 8. Figure 7: Entangled Ansatz: Mean accuracy for different \(m\) and \(n\) with amplitude embedding. In classical ML, it is always conjectured that when we have more input features to train the model, the overall performance is improved, given a fairly complex underlying model. In angle encoding, the size of input features is directly proportional to the size of \(n\). It typically means that every increase in \(n\) results in greater input feature dimensions. Hence, following traditional ML, the wider quantum layers should yield a better performance, since they are fed with relatively enhanced feature dimensionality compared to quantum layers of shallow width. However, this is not the case in HQNNs because an increase in \(n\) reduces the accuracy after a certain \(m\), as shown in Figure 8. To analyze the effect of quantum layer(s) width, the mean accuracy is plotted as a function of \(n\), as shown in Figure (a)a. It can be observed that as \(n\) increases the accuracy tends to reduce, particularly for bigger depths (\(m=8\) and \(10\)). For relatively smaller depths (\(m=4,6\)), the accuracy improvement is negligible as \(n\) increases exhibiting the presence of BP. To analyze the effect of quantum layer(s) depth, the mean accuracy is plotted as a function of \(m\), as shown in Figure (b)b. It can be observed that in general, for a certain \(n\) the accuracy tends to improve up to certain \(m\) and then starts declining. Hence, in case of angle encoding also, there is a _trade-off_ between \(n\) and \(m\) to achieve a better overall performance. #### 6.1.3 BP Demonstration for Unentangled Ansatz with Amplitude Encoding For amplitude encoding, the input feature size is unchanged i.e., 64. We start with minimum required width of quantum layers (\(n=6\)) and perform all the experiments from Figure 6. For the sake experimental simplicity, we skip experimenting for \(n=14\) because we already observed the BP going from \(n=6\) to \(n=12\). The mean accuracy for all the training experiments are shown in Figure 9. Figure 8: Entangled Ansatz: Mean accuracy for different \(m\) and \(n\) with angle encoding. The mean accuracy of all the training experiments is plotted as a function of \(n\) in (a) to analyze how the increase in number of qubits effect the performance of HQNN when no entanglement is included in quantum layers. It can be observed that analogous to the case of entangled ansatz, the accuracy starts declining as \(n\) increases irrespective of \(m\). The accuracy decline is a clear indication towards the occurrence of BP. Furthermore, in unentangled ansatz the accuracy decline is quite evident with increases in \(n\) leading to a conclusion that no entanglement in quantum layers makes HQNNs more susceptible to BP. Similarly, the mean accuracy of all the training experiments is also plotted as a function of \(m\) (Figure (b)b) which helps in analyzing the role of \(m\) in HQNNs for a fixed \(n\). We observe that unlike the case of entangled ansatz with amplitude encoding, where the allowed circuit depth \(m\) varies for different \(n\), in case of unentangled ansatz, increase in \(m\) results in further reduction (or a negligible improvement) in accuracy for all \(n\). The performance decline with an increase in \(m\) is more evident for bigger \(n\), leading to a conclusion that in case of unentangled ansatz smaller \(n\) with smaller \(m\) is more appropriate while constructing quantum layers. Bigger \(n\) and \(m\) makes unentangled quantum layers more prone to BP. #### 6.1.4 BP Demonstration for Unentangled Ansatz with Angle Encoding Analogous to the BP analysis in case of entangled ansatz with angle encoding, the input feature dimension is reduced to conform with the reasonable width of quantum layer(s). We start with the minimum of width of \(n=8\). The mean accuracy for all the experiments is shown in Figure in 10. Figure 9: Unentangled Ansatz: Mean accuracy fordifferent \(n\) and \(m\) with amplitude encoding. Analyzing the effect of \(n\) from BP aspect, the accuracy improves in general, as shown in Figure (a)a. This is opposite to the results of entangled ansatz (Figure (a)a), where the accuracy starts decreasing after certain \(m\), despite an increase in number of input features (with an increase in \(n\)). This leads to a conclusion that unentangled ansatz is less susceptible to BP than entangled ansatz when encoding the data in qubit rotation angles. The results in figure (b)b helps analyzing the effect of \(m\) on HQNN performance when the data is encoded via angle encoding. It can be observed that the accuracy deteriorates in general with an increase in \(m\). The smallest depth (\(m=2\)) performs better than all other \(m\), for all \(n\) except \(n=12\). however, even in that case (\(n=12\)) the accuracy improvement from \(m=2\) to \(m=10\) is not quite significant. ### Trainability Vs Expressibility of HQNNs A more careful analysis of results in previous section shows that the decline in performance does not have a consistent relationship (quadratic, exponential etc.) but is dependent on the quantum layer(s) depth \(m\). Consequently, a more insightful analysis of how deep quantum layer(s) can be, for a given width before experiencing the BP and eventually leading to the performance decline, would be important. In this section, we conduct an empirical analysis on trainability vs. expressibility of HQNNs, again for both ansatz structures individually with both encodings. We benchmark model's overall accuracy and loss convergence as evaluation metrics for trainability vs. expressibility analysis. #### 6.2.1 Trainability Vs. Expressibility of Entangled Ansatz with Amplitude Encoding The trainability vs. expressibility analysis of HQNNs gives an insight of how expressible a given ansatz can be (before the gradients start vanishing) for a particular width with better or at least equivalent performance. We performed different training experiments for different \(n\) and \(m\), as shown in Figure 6. The model is evaluated based on the overall accuracy and convergence. The results are shown in Figure 11. Figure 10: Unentangled Ansatz: Mean accuracy for angle encoding. In Figure 11, for \(n=6\) the deeper quantum layers i.e., \(m=8\) and \(m=10\) has better results in terms of accuracy, however, convergence is better in case of \(10\) layers as shown in Figure 12a. When we increase \(n\) to \(8\) and \(10\) qubits, the circuit depth of \(m=6,8,10\) have relatively better accuracy. However, in both cases (\(n=8\) and \(n=10\)) the model converges relatively faster when \(m=6\) as shown in Figure 12b and 12c, respectively, reducing the appropriate depth of quantum layers when more qubits are used. When \(n\) is increased to \(12\) qubits, the appropriate circuit depth is further reduced to \(m=4\), both in terms of accuracy and convergence, as shown in Figures 11 and 12d, respectively. However, the corresponding reduction in quantum layers depth becomes more evident when we further increase \(n\) to \(14\) qubits, as the model clearly converges faster for \(m=4\) as shown in Figure 12e. Considering the results shown in 11 and 12 we concur that the BP phenomenon in HQNNs is not only the function of qubits (gradients vanish exponentially with number of qubits and the network becomes untrainable as we increase the qubits) but it is also dependent on how expressible the quantum layer(s) are. If we analyze the accuracy in Figure 11 as a whole, we observe that as we increase \(n\) the performance starts deteriorating. Although, we observe a trade-off between \(n\) and \(m\) (bigger \(n\) tends to reduce \(m\) and vice versa), a careful analysis of individual accuracy reveals that smaller \(n\) and relatively bigger \(m\) are more appropriate to achieve better performance because the achieved accuracy is higher and are skewed on the higher side. For instance, for \(n=6\) and \(m=8\) almost \(75\%\) of the accuracy are higher than the highest achieved accuracy for \(n=14\) and \(m=4\). Furthermore, the accuracy for bigger \(n\) have more variance (more spread in accuracy) than smaller \(n\), showing non-robust learning, in case of wider quantum layers. #### 6.2.2 Trainability Vs. Expressibility of Entangled Ansatz with Angle Encoding We now present the analysis of how a different data encoding strategy i.e., angle encoding, where the input data features are encoded in qubit rotation angles can affect the trainability in QNNs for certain \(n\) and \(m\). We encode the data in \(RY\) rotations which is then passed to the quantum layer(s). For the case of angle encoding, as explained in Section 2.1 the number of features being encoded should be equal to that of the number of qubits. Our input features are images of size \(8\times 8\) and hence \(64\) features in total. We need \(64\) qubits to encode \(64\) features which is not suitable in NISQ Figure 11: Entangled Ansatz: Accuracy trends for all \(n\) and \(m\) for amplitude encoding. era. Therefore, we apply PCA and reduce the input feature dimension to \(8,10,12\) and \(14\) features. The quantum layer(s) width (\(n\)) is equal to that of dimension of input features, and we experiment with different quantum layers having variable depths (\(m=2,4,6,8,10\)), similar to that of analysis with amplitude encoding and shown in Figure 6, except for \(n=6\), because We tend to keep \(n\) bigger while applying PCA so that, not much information is lost. The accuracy and loss for all the experiments are shown in Figure 13 and 14 respectively. Based on the results obtained, we observe that for \(n=8\), circuit of depth of \(m=8\) is better in terms of both accuracy ( Figure 13 and convergence (Figure 14a) whereas the smallest depth quantum layer (\(m=2\)) is worst among all different values of \(m\).. When the circuit width is increased to \(n=10\) is, the circuit depth with better performance Figure 12: Entangled Ansatz: Loss trends for all \(n\) and \(m\) for amplitude encoding. Figure 13: Entangled Ansatz: Accuracy trends for all \(n\) and \(m\) for angle encoding. both in terms of better accuracy and convergence is \(m=6\) reduced from \(m=8\). Even though the input feature dimension is also increased (from 8 to 10), the circuit depth tends to reduce mainly because the gradients magnitude gets smaller. For \(n=12\), almost all the layers except \(m=2\) have comparable performance in terms of accuracy, but \(m=6\) has slightly better convergence rate than others. However, the individual accuracy dropped from \(n=8\) and \(n=10\). Similarly, for \(n=14\), circuit depth of \(m=14\) is betters in terms of both accuracy and convergence rate. Unlike amplitude encoding, the allowable circuit depth is more for bigger \(n\), when the data is encoded in qubit rotation angles, and this is because of the greater number of input features to train as compared to smaller \(n\). #### 6.2.3 Trainability Vs. Expressibility of Unentangled Ansatz with Amplitude Encoding We now present the trainability vs. expressibility analysis for unentangled ansatz, while encoding the input data in qubit state vector. As discussed previously, for our dataset, the input dimensionality reduction for amplitude encoding is not required, and the PCA is therefore not applied. The experiments are the same as for entangled ansatz, i.e. changing the width and depth of quantum layers as shown in Figure 6. Based on the obtained results, we observe a consistent decline in performance as \(n\) increases, we therefore skip the experimentation for \(n=14\) and all corresponding \(m\). The accuracy and loss trends for unentangled ansatz are presented in Figure 15 and Figure 14: Entangled Ansatz: Loss trends for all \(n\) and \(m\) for angle encoding. 16, respectively. We observe that, unlike the case of amplitude encoding in entangled ansatz, where for smaller width \(n\) the allowed circuit depth \(m\) is relatively bigger, in unentangled ansatz, i.e. when the entanglement is removed, the allowed circuit depth tends to become lower even for smaller \(n\). For \(n=6\), almost all corresponding depths \(m\), yield somewhat same performance both in terms of accuracy and loss convergence as shown in Figure 15 and 16a respectively. However, the smaller depth (\(m=2\)) is slightly better than other \(m\), in contrast with entangled ansatz, where \(m=8\) and \(10\) turned out to be better for \(n=6\) in terms of both the performance metrics (Figure 11 and 12a). Similarly, for \(n=8\) the relatively smaller depth quantum layers typically \(m=4\) and \(6\) are slightly better than other depths, with respect to both accuracy and convergence as shown in Figure 15 and 16b respectively, which is again in contrast to entangled ansatz, where \(n=8\) have better performance with maximum depth (\(m=10\)) and worst for smallest depth (\(m=2\)) as presented in Figures 11 and 12b. When the \(n\) is further increased to \(10\) and \(12\) the overall performance deteriorates (Figure 15), however, analogous to the results in entangled ansatz for bigger \(n\), relatively smaller depths, typically \(m=2\) and \(4\) are relatively better than greater depth layers. Furthermore, as \(n\) increases, the individual performance for all \(m\) in both ansatz structures becomes more inconsistent, as shown in Figure 15 and 11, however this inconsistency is more prominent in case of unentanglement ansatz in HQNNs for amplitude encoding. Also, in case of no entanglement in quantum layers, for bigger \(n\) almost all circuit depths have a fluctuating journey to reach to a minimum value of cost function landscape, clearly exhibiting the existence of BP (unable to determine the cost minimizing direction). It is also worth mentioning here that for all \(n\) and \(m\) in unentangled ansatz, the HQNNs stuck in the local minima and fails to converge even after \(100\) training iterations, and is insensitive to the regularization techniques (we used early stopping in this work to avoid overfitting), whereas in case of entangled ansatz, the model is more robust and and have relatively better and smoother journey to converge to minimum in a cost function landscape, exhibiting relatively lesser insensitivity to regularization. Figure 15: Unentangled Ansatz: Accuracy trends for all \(n\) and \(m\) for amplitude encoding. #### 6.2.4 Trainability Vs. Expressibility of Unentangled Ansatz with Angle Encoding We now present the trainability vs. expressibility analysis for unentangled ansatz when the data is being encoded in qubit rotation angles. Since the angle encoding requires the size of input data equal to that of number of qubits, PCA is applied to reduce input feature dimension. The experiments are performed from Figure 6, starting with \(n=8\). The corresponding accuracy and loss convergence results for all the experiments are shown in Figure 17 and 18 respectively. Figure 16: Unentangled Ansatz: Loss trends for all \(n\) and \(m\) for amplitude encoding. We observe that the removal of entanglement in case of angle encoding affects the trainbility vs expressibility of HQNNs. The loss convergence in all \(n\) and \(m\) (Figure 18), is not quite distinguishable and therefore we observe accuracy (Figure 17) to analyze the effect of \(n\) and \(m\) on trainability. Unlike angle encoding for entangled ansatz, where the expressibility tends to slightly reduce for bigger \(n\) even with an increase in input features, for unentangled ansatz, the allowed circuit depth is greater, eventually leading to have more trainable parameters in hidden quantum layers, resulting in a better overall performance for relatively bigger \(n\) and \(m\). Moreover, for entangled ansatz smaller width quantum layers (\(n=8\)), have better accuracy with relatively wider quantum layers (\(m=8\)), as shown in Figure 13, whereas for unentangled ansatz, the HQNN has a comparable performance for almost all the circuit depths 17. For wider quantum layers (\(n=14\)) in unentangled ansatz, again all the depths have comparable performance. However, a smaller depths (\(m=2\)) deem more feasible, because not only it has a slightly better performance, it would also require less training time compared to deeper layers. In conventional ML, it is always conjectured that, for a reasonably complex model, increasing the input data yields better performance. In HQNNs, when the data is being encoded in rotation angles, we are bound to increase the input feature dimension, if the quantum layers width is required to be increased. From the experimental results presented in this paper, we concur that for entangled ansatz, the HQNNs does not follow the speculated belief (in conventional ML) of better performance with more input data (Figure 13), because of the so-called phenomenon of BP due to an increase in hidden quantum layers width. However, for unentangled ansatz, we observe that this very presumption (_more data = better accuracy_), works and increasing the input feature dimension does infact lead to a better overall performance. Based on these observation, we can safely state that the removal of entanglement in case of angle encoding can potentially avoid (or delay) the BP. Furthermore, unlike the case of amplitude encoding for unentangled ansatz, where an increase in \(n\) and almost all corresponding \(m\), the model struggles to find the cost minimizing direction (Figure 16), in case of angle encoding the journey to converge to minimum in cost function landscape is more smoother as shown in Figure 18. Lastly, although the learning process in unentangled ansatz for angle encoding is more reliable than amplitude encoding, it also exhibits some insensitivity to regularization technique specifically for bigger \(n\) which can be seen in Figure 18c. However, the insensitivity to regularization is more prominent in case of amplitude encoding (Figure 16). Figure 17: Unentangled Ansatz: Accuracy trends for all \(n\) and \(m\) for angle encoding. Based on the trainability vs. expressibility analysis of both entangled and unentangled ansatzes, we concur that entanglement does affect the HQNNs training and eventually their performance, in correspondence with the underlying encoding strategy. This calls for a rather straight-forward comparison of ansatzes (with and without entanglement) to easily understand the role of entanglement in HQNNs. The comparison is presented in the following section. ### Effect of Entanglement in HQNNs In previous sections, the analysis of BP existence in HQNNs along with their trainability vs. expressibility is presented. However, that analysis does not explicitly highlight the importance of entanglement inclusion/removal in the underlying quantum layers. Entanglement is an important fundamental property of quantum mechanics and is a key to construct expressible PQCs in HQNNs. Consequently, it is vital to understand the role of entanglement in HQNNs for real-world applications. In this section, we compare the results obtained for both ansatzes and analyze if entanglement plays any role in overall performance of HQNNs. We observe that the entanglement does affect the HQNN's performance. However, whether its affect result in performance enhancement or degradation, is dependent on how the data is being encoded. Therefore, for understanding the role of entanglement in HQNNs, we compare both ansatz structures first with amplitude encoding and then with angle encoding. Figure 18: Unentangled Ansatz: Loss trends for all \(n\) and \(m\) for angle encoding. #### 6.3.1 Effect of Entanglement in HQNNs - Amplitude Encoding When the classical data is being encoded in qubit state vector and then trained using PQC, the problem of vanishing gradients for unentangled ansatz quantum layers is quite prominent, as discussed in section 6.1.3, resulting in significant reduction in HQNN's performance. A brief comparison of the performance for both ansatzes is shown in Figure 19. We observe that for all \(n\) and corresponding \(m\), the performance of underlying HQNN with quantum layers for entangled ansatz is significantly better than unentangled ansatz. It can also be observed that the performance degrades as the number of qubits are increased. Based on the results shown and discussed, we concur that the inclusion of entanglement in quantum layers while constructing the HQNNs is not-at-all beneficial, when the data is encoded in qubit amplitudes. Consequently, it is recommended to use single-qubit parameterized unitaries only, when using amplitude embedding approach, for better performance and reduced training time. #### 6.3.2 Effect of Entanglement in HQNNs - Angle Encoding Entanglement is into the play also, when the classical data is encoded into qubits rotation angles, which is then trained using parameterized quantum layers. However, unlike amplitude encoding, unentangled ansatz quantum layers results in performance enhancement in case of angle encoding. As shown before in Figure 10 and 10, without any entanglement in quantum layers of underlying HQNNs, the allowed circuit depth Figure 19: Performance comparison of both ansatz structures for amplitude encoding. is greater than that of quantum layers with entanglement 8a and 8b. The performance comparison of both ansatzes when the data is encoded qubit rotation angles, is shown in Figure 20. Unlike amplitude encoding, here we observe that in general the model performs better when there is no entanglement included in quantum layers for all \(n\) and \(m\). The performance enhancement becomes more prominent when \(n\) increases. Based on the results, we concur that, removal of entanglement (unentangled ansatz) reasonably enhances the model's performance when the data is encoded using angle encoding. Moreover, not only the allowed quantum layer(s) depth is greater than that of entangled ansatz quantum layers, but the training time is also reduced. ### Application-Oriented Evaluation of HQNNs for Classification Tasks In previous sections, we discuss the results of our proposed framework based on accuracy and loss convergence. However, for classification tasks (target application in this paper) in ML, accuracy alone is not a sufficient metric to gauge model performance because it only shows the percentage of correct predictions out of the total predictions. Therefore a more diverse set of evaluation metrics are used for classification problems. Although we consider a multi-class classification problem in this paper, explaining other evaluation metrics (precision, recall and F1-score) for multi-class classification is rather tricky. Therefore, for simplicity, we will demonstrate the need of more diverse evaluation metrics for a binary classification, which are directly applicable to multi Figure 20: Performance comparison of both ansatz structures for angle encoding. class classification also. In binary classification, there are two classes (positive and negative) for which the ML the model aims to predict the correct class. Conventionally, the accuracy in this case would be the sum of correctly predicted classes, regardless of what (correct) class was predicted. When a positive sample is classified as negative, it is called False Negative (FN), and a negative sample predicted as positive is knows as False Positive (FP). When the positive and negative samples are correctly predicted to their respective classes, this is called True Positive (TP) and True Negative (TN), respectively. Classifying the performance based on these classes allows us to calculate other important metrics, namely: precision and recall. Precision tells us what proportion of predictions are truly positive whereas recall tells that what proportion of actual positives are correctly classified. Another important evaluation metric is F1-score, which is just the harmonic mean of precision and recall. Mathematically, all these metrics can be calculated using the following equations. \[\text{Recall} =\frac{TP}{TP+FN} \tag{14}\] \[\text{Precision} =\frac{TP}{TP+FP}\] (15) \[\text{F1-Score} =\frac{2\times\text{Recall}\times\text{Precision}}{\text{Recall} +\text{Precision}} \tag{16}\] A high precision and recall scores are always desirable but in practice, classifiers are prone to errors and can result in different precision and recall scores. Therefore, a trade-of between these two scores may need to be made, and is highly application dependent. For instance, for a video recommendation system, a high precision would be more desirable to make sure that all potential videos are being recommended to the user. Similarly, a classifier to detect cancer in patients would need a high recall so that as many cancer patients as possible are correctly diagnosed. The F1-score typically is more useful for performance comparison between the classifiers. For instance, in case of two classifiers, where one classifier has a better precision while the other has a better recall, then the F1-score is an appropriate metric to pick the best classifier. We now briefly evaluate the HQNNs with respect to target (classification) application(s) for both the ansatz structures and data encodings used in this paper. Based on the results discussed in section 6.2 and 6.3, we concur that entangled ansatz structures performs better with amplitude encoding, whereas unentangled ansatz yields better performance with angle encoding. Therefore, we present the application-oriented evaluation of HQNNs only for the best performing ansatz structures with corresponding encodings (entangled ansatz with amplitude encoding and unentangled ansatz for angle encoding). The classifiers with both high precision and high recall (close to overall accuracy) are generally considered to be good classifiers. We observe that in HQNNs, the precision and recall scores follow the same trends as accuracy for all the respective experiments performed in this paper, making them more reliable for a wide range of applications. #### 6.4.1 Application-Oriented Evaluation - Amplitude Encoding We first present the application-oriented analysis for amplitude encoding and entangled ansatz. The precision, recall and F1-score are plotted as a function of \(m\) for all \(n\), as shown in Figure 21. Based on the results shown in Figure 21, we observe that for all \(n\) and \(m\), the precision, recall and corresponding F1-score increase and decrease with almost the same ratio and can be equally applicable for both high precision and high recall applications. The general trends of precision, recall and F1-scores on entangled ansatz with amplitude encoding follows almost the same trend as that of the accuracy (Figure 11). This means that for smaller \(n\), we require relatively bigger \(m\) (and vice versa), to achieve a better recall and precision scores for the corresponding applications. #### 6.4.2 Application-Oriented Evaluation - Angle Encoding Since the unentangled ansatz performs better with angle encoding, we only consider these results for application oriented analysis. The results of precision, recall and F1-score with fixed \(n\) and variable \(m\) are presented in Figure 22. All the performance metrics increase and decrease with same ratio, analogous to the case of amplitude encoding, and hence are equally suitable for either applications (High precision or recall). However, when encoding the data in qubit rotation angles yields slightly better recall than precision for all the training experiments, and is therefore more appropriate for high recall applications. Figure 21: Precision, Recall and F1-score for fixed \(n\) and variable \(m\) for amplitude encoding with entangled ansatz ## 7 Significance of Our Framework The previous sections of this paper have presented and discussed a framework that focuses on the analysis of three major components of HQNNs. These are data encoding, ansatz expressibility, and entanglement inclusion/removal. In other words, there can be various possible scenarios coming from constraints on these three components of HQNN. Therefore, the constraints on these components of HQNNs design, both individually and with respect to each other, are required to be considered. In this section, we highlight the significance of our proposed framework for various constraint scenarios by recommending specifications for different parameters of HQNNs design based on our results in section 6.2. Summaries of the recommendations for each constraint scenario can be found in Appendix A. ### No Constraint In this scenario, we assume the designer has no constraint on any parameter and has the liberty to choose any type of data encoding (amplitude or angle), an arbitrary ansatz width and depth (expressibility). Furthermore, the inclusion or removal of entanglement is also not a constraint. In such a scenario, based on the results of our proposed framework, it is recommended to use angle encoding with no entanglement between qubits in the underlying quantum layers. Furthermore, based on both overall accuracy and loss convergence, the ansatz width of \(n=10\) and depth of \(m=4\) is recommended, as shown in Figure 17 and 18. This is the best obtained result of our framework. ### Constraint on a Single Parameter In this scenario, we consider the possibility when there is a constraint on a single design parameter from data encoding, ansatz expressibility and entanglement in quantum layers. #### 7.2.1 Constraint on Data Encoding. Here, we assume that there is a constraint on the data encoding scheme to be used. If it is required to encode the data in qubit amplitudes, then the entanglement is not Figure 22: Precision, Recall and F1-score for fixed \(n\) and variable \(m\) for angle encoding with unentangled ansatz recommended to be included in the underlying quantum layers. Moreover, the optimal width and depth of quantum layers is \(n=6\) and \(m=10\), as shown in Figure 11 and 12 respectively. On the other hand, if it is constrained to use angle encoding then the same set of HQNNs can be selected as discussed in section 7.1. #### 7.2.2 Constraint on Ansatz Expressibility. Ansatz expressibility has two factors: ansatz width and ansatz depth. If the constraint is on ansatz width, then for smaller widths (\(n=6,8\)), amplitude encoding with moderate depth (\(m=6\)) is more appropriate to use to improve the accuracy and convergence, as shown in Figure 11 and 12. In addition, entanglement is recommended to be included. On the other hand, for bigger widths (\(n=10,12,14\)), angle encoding is a better choice. For the selection of other parameters in this scenario, the discussion in section 7.1 can be followed. If the constraint is on the depth of quantum layers, in the case of smaller depths (\(m=2,4\)), angle encoding with unentangled ansatz works better. Moreover, the allowable width of quantum layers is greater, i.e., \(n=10,12,14\). Although, it allows for wider quantum layers, using \(n=10\) would be more appropriate due to the shorter training time. Similarly, for moderate depth to deeper quantum layers (\(m=6,8,10\)), both amplitude (with entanglement) and angle encoding (without entanglement) have comparable performance in terms of accuracy. However, with amplitude encoding, the allowable width of the quantum layers is relatively smaller (\(n=6\)) and angle encoding allows to have a wider quantum layer (\(n=10,12,14\)) and is hence less susceptible to BP, unlike amplitude encoding. #### 7.2.3 Constraint on Entanglement. For the case when there is a constraint requiring entangling the qubits, amplitude encoding with ansatz width and depth of \(n=8\) and \(m=6\) can be used. On the other hand, when there is a constraint requiring that qubits should not be entangled, then angle encoding performs better, and the same specification as discussed in section 7.1 can be picked. ### Constraint on Two Parameters In this scenario, we consider the possibility when there is a constraint on any two of the parameters from data encoding, ansatz expressibility and entanglement inclusion/removal. #### 7.3.1 Constraint on Data Encoding and ansatz expressibility. The ansatz expressibility has two factors: width and depth of quanutum layers. Therefore, we separately consider constraints on both these expressibility factors along with constraints on data encoding. We first consider constraints on the data encoding and ansatz width. In the case of both amplitude and angle encoding, for smaller widths (\(n=8,10\)), the appropriate depth is greater (\(m=8,10\)), whereas for bigger widths (\(n=12,14\)) the appropriate depth reduces to \(m=2,4\). Similarly, in the case of constraint on ansatz depth and data encoding, again there is a trade-off between ansatz depth and width. Furthermore, in both cases (constraint on ansatz width or depth), it is recommended to entangle the qubits when encoding in qubit amplitudes, whereas for angle encoding, entangling qubits does not improve the performance. #### 7.3.2 Constraint on Data Encoding and Entanglement. We now consider a setting when there is a constraint on data encoding and entanglement inclusion/removal. If there is a constraint requiring the data to be encoded in qubit amplitudes, then irrespective of whether the qubits are entangled or not, the allowable width of quantum layers is relatively lower, i.e., \((n=6,8)\). The bigger widths lead to performance decline, indicating the presence of BP. However, in the case the qubits are entangled in underlying quantum layers, the allowable depth is bigger \((m=8,10)\), whereas in case of no qubit entanglement, the allowable depth is reduced \((m=2,4)\). Similarly, When it is constrained to encode the data in qubit rotation angles, then irrespective of entanglement, the appropriate depths are bigger than amplitude encoding, \((n=10,12,14)\), making it less prone to BP and eventually lead to better performance. However, in the case of entanglement inclusion, the allowable depth of quantum layers is greater \((m=8,10)\), whereas in the case of no entanglement, the allowable depth is reduced \((m=2,4)\). #### 7.3.3 Constraint on expressibility and entanglement. We first consider the setting when there is a constraint on the width factor of expressibility while depth can be arbitrary. Moreover, the entanglement inclusion/removal is also constrained. When the qubits in underlying quantum layers are constrained to be entangled and the corresponding width of quantum layers is required to be smaller \((n=8,10)\), then amplitude encoding is better, with deeper quantum layers \((m=10)\). However, if the required width of quantum layers is greater \((n=12,14)\) along with entanglement inclusion, then angle encoding is relatively better and the appropriate depth of underlying quantum layers is smaller \((m=2)\). On the other hand, when the entanglement is constrained to be removed from quantum layers, then irrespective of width constraints, angle encoding is better than amplitude encoding and the appropriate depth is typically smaller \((m=2,4)\), for all corresponding widths used in this paper. We now consider the setting when there is a constraint on depth factor of expressibility along with entanglement inclusion/removal. When the constraint is to have smaller depth \((m=2\) or \(4)\), and entanglement is constrained to be included also, then both the encodings (amplitude or angle) achieve similar performance and allows to have more wider quantum layers \((n=10)\). However, when the constraint is to have more deeper quantum layers \((m=6,8\) or \(10)\), along with entanglement inclusion, then amplitude encoding performs better than angle encoding. Moreover, the appropriate width of quantum layers is also greater \((n=8)\). When the entanglement is required to be removed, then irrespective of the depth, angle encoding is better than amplitude encoding and the appropriate width of underlying quantum layers is typically higher \((n=12\) or \(14)\). ### Constraint on All Three Parameters In this scenario, we consider the setting where there is a constraint on all the parameters (data encoding, ansatz expressibility and entanglement) of HQNN. As discussed earlier, the ansatz expressibility has two factors, i.e., width and depth of quantum lay ers. Therefore, we separately consider constraints on both these expressibility factors along with constraints on other data encoding and entanglement. #### 7.4.1 Constraint on data encoding, ansatz width and entanglement. When there is a constraint on ansatz width along with data encoding and entanglement inclusion/removal. When it is constrained to encode the data in qubit amplitudes, entanglement is constrained to be included, and width is required to be smaller (typically \(n=6,8,10\)), then to achieve a relatively better performance, the depth of underlying quantum layers should be greater (typically \(m=10\)). However, if the width is constrained to be bigger (\(n=12,14\)), then the appropriate depth reduces to \(m=4\) to achieve better performance. Similarly, when the entanglement is required to be removed, then irrespective of the constrained width, the appropriate depth is typically smaller (\(m=2\)). On the other hand, when it is constrained to encode the data in qubit rotation angles, entanglement is required to be included and ansatz width is required to be smaller (typically \(n=8,10\)), then the appropriate ansatz depth is relatively greater (\(m=8\)). However, when the ansatz width is constrained to be bigger (typically \(n=12,14\)), then analogous to the case of amplitude encoding, the appropriate ansatz depth is reduced (typically \(m=4\)). Similarly, when the entanglement is required to removed and the data is required encoded in qubit angles, then irrespective of the constrained width, the smaller ansatz depth (\(m=2\)) is more appropriate to use. #### 7.4.2 Constraint on data encoding, ansatz depth and entanglement. We now consider the setting when the ansatz depth is constrained along with data encoding and entanglement inclusion/removal. In such a setting the ansatz width can be arbitrary. When the data is constrained to be included in amplitude encoding, entanglement is also required to be included and the ansatz depth is constrained to be smaller (typically \(m=2,4\)) then the appropriate ansatz width is slightly bigger (\(n=10,12\)). However, when the ansatz depth is constrained to be moderate, i.e., \(m=6\) the appropriate ansatz width reduces to \(n=8\), which further reduces to \(n=6\) for more deeper ansatz (\(m=8,10\)). Similarly, when it is constrained to remove entanglement, then irrespective of the depth constraint, the moderate ansatz width (\(n=6\)) is more appropriate to achieve relatively better performance. On the other hand, when the constraint is to encode the data in qubit angles, entanglement is required to be included and the ansatz depth is constrained to be smaller moderate (typically \(m=2,4,6\)), the appropriate width is relatively bigger (\(n=12,14\)). However, if the ansatz depth is constrained to be bigger (\(m=8,10\)), the appropriate width reduces to \(n=8\). Similarly, when the entanglement is constrained to be removed and data is constrained to be encoded in qubit angles, then irrespective of the depth constraint, the appropriate ansatz width is bigger, i.e, \(n=12,14\). ## 8 Conclusion Quantum machine learning (QML) has recently emerged as one of the potential applications of quantum computing, attempting to improve the classical machine learning by harnessing quantum mechanical phenomena. In QML, quantum neural networks (QNNs) are widely being explored because of the unparalleled success of their classical counterparts, namely neural networks (NNs). However, the practical applicability of QNNs is challenged by the phenomenon of barren plateaus (BP), where the gradients of parameters become exponentially small as the system size increases potentially making QNNs untrainable. To this end, the primary components of QNNs, i.e., data encoding, ansatz expressibility, and entanglement between qubits have been identified as the potential sources of BP. All these components have been studied individually from the aspect of BP, however, these components exist simultaneously in a practical setting. Therefore, investigating their joint effect, with respect to each other is of significant importance for practical applications. In this paper, we propose a framework to empirically investigate the holistic effect of all the aforementioned components of QNNs for a practical application namely; multi-class classification. In a practical setting, because of the limitations of noisy intermediate-scale quantum devices, hybrid quantum neural networks (HQNNs) are widely being used to explore the potential quantum advantage in QNNs. Since the HQNNs completely replicate the general QNN architecture (with some classical input pre- and post-processing), the analysis of quantum parts of HQNN can be directly applicable to QNNs. The HQNNs we have used for our analysis, consist of the following sequence of operations; 1) input dimensionality reduction, 2) qubit initialization, 3) data encoding (classical to quantum feature mapping), 4) quantum ansatz (parameterized quantum circuit), 5) qubit measurements and 6) dense classical neuron layer to post-process the qubit measurement results and get the output. Our analysis focuses on the data encoding and quantum layers (their expressibility and entanglement inclusion removal), which are the main components of QNNs. For data encoding, we use two frequently used data encoding techniques, namely: amplitude and angle encoding. For ansatz expressibility, we change the width (\(n\)) and depth (\(m\)) of quantum layers. We train our HQNNs with underlying ansatz for \(n\) and \(m\). We consider two similar ansatz structures, entangled ansatz (which contains single-qubit parameterized unitaries and nearest neighbor entanglement), and unentangled ansatz (which contains single-qubit parameterized unitaries only). We first benchmark the mean accuracy of the training experiments and demonstrate the existence of BP in HQNNs. We observe that the BP in HQNNs does not follow a direct relation with the number of qubits but is dependent on the overall expressibility of quantum layers. We then benchmark the overall accuracy and loss convergence of HQNNs and perform a comprehensive trainability vs. expressibility analysis. This analysis shows how the ansatz expressibility plays a role in the overall performance of HQNNs from the aspect of BP and how deep an ansatz can be for a given width before experiencing the BP, for each encoding. Furthermore, we observed that entanglement plays a role in the training landscapes of HQNNs and is dependent on the encoding type. When the data is encoded in qubit state vector, the entangled ansatz achieves better accuracy than the unentangled ansatz, demonstrating a positive impact of entanglement on the trainability of HQNNs in the case of amplitude encoding. On the contrary, when the data is encoded into qubit rotation angles, unentangled ansatz yields better accuracy than entangled ansatz, demonstrating a negative impact of entanglement on HQNNs trainability. We also briefly evaluate the HQNNs for classification applications considering other important evaluation metrics for classification problems, namely: precision, recall and F1-score. Finally, we illustrate the significance of our proposed framework by providing recommendations for different constraint scenarios (both alone and combined) on data encoding, ansatz expressibility and entanglement inclusion/removal in the underlying quantum layers.
2308.02189
Spin pumping effect in non-Fermi liquid metals
Spin pumping effect is a sensitive and well-established experimental method in two-dimensional (2D) magnetic materials. We propose that spin pumping effect can be a valuable probe for non-Fermi liquid (NFL) behaviors at the 2D interface of magnetic heterostructures. We show that the modulations of ferromagnetic resonance (FMR) exhibiting power-law scalings in frequency and temperature for NFL metals induced near a quantum critical point (QCP). The exponents are universal parameters inherited from the QCP and reflect the non-quasiparticle nature of spin relaxation in NFLs. We demonstrate the divergent FMR modulations with a particular focus on the Ising nematic QCP at both zero and finite temperatures, which contradicts the conventional Gilbert damping mechanism in normal Fermi liquids. Our theoretical proposal could potentially catalyze a surge in research harnessing insights from spin-dependent non-equilibrium physics at nanoscale, primarily developed for room-temperature functioning spin devices in the rapidly progressing field of spintronics, to access magnetic properties in strongly correlated electron systems.
Xiao-Tian Zhang, Xu-Ping Yao, Yuya Ominato, Long Zhang, Mamoru Matsuo
2023-08-04T08:12:21Z
http://arxiv.org/abs/2308.02189v2
# Spin pumping effect in non-Fermi liquid metals ###### Abstract We propose that the spin pumping effect is a valuable probe to non-Fermi liquid (NFL) metals in correlated electron systems. In a bilayer structure composed of a NFL metal and a ferromagnetic insulator subjected to ac magnetic field, the precessing spins in the FI inject spin current into the NFL metal, which modulates the ferromagnetic resonance (FMR) as a backaction. For NFL metals in the vicinity of a quantum critical point, the FMR modulation shows power-law scaling with the frequency and the temperature, which is closely related to the quantum critical behavior and reflects the non-quasiparticle nature of spin relaxation in NFL metals. _Introduction.--_The breakdown of coherent Fermi-liquid quasiparticles is the most dramatic manifestation of the many-body interaction effect in strongly correlated electron systems, which is dubbed the non-Fermi liquid (NFL) behavior. Coherent quasiparticles can be destroyed by coupling to critical fluctuations of order parameters in the vicinity of quantum critical points (QCPs) [1; 2], where physical quantities exhibit universal scaling form in the quantum critical regime distinct from the Fermi liquid metals. Evidence of NFL behavior has accumulated mainly from electric transport experiments in cuprate superconductors [3; 4], and heavy-fermion compounds [5], etc. More diagnostics of the NFL state is greatly desired. Recently, vigorous studies have been conducted on spin pumping, the transfer of spin angular momentum from the dynamics of magnetization driven by ferromagnetic resonance (FMR) to an adjacent material on a magnet [6; 7]. The spin pumping, at a microscopic level, is initiated by the magnetic exchange interaction at the interface between the magnetization in the magnet and the electron spin in the adjacent material [8; 9; 10]. This interaction leads to spin injection into the material and generates a self-energy of the magnons in the magnet as a backaction, modulating the frequency of FMR and Gilbert damping [11; 12]. The modulated FMR signal carries information about the dynamical spin susceptibility of the thin-film material attached to the ferromagnet, making it a useful probe for studying its spin characteristics [13]. The effectiveness of spin pumping as a sensitive probe, demonstrated by its use in measuring the magnetic phase transition in an antiferromagnetic ultra-thin film [14], shows its potential as a quantum probe in complex systems beyond simple quasiparticle models. This implies vast research potential in using spin pumping effect to explore spin properties in strongly correlated electron systems, including detecting the pairing symmetry in superconductors [12; 15]. In this Letter, we study the spin pumping effect in the bilayer system composed of a NFL metal and a ferromagnetic insulator (FI) thin film schematically shown in Fig. 1. The injected spin current is carried by "non-quasiparticles" in the NFL metal. We focus on NFL metals induced in the quantum critical regime of QCPs and take the two-dimensional (2D) Ising nematic QCP as a concrete example. The FMR frequency and the Gilbert damping coefficient are significantly modulated by the spin pumping into the NFL metal at both zero and finite temperatures. The FMR modulation shows power-law scaling with frequency and temperature, which are closely related to the universal critical exponents of the QCP. Therefore, the FMR-driven spin pumping is a valuable probe to the NFL metals in strongly correlated electron systems. We start with a generic relation between the FMR modulation and the dynamical spin susceptibility in NFL metals. Then, we calculate the FMR modulation of the NFL metal close to the 2D Ising nematic QCP at both zero and finite temperature, highlighting the power-law scaling with frequency and temperature. We hope this work can motivate further study of NFL metals and other strongly correlated electron systems from the spintronics perspective in the future. _FMR modulation in NFL metals._--Let us consider the bilayer structure composed of a NFL metal and a FI thin film shown in Fig. 1. The FI is driven by an external ac magnetic field in resonance with the precession of local spins. The spin current is injected into the NFL metal via the exchange coupling at the interface. The backaction modulates the FMR frequency and the Gilbert damping coefficient, which can be measured in experiments. The full Hamiltonian of the NFL/FI bilayer comprises three parts, \[\mathcal{H}(t)=\mathcal{H}_{\text{FI}}(t)+\mathcal{H}_{\text{ex}}+\mathcal{H}_ {\text{NFL}}. \tag{1}\] The first term describes the FI with the ferromagnetic Heisenberg model subjected to an oscillating magnetic field, \[\begin{split}\mathcal{H}_{\text{FI}}(t)=&-J\sum_{\langle i,j\rangle}\mathbf{S}_{i}\cdot\mathbf{S}_{j}+\gamma_{g}H\sum_{i}S_{i}^{z}\\ &-\gamma h_{\text{ac}}\sum_{i}\big{(}S_{i}^{x}\cos\omega t-S_{i}^ {y}\sin\omega t\big{)},\end{split} \tag{2}\] in which \(J>0\) is the ferromagnetic exchange coupling constant. \(\mathbf{S}_{i}\) stands for the local spin at site \(i\) in the FI. \(H\) is the magnitude of the Zeeman field. \(\gamma_{g}\) (\(<0\)) is the gyromagnetic ratio. \(h_{\text{ac}}\) and \(\omega\) are the amplitude and the frequency of the circularly oscillating external magnetic field, respectively. The second term in Eq. (1) is the exchange coupling at the interface between local spins in the FI and itinerant electrons in the NFL metal, \[\mathcal{H}_{\text{ex}}=\sum_{i}\int d^{2}\mathbf{r}\ J(\mathbf{r},\mathbf{r}_{i})\mathbf{S}_{ i}\cdot\mathbf{s}(\mathbf{r}), \tag{3}\] where \(\mathbf{s}(\mathbf{r})=\frac{1}{2}c_{\alpha}^{\dagger}(\mathbf{r})\mathbf{\sigma}_{\alpha \beta}c_{\beta}(\mathbf{r})\) is the itinerant electron spin operator. The spin-flip terms in the exchange coupling can be written in the reciprocal space as \(\mathcal{H}_{\text{flip}}=\sum_{\mathbf{k},\mathbf{q}}\big{(}J_{\mathbf{k},\mathbf{s}}s_{\mathbf{ q}}^{+}S_{\mathbf{k}}^{-}+\text{H.c.}\big{)}\). The exchange coupling function in the reciprocal space is approximately given by \[|J_{\mathbf{k},\mathbf{q}}|^{2}=\frac{J_{1}^{2}}{N}\delta_{\mathbf{k},\mathbf{q}}+\frac{J_{2}^ {2}l^{2}}{AN}, \tag{4}\] in which \(A\) is the area of the interface, and \(l\) is the spatial correlation length of quenched disorders at the interface. \(J_{1}\) and \(J_{2}\) are related to the mean value and the variance of the exchange coupling [12]. Let us consider the NFL metals induced by coupling electrons \(c_{\alpha}\) (\(\alpha=\uparrow,\downarrow\)) to a critical fluctuating order parameter field \(\phi(\mathbf{r})\). The third term in Eq. (1) denotes the following generic form, \[\begin{split}\mathcal{H}_{\text{NFL}}=&\int\text{ d}^{2}\mathbf{r}\Big{[}c_{\alpha}^{\dagger}(\mathbf{r})\epsilon(-i\partial_{\mathbf{r}})c_{ \alpha}(\mathbf{r})-\lambda O(\mathbf{r})\phi(\mathbf{r})\\ &+\frac{1}{2}\big{(}\partial_{\mathbf{r}}\phi\big{)}^{2}+\frac{r}{2} \phi^{2}\Big{]},\end{split} \tag{5}\] in which \(\epsilon(\mathbf{k})\) is the bare electron dispersion, \(\lambda\) is the coupling constant of electrons and the order parameter field \(\phi(\mathbf{k})\), and \(r\) is the tuning parameter towards the QCP. \(O(\mathbf{r})\) is a fermion bilinear operator, which transforms inversely as \(\phi(\mathbf{r})\) under symmetry actions guaranteeing the invariance of the coupling term. The backaction of the spin pumping renormalizes the magnon Green's function in the FI [8; 9], which is calculated from the Feynman diagram shown in Fig. 2 (a). The retarded magnon Green's function takes the following form, \[G(\mathbf{k},\omega)=\big{(}\omega-\omega_{\mathbf{k}}+i\alpha\omega-\Sigma(\mathbf{k}, \omega)\big{)}^{-1}, \tag{6}\] in which the bare magnon dispersion reads \(\omega_{\mathbf{k}}=Dk^{2}-\gamma_{g}H\), and \(\Sigma(\mathbf{k},\omega)\) is the magnon self-energy due to the backaction of the spin pumping. In a generic bosonic system, the self-energy has an imaginary part proportional to \(\omega\)[16], and the coefficient \(\alpha\) is called the Gilbert damping constant [6]. The FMR modulation is determined by the uniform component (\(\mathbf{k}=0\)) of the magnon Green's function, in which the pole dictates the resonance condition, \(\omega+\gamma_{g}H-\text{Re}\Sigma_{\mathbf{k}=0}(\omega)=0\), thus the resonance frequency is shifted by \(\delta H=\gamma_{g}^{-1}\text{Re}\Sigma_{\mathbf{k}=0}(\omega)\). The imaginary part of the self-energy leads to an enhanced Gilbert damping coefficient, \(\delta\alpha=-\omega^{-1}\text{Im}\Sigma_{\mathbf{k}=0}(\omega)\). The magnon self-energy can be calculated perturbatively in terms of the external oscillating magnetic field \(h_{\text{ac}}\) and the exchange coupling \(J_{\mathbf{k},\mathbf{q}}\)[8; 10], \[\Sigma(\mathbf{k},\omega)=-\sum_{\mathbf{q}}|J_{\mathbf{k},\mathbf{q}}|^{2}\chi(\mathbf{q},\omega), \tag{7}\] with \(\chi(\mathbf{q},\omega)\equiv i\int dte^{i(\omega+i0^{+})t}\Theta(t)([s_{\mathbf{q}}^{+ }(t),s_{-\mathbf{q}}^{-}(0)])\) being the retarded dynamical spin susceptibility for NFL metals. Inserting the exchange coupling function in Eq. (4), the magnon self-energy is given by [12] \[\Sigma_{\mathbf{k}=0}(\omega)=-\frac{J_{1}^{2}}{N}\chi_{\text{uni}}(\omega)-\frac{ J_{2}^{2}l^{2}}{AN}\chi_{\text{loc}}(\omega). \tag{8}\] Here, the uniform and the local components of the dynamical spin susceptibility, defined by \(\chi_{\text{uni}}(\omega)=\chi(\mathbf{q}=0,\omega)\) and \(\chi_{\text{loc}}(\omega)=\sum_{\mathbf{q}}\chi(\mathbf{q},\omega)\), dominate in opposite limits set by the ratio \(J_{1}\sqrt{A}/(J_{2}l)\). In the vicinity of the QCP at \(r=r_{c}\), the dynamical spin susceptibility takes the following universal scaling form at zero temperature, \(\chi(\mathbf{q},\omega,r-r_{c})=\xi^{d_{\chi}}\chi(\mathbf{q}\xi,\omega\xi^{z})\) Figure 1: Schematic plot of the NFL/FI bilayer structure for the FMR-driven spin pumping experiment. The pink arrow in the FI indicates the spin \(\mathbf{S}\) precessing in the external ac magnetic field \(\mathbf{h}_{\text{ac}}\). The blue arrows in the NFL metal indicate the itinerant electrons exchanging spin angular momentum at the interface with magnons in the FI layer. The gradually fainted green balls illustrate the incoherent quasiparticles in the NFL metal. in which \(\xi\) is the spatial correlation length, \(d_{\chi}\) is the scaling dimension of the spin susceptibility, and \(z\) is the dynamical exponent. At the QCP, the correlation length diverges, leading to the reduced scaling form, \(\chi(\mathbf{q},\omega,0)=\omega^{-d_{\chi}/z}\tilde{\chi}(\mathbf{q}/\omega^{1/z})\). Therefore, the uniform and the local components show different power-law scaling forms with the frequency, \[\chi_{\rm uni}(\omega)\sim\omega^{-d_{\chi}/z},\quad\chi_{\rm loc}(\omega)\sim \omega^{(2-d_{\chi})/z}. \tag{9}\] The critical exponents \(d_{\chi}\) and \(z\) can be extracted from their contributions to the resonance frequency shift and the enhanced Gilbert damping coefficient. The power-law scaling with the frequency is in sharp contrast to the conventional Gilbert damping mechanism, thus reflects the NFL nature at the QCP. The quantum critical regime extends to nonzero temperatures above the QCP, where a different set of critical exponents \(d^{\prime}_{\chi}\) and \(z^{\prime}\) may emerge in the following scaling form, \[\chi(\mathbf{q},\omega,r-r_{c},T)=L_{\tau}^{d^{\prime}_{\chi}/z^{\prime}}\chi\big{(} \mathbf{q}L_{\tau}^{1/z^{\prime}},\omega L_{\tau},\xi(T)/L_{\tau}^{1/z^{\prime}} \big{)}. \tag{10}\] Here, \(L_{\tau}=1/(k_{B}T)\) is a characteristic scale in the imaginary time direction. Therefore, we have \[\chi_{\rm uni}(\omega,T)= T^{-d^{\prime}_{\chi}/z^{\prime}}\chi\big{(}\omega/T,\xi(T)T^{1/z^{ \prime}}\big{)}, \tag{11}\] \[\chi_{\rm loc}(\omega,T)= T^{(d-d^{\prime}_{\chi})/z^{\prime}}\tilde{\chi}\big{(}\omega/T,\xi(T)T^{1/z^{ \prime}}\big{)}. \tag{12}\] The frequency and temperature dependence of the FMR modulation in the quantum critical regime at finite temperature is also a characteristic feature of the NFL metal. _FMR modulation at 2D Ising nematic QCP.--_Let us take the 2D Ising nematic QCP as a concrete example, which could be relevant to experiments in underdoped cuprates [4] and iron-based superconductors [17]. The order parameter field \(\phi(\mathbf{r})\) changes sign under the four-fold lattice rotation transformation. The fermion bilinear operator \(O(\mathbf{r})\) in Eq. (5) is given by \[O(\mathbf{r})=\frac{1}{A}\sum_{\mathbf{q}}\sum_{\mathbf{k},\alpha}d_{\mathbf{k}}c^{\dagger}_{ \mathbf{k}-\mathbf{q}/2,\alpha}c_{\mathbf{k}+\mathbf{q}/2,\alpha}e^{i\mathbf{q}\cdot\mathbf{r}}, \tag{13}\] where \(d_{\mathbf{k}}=\cos k_{x}-\cos k_{y}\) is the \(d\)-wave form factor. The four-fold lattice rotation symmetry is spontaneously broken in the Ising nematic ordered phase. Coherent quasiparticles are destroyed by critical fluctuations of the Ising nematic order parameter \(\phi(\mathbf{r})\) in the quantum critical regime. Let us consider the smooth NFL/FI interface for simplicity. In this case, \(J_{1}\gg J_{2}l/\sqrt{A}\), thus the uniform component of the dynamical spin susceptibility \(\chi_{\rm uni}(\omega)\) dominates, which can be calculated up to one-loop order as [see Fig. 2 (b)] \[\chi(\omega)=i\sum_{\alpha\beta}\sigma^{+}_{\alpha\beta}\sigma^{-}_{\beta \alpha}\sum_{\mathbf{k}}\int\frac{d\omega^{\prime}}{2\pi}g_{\alpha}(\mathbf{k},\omega ^{\prime})g_{\beta}(\mathbf{k},\omega^{\prime}+\omega). \tag{14}\] Here, the renormalized fermion propagator of the NFL metal is defined by \(g_{\alpha}(\mathbf{k},\omega+i0^{+})=-i\int d\tau e^{i(\omega+i0^{+})\tau}\langle \mathcal{T}_{c}\mathbf{c}_{\mathbf{k},\alpha}(\tau)c^{\dagger}_{\mathbf{k},\alpha}(0)\rangle\) where \(\mathcal{T}_{c}\) is the time ordering on the Keldysh contour. The NFL metal near the Ising nematic QCP can be captured by the patch theory [18; 19], in which the momenta near the Fermi surface are decomposed into patches \(\mathbf{k}=\mathbf{k}_{F,s}+\delta\mathbf{k}\), in which \(s\) is the patch index, while \(\mathbf{k}_{F,s}\) and \(\delta\mathbf{k}\) are the Fermi momentum and a small deviation at the \(s\)-th patch, respectively. Denoting the radial and the tangential components of \(\delta\mathbf{k}\) by \(\delta k_{x}\) and \(\delta k_{y}\), the bare electron propagator is given by \(g_{\alpha}^{\prime}(\mathbf{k},\omega+i0^{+})=\big{(}\omega+i0^{+}-v_{F}\delta k_{ x}-\kappa\delta k_{y}^{2}/2\big{)}^{-1}\). The boson polarization is induced by the electron-hole excitations near the Fermi surface captured by Fig. 2 (c) up to one-loop order, which yields \(-ic_{B}\omega/|q_{y}|\). The renormalized boson propagator is then adopted to calculate the electron self-energy [see Fig. 2 (d)] in the same spirit as the Hertz-Millis theory of ferromagnetic metals [20; 21]. The dynamically generated electron self-energy overwhelms the bare kinetic term in the low-energy limit \(\Sigma_{e}(\mathbf{k}_{F},\omega)\sim-d_{\mathbf{k}_{F}}^{2}|\omega|^{2/3}({\rm sgn}( \omega)+i/\sqrt{3})\). The NFL nature is captured by the vanishing quasiparticle weight in the \(\omega\to 0^{+}\) limit for \(k_{F,x}\neq\pm k_{F,y}\): \(Z_{\mathbf{k}_{F}}(\omega)=\big{(}1-\partial_{\omega}{\rm Re}\Sigma_{e}(\mathbf{k}_{F},\omega)\big{)}^{-1}\sim d_{\mathbf{k}_{F}}^{-2}\omega^{1/3}\). The renormalized electron propagator is given by [22] \[g_{s}(\mathbf{k},k_{0})=\big{(}ic_{F}d_{\mathbf{k}_{F,s}}^{2}|\omega|^{2/3}-v_{F}k_{x}- \kappa k_{y}^{2}/2\big{)}^{-1} \tag{15}\] for \(\omega<\omega_{c}=c_{F}^{3}\) with \(c_{F}=\lambda^{2}/(\sqrt{3}\pi v_{F}c_{B}^{1/3})\). The dy Figure 2: Feynman diagrams for perturbative calculations. (a) Dyson equation for the magnon Green’s function. The thin and the thick wavy lines are the bare and the renormalized magnon Green’s functions, respectively. The circle stands for the magnon self-energy \(\Sigma\). (b) The magnon self-energy is equivalent to the dynamical spin susceptibility of the NFL metal, which can be calculated with the renormalized electron propagator (thick lines) up to one-loop order. (c-d) One-loop corrections to (c) the boson polarization function and (d) the electron self-energy in the NFL metal. The straight and the wavy lines are the electron and the boson propagators, respectively. namical exponent \(z=3\) is not modified until the three-loop order [18]. The uniform spin susceptibility evaluated with Eq. (15) has a scaling dimension \(d_{\chi}=d-z\). The retarded magnon self-energy is given by \[\Sigma_{\mathbf{k}=0}(\omega)=\frac{4\pi^{2}J_{1}^{2}\rho_{F}}{N}\Big{|}\frac{ \omega}{\omega_{c}}\Big{|}^{1/3}\Big{[}\sqrt{3}-i\text{sgn}(\omega)\Big{]}, \tag{16}\] in which \(\rho_{F}\) is the density of states at the Fermi energy. The resonance frequency shift and the enhanced Gilbert damping have the following scaling forms in the low-energy limit, \[\delta\alpha\sim|\omega|^{-2/3},\quad\delta H\sim\gamma_{g}^{-1}|\omega|^{1/3}. \tag{17}\] These are schematically plotted in Fig. 3. The diverging coefficient \(\delta\alpha\) in the zero-frequency limit indicates a novel spin relaxation mechanism, which is in sharp contrast to the conventional Gilbert damping. _FMR modulation at finite temperature._--The finite-temperature properties of the NFL metal close to the Ising nematic QCP cannot be simply inferred from the zero-temperature results by assuming the \(\omega/T\) scaling [23; 24]. It has been proposed based on the renormalization group calculations [25] that the boson dynamics at finite temperature is characterized by a dynamical exponent \(z=2\) instead of \(z=3\) at zero temperature, which has been confirmed by quantum Monte Carlo simulations [26]. The renormalized boson and fermion propagators are given by [24], \[g(\mathbf{k},\omega+i0^{+})= \big{(}\omega+i0^{+}-\epsilon_{\mathbf{k}}+i\gamma_{\mathbf{k}_{F}}(T) \big{)}^{-1}, \tag{18}\] \[D(\mathbf{q},\Omega+i0^{+})= \bigg{(}\xi^{-2}(T)+a|\mathbf{q}|^{2}-ib\frac{\Omega}{\gamma(T)}\bigg{)} ^{-1}. \tag{19}\] where \(a\) and \(b\) are nonuniversal constants. The correlation length \(\xi(T)\) and the electron scattering rate \(\gamma(T)\) scale as \(\xi(T)^{-1}\sim\sqrt{T/\ln(\epsilon_{F}/T)}\) and \(\gamma(T)\sim\sqrt{T/\ln(\epsilon_{F}/T)}\) in the quantum critical regime. The imaginary part of the retarded self-energy \(\gamma_{\mathbf{k}_{F}}(T)=-\text{Im}\Sigma(\mathbf{k}_{F},\omega=0;T)\) is calculated in an asymptotic temperature range \(T\ll T_{0}\), \(\gamma_{\mathbf{k}_{F}}(T)=-(\lambda^{2}d_{\mathbf{k}_{F,x}}^{2})(4v_{F}\sqrt{a})T\xi(T)\), where the upper bound \(T_{0}\) is specified by the condition \(|\gamma_{\mathbf{k}_{F}}(T)|/v_{F}\ll\xi^{-1}(T)/\sqrt{a}\), thus \(T_{0}=\epsilon_{F}e^{-\lambda^{2}/v_{F}^{2}}\). These results are valid for \(\omega<T\ll T_{0}<\omega_{c}\), while the zero-temperature results presented before should be valid in the low-temperature regime for \(T<\omega\). The real part of the electron self-energy can be obtained from the Kramers-Kronig relation as \(\text{Re}\Sigma_{e}(\mathbf{k}_{F},\omega)\sim-d_{\mathbf{k}_{F}}\omega\xi(T)^{-1}\) for \(|\omega|<T<\omega_{c}\). The quasiparticle weight given by \[Z(\mathbf{k}_{F},\omega;T)\simeq d_{\mathbf{k}_{F}}^{-2}\xi(T)^{-1},\quad k_{F,x}\neq \pm k_{F,y}, \tag{20}\] vanishes approaching the QCP due to the divergent \(\xi(T)\). The renormalized electron propagator in Eq. (18) is used to evaluate the dynamical spin susceptibility and the magnon self-energy at finite temperature, which yields \[\Sigma_{\mathbf{k}=0}(\omega)=\frac{\sqrt{2}\pi J_{1}^{2}\rho_{F}}{N}\Big{(}\frac {4}{\pi}\xi^{-\frac{1}{2}}(T)+i\text{sgn}(\omega)\sqrt{\frac{|\omega|}{\gamma( T)}}\Big{)}. \tag{21}\] Accordingly, the FMR frequency shift and the enhanced Gilbert damping coefficient have the following scaling form \[\delta\alpha\sim-\big{(}|\omega|\gamma(T)\big{)}^{-\frac{1}{2}},\qquad\delta H \sim\gamma_{g}^{-1}\xi^{-\frac{1}{2}}(T), \tag{22}\] which are schematically plotted in Fig. 3. _Summary._--We propose that the FMR-driven spin pumping effect in the NFL/FI bilayer structure is a valuable probe to the NFL metal. In the NFL metal close to an Ising nematic QCP, the resonance frequency shift and the enhanced Gilbert damping exhibit power-law scaling with frequency and temperature both at the QCP and in the finite-\(T\) quantum critical regime, which are closely related to the critical exponents. Therefore, the spintronics experiments can shed light on the non-quasiparticle nature of spin relaxation the NFL metals. This work is partially supported by China's Ministry of Science and Technology (Grant No. 2022YFA1403900) and National Natural Science Foundation of Figure 3: Schematic illustrations of the FMR modulations scaling forms. The frequency and temperature are rescaled by a characteristic temperature \(T_{0}\) (\(\gg T\)). (a) Temperature dependence of the resonance frequency shift \(\delta H(T)\). The finite temperature(\(T>\omega\)) \(\delta H(T)\) shows a rapid increase approaching low-\(T\) owing to the divergent correlation length. (b) Frequency dependence of the enhanced Gilbert damping coefficient \(|\delta\alpha(\omega)|\). The magnitude of \(\delta\alpha\) displays different scaling forms in the low-frequency(\(\omega<T\), left) and high-frequency(\(\omega>T\), right) regions. China(NSFC) (Grant No. 11920101005). L.Z. is supported by the National Key R&D Program of China (Grant No. 2018YFA0305800), the NSFC (Grant No. 12174387), the Strategic Priority Research Program of CAS (Grant No. XDB28000000), and the CAS Youth Innovation Promotion Association. M.M. is supported by the Priority Program of Chinese Academy of Sciences under Grant No. XDB28000000, and by JSPS KAKENHI for Grants (Nos. 21H01800, and 21H04565, and 23H01839) from MEXT, Japan.
2307.05063
A "Game of Like" : Online Social Network Sharing As Strategic Interaction
We argue that behavioral science models of online content-sharing overlook the role of strategic interactions between users. Borrowing from accuracy-nudges studies decision-theoretic models, we propose a basic game model and explore special cases with idealized parameter settings to identify refinements necessary to capture real-world online social network behavior. Anticipating those refinements, we sketch a strategic analysis of content amplification and draw a connection between Keynes's beauty contest analogy and recent social-epistemological work on echo chambers. We conclude on the model's prospects from analytical and empirical perspectives.
Emmanuel J. Genot
2023-07-11T07:10:55Z
http://arxiv.org/abs/2307.05063v1
# A "Game of Like" : Online Social Network Sharing As Strategic Interaction ###### Abstract We argue that behavioral science models of online content-sharing overlook the role of strategic interactions between users. Borrowing from accuracy-nudges studies decision-theoretic models, we propose a basic game model and explore special cases with idealized parameter settings to identify refinements necessary to capture real-world online social network behavior. Anticipating those refinements, we sketch a strategic analysis of content amplification and draw a connection between Keynes' "beauty contest" analogy and recent social-epistemological work on echo chambers. We conclude on the model's prospects from analytical and empirical perspectives. ## 1 Motivations Online search engines garnered attention from social epistemologists in the early days of the commercial Internet, when A. Goldman analyzed them as retrieval systems in [6]. Later, T.A. Simpson extended Goldman's analysis into a model of surrogate expertise in [22] in direct response to Google Search personalization algorithms. Recently, epistemologists have turned to online social networks (hereafter osn), fulfilling a similar function of online information sources, with even greater personalization. Notably, T.C. Nguyen [13] provided a much-needed conceptual analysis of osn epistemic bubbles and echo chambers, and C. O'Connor and J.O. Weatherall [15] proposed that applying network epistemology to osn could address limitations of contagion models of online information spread. At the same time, behavioral scientists have independently addressed the limitations of contagion models by looking at osn-sharing through a rational choice lens. Particularly, studies that shaped the field and its public perception have manifested a Bayesian influence. Widely publicized studies like [24] (a _Science_ cover story: "How lies spread-On social media, fake news beats the truth") and [17] (a _Nature_ cover story: "Misinformation-A prompt to check accuracy cuts online sharing of fake news") appealed to Bayesian decision theory and expected utility theory to rationalize osn content-sharing and interpret diffusion-model data analyses.1 Decision theory best models decisions under uncertainty about the state of nature, but osn-sharing outcomes depend on reactions from a community of users. The preferred model for decisions _under uncertainty about other agents' decisions_ is game theory, and while the formalisms are inter-translatable, decision theory is less expressive. As pointed out by J. Harsanyi, the game-to-decision direction loses in translation the explicit expression of mutual expectations of rationality (via solution concepts [7, 8]). Compounding this issue, decision-theoretic models from behavioral science studies (such as [24, 17]) were not proposed as translations for games and thus did not explicitly translate mutual expectations into constraints on decision-makers priors (as per the games-to-decision direction, cf. [9, 7]), leaving their role almost entirely unanalyzed. Unfortunately, social epistemology offers no ready-made solution. Nguyen's analysis is strategic but informal and cannot bear on the data without a formal reconstruction, while network epistemology does not address strategic expectations formally. Footnote 1: The _same-to-decision direction_ is implicitly decision-theoretic, as utilities take as argument proxies for individual choices (content shared) rather than strategic profiles (cf. Section 2). The absence of a strategic analysis of osn-sharing motivated the approach presented in the remainder of this paper. Section 2 builds upon behavioral science decision-theoretic models to propose a simplified game model for osn-sharing, differentiating between content-based and engagement-based preferences. Section 3 examines special cases that highlight the model's salient features and limitations and identifies extensions necessary to reconstruct real-life osn users' behavior. Section 4 extrapolates informally and proposes that special cases of osn-sharing elicit strategy selection akin to reasoning in guessing games and could illuminate content amplification scenarios, including Nguyen's epistemic bubbles and echo chambers. We conclude with the analytic prospects of a strategic re-interpretation of extant data, and a suggestion for the design of new studies. ## 2 A Game of Like Behavioral science studies of osn-sharing often acknowledge the role of strategic interactions between users but have so far fallen short of factoring in their contribution. Pennycook _et al._ (2021) is a paradigmatic example: the authors note that "the desire to attract and please followers/friends or to signal one's group membership" (17, p. 591) contributes to content-sharing decisions, but propose a utility function limited to personal preferences for content having such-and-such characteristics.2 A natural first step toward a strategic model is thus to introduce the missing terms, then specify a game based on this completed picture. For simplicity, we can let \(u_{p_{i}}(\cdot)\) denote \(i\)'s _personal utility_, expressing how some content aligns with \(i\)'s personal preferences for content having such-and-such characteristics, with the understanding that this alignment could be further analyzed along multiple dimensions (as in [17], cf. n. 2). To that, we add a term that we denote \(u_{s_{i}}(\cdot)\), for the _social utility_, expressing how reactions to the content shared--'likes,' re-shares, comments, etc.--satisfy \(i\)'s preferences for social validation or, more generally, engagement. Finally, we introduce a parameter, that we denote \(\gamma\), to represent the relative weights of \(i\)'s personal preferences for content and social preferences for engagement. In the decision-theoretic model of [17], the only action being'sharing,' actions and content are indiscernible, and the utility function can range over the content. In a game-theoretic model, the utility function ranges over _strategy profiles_, and we must distinguish content from actions. As a basic model, we consider an \(n\)-player repeated game \(G\) in strategic form with a set \(P\) of players, where any \(i\in P\) can, at each round, 'like' or'share' content. As simplifications, we assume that players only share new content at round \(r=0\), so any'share' action at round \(r\geq 1\) is a'reshare.' Under this simplification, we can specify a set of (original) _content_\(C_{G}=\{c_{1},\ldots,c_{n}\}\) for \(G\), where \(c_{i}\) is the content introduced at \(r=0\) by agent \(i\). The _action set_ for some player \(i\) at some round \(r\) is \(A_{i}=\{\mathrm{like}_{i}(x,y),\mathrm{share}_{i}(x,y):x\in C_{G},y\in P\}\), where \(y\) is a player who shared \(x\) at some round \(r^{\prime}<r\), and from whom \(i\) is re-sharing \(x\). Note that, under our simplifying assumption, at \(r=0\), there is nothing to 'like.' If all actions are visible to all players, no restriction is imposed on \(x\) or \(y\). Explicitly: any content shared by some player at round \(r\) can be reshared by any other player at round \((r+1)\). This amounts to a game with perfect information, adequate for demonstrating the strategic standpoint's fruitfulness but insufficient to model real-world osns (see below). Our earlier discussion of personal and social preferences yields a utility function, as below. \[U_{i}(\cdot) = \gamma_{i}u_{p_{i}}(\cdot)+(1-\gamma_{i})u_{s_{i}}(\cdot) \tag{1}\] Intuitively, in the decision-theoretic approach, \(u_{p_{i}}(c)\) expresses \(i\)'s preferences as a function of the distance between \(c\) and \(i\)'s ideal content located in a multi-dimensional space whose dimensions correspond to \(i\)'s criteria of evaluation. In a round of \(G\), the argument of (1) is a strategy profile \(\overline{\sigma}=(\sigma_{1},\ldots\sigma_{n})\) where \(\sigma_{i}\) is player \(i\)'s strategy at that round. Following the same intuition, \(u_{p_{i}}(\overline{\sigma})\) can be thought of as a function of the relative distances between \(i\)'s ideal content and the 'community content' \(c_{1},\ldots,c_{n}\), or some weighed sum thereof, representing how close \(C_{G}\) is to \(i\)'s ideal.3 So construed, and under our simplification, \(u_{p_{i}}(\overline{\sigma})\) remains constant after \(r=0\). Again, this personal preference model is sufficient for our purposes. Still, in a real-world osn, overall engagement could indirectly affect \(u_{p_{i}}(\overline{\sigma})\) (\(i\) may care for overall visibility, and in a model with incomplete information, visibility would depend on engagement, see below). Footnote 3: Note that \(i\) may be indifferent to others’ strategies, in which case \(u_{p_{i}}(\overline{\sigma})=u_{p_{i}}(\overline{\sigma}^{\prime})\) whenever \(i\)’s strategy \(\sigma_{i}\) is the same in \(\overline{\sigma}\) and \(\overline{\sigma}^{\prime}\). In a decision-theoretic model (e.g., extending that of [17]) \(u_{s_{i}}(c)\) would be a function of the (accumulated) engagement from users other than \(i\), with \(c\) (when shared by \(i\)). In \(G\), \(u_{s_{i}}(\overline{\sigma})\) at round \(r\) is, in part, a function of how other players have engaged in \(r\) with the content \(i\) shared at some \(r^{\prime}>r\); and in part, of the accrued social utility inherited from earlier rounds. The candidate functions for computing either component are too numerous to review here, and which one applies to particular cases may be empirically constrained by algorithms. Still, it suffices for our purposes to note that, at some round \(r\), \(u_{s_{i}}(\overline{\sigma})\) does not'reset' \(i\)'s social utility; that the contribution of 'likes' and (re)shares may vary; and that evaluations may depend on players' knowledge.4 For definiteness, we can assume a function \(u_{s_{i}}(\overline{\sigma})\) that ranks higher strategy profiles where content \(i\) shared (or reshared) is both liked and reshared rather than liked or shared (alone)--i.e., a function that takes some weighed sum of 'likes' and (re)shares, rather than an average (or an argmax). This justifies the shorthand "game of like"--as a nod to J. Conway's "game of life" [5]--since the preferred social outcome, over repetitions, is like-and-reshare, a strengthened form of 'like' ("game of share" would be equally justified, but the homog Let us conclude this section with a few words on our model's (self-imposed) limitations. In real-world osn, new content can be introduced at any time, and players have only a partial picture of the content they can reshare. A more realistic "game of like" would have imperfect information (e.g., as a model of bounded attention): any content \(c\) would be available to a player \(i\) to react to at \(r\) with a certain probability, depending on overall engagement with \(c\) prior to \(r\). In such a model, \(i\) could be aware of some \(c\), close to \(i\)'s ideal content, and care for its visibility (the probability of \(c\) being available to other players) and thus for other players' engagement with \(c\). Conversely, \(i\) might not worry much about some \(c\), far removed from their ideal, as long as \(c\)'s probability of being available to other players would remain low. Still, a simplified model with perfect information already acknowledges the relevance of overall interaction by virtue of the argument of \(\gamma_{i}u_{p_{i}}(\cdot)\) being a strategy profile, and thus furthers goal of identifying strategic components of osn-sharing. Hence, our "game of like" with perfect information is a proof-of-concept and a foundation for future developments. The next section considers special cases, varying players' \(\gamma\) types, to determine which refinements would be necessary to turn the proof-of-concept model into a model for real-world osns. ## 3 Strategy Selection Let us begin with the limit case where, for all \(i\in P\), \(\gamma_{i}=1\), denoted \(G_{\gamma=1}\) for later reference. We could distinguish _a priori_ between a variety of subcases, depending on whether players have non-equivvocal prior beliefs about other players' personal preferences; and/or whether they have non-equivvocal prior beliefs about other players' \(\gamma\). However, the differences between those subcases are inconsequential. To see this, assume an arbitrary player \(i\) in \(G_{\gamma=1}\) who _does have_ non-equivvocal prior beliefs about other players' personal preferences for content and \(\gamma\)-type (say, following a round of cheap talk). _Ex hypothesis_, at any round \(r\) of \(G_{\gamma=1}\), for any \(i\in P\), \(U_{i}(\overline{\sigma})\)=\(u_{p_{i}}(\overline{\sigma})\). Hence, \(i\)'s best strategy at round \(r=0\) is to share whatever content \(c_{i}\) available to them that is closest to their ideal content (according to their dimensions of evaluation). Beliefs about other players' preferences and \(\gamma\) type do not affect that choice. Hence, \(i\) would choose the same content _without_ any information about other players. Since the only assumption we made about \(i\) is that \(\gamma_{i}=1\), this generalizes to any \(i\in P\) for \(G_{\gamma=1}\) (and yields an equilibrium solution in the basic model for \(r=0\) in \(G_{\gamma=1}\)). Under the assumption that content is only introduced at round \(r=0\), the distance between the 'community content' and any player \(i\)'s ideal content remains constant across repetitions, whatever their strategy at \((r\geq 1)\). Relaxing this simplifying assumption is one way to model how players can attempt to drive community content closer to their preferences by sharing more content closer to their ideal at any new round \((r\geq 1)\). But this would not bring the model closer to real-world osn, as "spamming" content is only efficient if the content is visible, bringing us back to a version of the "game of like" with imperfect information. Conversely, a "game of like" _without_ content introduced at round \(r>0\), and with \((\gamma=1)\)-players only, would be susceptible to manipulations by coalitions of like-minded players, who would want to see some content promoted. Therefore, relaxing the assumption that no new content is introduced past \(r=0\) would not be especially illuminating without an explicit topological model of content distances and auxiliary assumptions about how variable availability of content correlates with engagement. In a second limit case, denoted \(G_{\gamma=0}\), all \(i\in P\) are such that \(\gamma_{i}=0\). Unlike \(G_{\gamma=1}\), player priors about others can significantly impact the game. To see this, consider the limit subcase where players' \(\gamma\) type is common knowledge. Then, \(G_{\gamma=0}\) becomes a game of reciprocation-or-retaliation or _quid pro quo_, where players either trade reciprocal 'likes' and re-shares, or ignore one another, and where content becomes inconsequential (so that it matters little whether new content can be introduced after \(r=0\) or not). To see this, consider a simple \(G_{\gamma=0}\) case with two players \(i\) and \(j\), content introduction restricted to \(r=0\), and (as a simplification) no marginal utility for 'liking' or re-sharing one's content. Hence, the only utility \(i\) and \(j\) can get is from the other player's liking or re-sharing their content. At \(r=0\), they share (resp.) \(c_{i}\) and \(c_{j}\). At \(r=1\), \(i\) (\(j\)) can like or re-share \(c_{j}\) (\(c_{i}\)), or do nothing (for definiteness: repeating their move from \(r=0\)). If either does nothing at \(r=1\), the other can retaliate at \(r=2\) by playing nothing; otherwise, they can reciprocate and play the remaining action (like, or reshare) they did not play at \(r=1\). With no introduction of new content, they can repeat the cycle over \(c_{i}\) and \(c_{j}\). If new content is allowed, they can repeat cycles of three rounds (introduction, like, or re-share, then reciprocation or retaliation) to accrue utility. The strategy just described turning the "game of like" into a game of reciprocation-or-retaliation, and resembles the _tit-for-tat_ strategy in the repeated Prisoner's Dilemma. As extreme as it is, this case suggests that when \((\gamma=0)\)-players have non-equivocal beliefs about one another's \(\gamma\) type, the closer the players are to having correct beliefs, the closer \(G_{\gamma=0}\) resembles a _quid pro quo_ game. Assume now a subcase of \(G_{\gamma=0}\) where players have equivocal beliefs about \(\gamma\) types--i.e., do not _know_ that other players are \((\gamma=0)\)-players. If they also have equivocal beliefs about other players' personal preferences for content, the rational choice (for any \(i\)) is a mixed strategy assigning equal weight to any content \(i\) has access to at \(r=0\) and hope for the best. Lifting the restriction on content introduction is more consequential than in the \(G_{\gamma=1}\) case, as repeated observations of others' sharing behavior are necessary to infer their personal preferences for content from their actions or their preferences for engagement. Since, _ex hypothesis_, no player in \(G_{\gamma=0}\) actually cares for content (as long as they receive engagement), inferences from sharing behavior to personal preferences could result in 'false consensus' situations if players gradually amplify a salient type of content, leading to an echo chamber (in the sense of [13]; cf. Section 4). However, even without lifting the assumption, we can form a picture of a repeated game with new content by assuming a round of cheap talk prior to \(r=0\), during which players can form priors (or update equivocal priors) about other players' preferences based on observed behavior. Suppose that some candidate content appears salient for eliciting positive reactions--say, pictures of cats in precarious positions. Then, upon engaging in \(G_{\gamma=0}\), players could anticipate similar pictures to elicit 'like' and'share' reactions, skewing the content shared at \(r=0\) toward pictures of cats in precarious positions. Thus, it would appear that a majority of players favor cat pictures. Even without the introduction of new content, this could lead to cat pictures being increasingly reshared at every \(r\geq 1\) without (_ex hypothesis_) any player selecting their strategy out of personal preference for that type of content, resulting in a 'false consensus.' Again, as with \(G_{\gamma=1}\), how engagement could impact visibility appears more critical than whether or not content may be repeatedly introduced. Subsequently, the need to accommodate \((\gamma<1)\)-players does not require further refinements beyond those suggested by \(G_{\gamma=1}\): imperfect information and an explicit content evaluation and comparison model. The latter would, in particular, suffice for representing how \((\gamma<1)\)-players form (and revise) beliefs about the majority's opinion, instrumental in selecting strategies for eliciting engagement. ## 4 Mutual Expectations and Social Influence Our remark about the majority's opinion being of import to \((\gamma<1)\)-players may remind the reader of J.M. Keynes' "Beauty Contest" analogy for professional investment, quoted below. [P]rofessional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole. [...] It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. [9, p. 156] The parallel is intentional: we propose that Keynes' "third degree" describes the reasoning of a \((\gamma<1)\)-player selecting a strategy that could elicit (re)share reactions from other \((\gamma<1)\)-players who would want to elicit 'like' reactions. More generally, a "game of like" with some proportion of \((\gamma<1)\)-players relates to _guessing games_, proposed as a generalization of J.M. Keynes' beauty contest by R. Nagel (first, in [11]; see [1] for an overview of empirical studies). A formal reconstruction of this suggestion would require an explicit model of preference distances (already identified as a necessary refinement for our basic model to capture real-world osns), but we can offer an informal sketch. Asssume the standpoint of a player of type \(\gamma=0\), that we will denote \(\gamma_{0}\), reasoning about other players of a "game of like."5 When choosing between multiple options for content to share, when \(\gamma_{0}\)'s goal is accruing "like" reactions, \(\gamma_{0}\) is equally well-off: (_i_) choosing based on their own preferences for content, or: (_ii_) choosing based on the majority's preference (e.g., as inferred following a round of cheap talk) when preferences agree; and: (_iii_) possibly worse off, when preferences disagree. In case (_iii_), \(\gamma_{0}\) would be better off switching to an option that agrees with the majority's (displayed) preferences. Thus, options based on \(\gamma_{0}\) preferences are _weakly dominated_ by options based on the majority's preferences (as inferred by \(\gamma_{0}\)). Consider now how \(\gamma_{0}\) would approach selecting a strategy for eliciting "share" reactions; as simplification, assume that \(\gamma_{0}\) believes that most players are like him and care more for engagement than for content. Then, \(\gamma_{0}\) expects that most players would (re-)share content to elicit (at least) 'like' reactions. If \(\gamma_{0}\) assumes that those players are rational, they expect those players to reason to (_i-iii_) above. From there, \(\gamma_{0}\) can conclude that selecting an option based on their own preferences for content would yield the same payoff as choosing based on the majority's opinion of the majority's (displayed) preferences for content (if in agreement); and possibly a worse payoff (if in disagreement). In the latter case, \(\gamma_{0}\) would be better off switching options. Hence, a selection based on the majority's opinion of the majority's (displayed) preferences _weakly dominates_ a selection based on \(\gamma_{0}\)'s preferences for content alone. Footnote 5: We assume that the agent is a \((\gamma=0)\)-player rather than a weaker \((\gamma<1)\) to avoid dealing with correlations between personal preferences for content and preferences for engagement. Otherwise, we would have to factor in the cost of sharing contrary-to-preference content, which could offset the benefit of engagement. The argument just sketched guesstimates too many important parameters to be general--e.g., the respective distribution of \(\gamma\) types among the players, the cost of seeking social feedback with contrary-to-personal preferences for other players that \(\gamma_{0}\), how \(\gamma_{0}\) would arrive at estimates for those, etc. However, it suffices to motivate a comparison between a subclass of "game of like," Keynes' beauty contest, and Nagel's guessing games. And empirical motivation for this comparison would be the reconstruction of the real-world osn behavior colloquially called'signal boosting,' whereby users of an osn leverage the influence of public figures ("influucers") with a larger following base, tagging them in hope to be re-shared. A well-reported example is a November 13, 2020 Twitter video featuring actor R. Quaid reading aloud an earlier tweet from then-US president D.J. Trump under a stroboscopic light, with an over-dramatic voice. Trump (unsurprisingly) reshared Quaid's video, which then accrued millions of views from Trump's followers, reaching beyond Quaid's following. In fact, we have already encountered in Section 3 a variant of (involuntary) signal-boosting behavior, as a pathway to amplification (false consensus) when discussing \(G_{\gamma=0}\). This seems grounds enough to suggest that a "game of like" model of osn with influucers could contribute to a formal theory of online amplification, echoing Keynes' motivations for the beauty-contest analogy (speculative asset bubbles). Another possible contribution that circles back to social epistemology is a possible formal reconstruction of Nguyen's conceptual analysis [13]. Nguyen proposes that _epistemic bubbles_ occur when individuals receive limited exposure to information sources challenging their pre-existing beliefs, in contrast to _echo chambers_, which emerge when individuals receive extensive exposure to information sources that align with their pre-existing beliefs. Epistemic bubbles result from combined personal choice and algorithmic curation, particularly when online platforms tailor content to individual preferences, thereby restricting the information individuals encounter. In an echo chamber, people reinforce their views and are shielded from diverse perspectives and alternative information, leading to the exclusion of dissenting opinions. Nguyen notes that epistemic bubbles are easy to burst with the presentation of contrary evidence, while echo chambers are self-reinforcing, with social interaction actively fostering distrust of outside sources. Nguyen's analysis of echo chambers invites a formal reconstruction in a "game of like" model with imperfect information, bringing it closer to the methodological frameworks of behavioral science (_modulo_ a game-to-decision translation). ## 5 Concluding Remarks We argued in Section 1 that, while osn-sharing is a strategic interaction, behavioral science models overlook the contribution of strategic anticipations. We extrapolated from behavioral science decision-theoretic models a basic game model of osn-sharing (Section 2) and explored some limit cases to determine refinements necessary to capture real-world osn-sharing (Section 3). A connection with Keynes' Beauty contest (and, more generally, guessing games) allowed us to sketch a strategic analysis of content amplification in the presence of influencers and users leveraging influence and suggested a direction for the model's development (Section 4). Still, a "game of like" model may not contribute to conceptual analysis beyond a formal reconstruction of Nguyen's framework. And Nguyen's informal analysis has already done the heavy lifting of rigorously ordering concepts inherited from unsystematic public discourse, such as "echo chambers" and "filter bubbles" (introduced, resp., in [23] and [16]), whose previously heterogeneous use had prevented consensus among researchers (see [21]). Rather, the litmus test for a "game of like" model would be a contribution to the critical re-evaluation of empirical data assessed from a decision-theoretic standpoint; and a suggestion of empirical investigations that a decision-theoretic standpoint would have neglected. To conclude, we want to suggest that, as incomplete as it is, our "game of like" model already achieves that. As for critical re-evaluation, consider the widely-publicized study by Pennycook _et al._[17], in which the intervention condition proceeds from the auxiliary hypothesis that accuracy competes for attention with social incentives.6 From a strategic standpoint, the authors' other auxiliary hypothesis--that "people do care more about accuracy than other content dimensions" (p. 591)--could characterize common knowledge of one dimension of users' preferences. If it does, having "the concept of accuracy more [...] salient in [one's] mind" (_ibid_) could _prime_ engagement-based expectations, rather than shutting them down; in a game-to-decision translation, a Bayesian decision-maker would then anticipate a better prospect of eliciting other users' reactions conditional on being perceived as accurate (compared to conditional on being perceived as inaccurate). Compare this with the intervention condition from the more recent study by Ren _et al._[19], which socially incentivized both accuracy and engagement.7 As for the design of new studies, consider the question of whether differences in intervention conditions between [17] and [19] translate into differences in reasoning about other users' strategies is an interesting question. A positive answer would partition "accuracy nudges" into two classes (engagement-based and non-engagement-based). A negative answer would invalidate the auxiliary hypothesis that accuracy competes with the social dimension. The connection we drew with Nagel's work on guessing games suggest an empirical approach to answering this question, with following the methodology of [4], which established neural correlates of lower- and higher-order "Keynes degree" reasoning in guessing games. Footnote 7: “In each incentive condition, we told participants that they would be entered into a lottery for a $50 prize, and we manipulated how they would earn tickets to increase their odds of winning the prize. In the _Accuracy_ condition, we told participants that they would earn a ticket if the post they shared was validated to be true by a professional fact-checker. In the _Like_ condition, we told participants that they would earn a ticket for each “like” they received from others. In the _Comment_ condition, we told participants that they would earn a ticket for each comment they received from others. In the _Control_ condition, we did not provide additional incentives. [19, 104421:4]
2305.04262
Magnetic field-induced weak-to-strong-link transformation in patterned superconducting films
Ubiquitous in most superconducting materials and a common result of nanofabrication processes, weak-links are known for their limiting effects on the transport of electric currents. Still, they are at the root of key features of superconducting technology. By performing quantitative magneto-optical imaging experiments and thermomagnetic model simulations, we correlate the existence of local maxima in the magnetization loops of FIB-patterned Nb films to a magnetic field-induced weak-to-strong-link transformation increasing their critical current. This phenomenon arises from the nanoscale interaction between quantized magnetic flux lines and FIB-induced modifications of the device microstructure. Under an ac drive field, this leads to a rectified vortex motion along the weak-link. The reported tunable effect can be exploited in the development of new superconducting electronic devices, such as flux pumps and valves, to attenuate or amplify the supercurrent through a circuit element, and as a strategy to enhance the critical current in weak-link-bearing devices.
D. A. D. Chaves, M. I. Valerio-Cuadros, L. Jiang, E. A. Abbey, F. Colauto, A. A. M. Oliveira, A. M. H. Andrade, L. B. L. G. Pinheiro, T. H. Johansen, C. Xue, Y. -H. Zhou, A. V. Silhanek, W. A. Ortiz, M. Motta
2023-05-07T12:49:19Z
http://arxiv.org/abs/2305.04262v3
# Magnetic field-induced weak-to-strong-link crossover in patterned superconducting films ###### Abstract Ubiquitous in most superconducting materials, weak-links are well-known for their limiting effect on the critical current. Still, they are at the root of important features of superconductivity. Using quantitative magneto-optical imaging and the thermomagnetic model, we correlate local maxima in magnetization loops of patterned Nb films to a magnetic field-induced weak-to-strong-link crossover. This tunable phenomenon can be exploited in superconducting electronics, e.g., flux valves and injectors, or as a simple strategy to adjust the critical current in specialized devices. Nanoscale patterning of superconducting thin films enables the optimization and control of distinct material properties [1; 2; 3; 4; 5; 6; 7; 8; 9]. Moreover, patterning allows exploring different phenomena rising from the rich physics of superconducting weak-links (WLs) [10; 11; 12; 13]. Today, patterned superconducting structures find applications in memories [14; 15; 16; 17], diodes [18; 19], transistors [20; 21; 22], quantum batteries [23], and phase shifters [24; 25], serving as the backbone of superconducting electronics. Meanwhile, focused-ion beam (FIB) milling is a prominent nanofabrication technique with maximum spatial resolution in the nanometer range [26; 27; 28]. However, FIB unavoidably introduces defects along the patterned regions, arising from ionic implantation [29; 30; 31], and locally increasing pinning in superconducting films [3; 8; 32]. Recently, we investigated the effects of a single FIB-milled WL on the properties of prototypical low-critical temperature Nb films, revealing the existence of unexpected local maxima in their magnetization loops [33]. Weak-links are also an important challenge in the optimization of large-scale applications of high-temperature superconductors (HTSC) [34; 35]. Particularly, the overall critical current density (\(J_{c}\)) of HTSCs is affected by an angular-dependent deterioration of \(J_{c}\) along grain boundaries (GB) [36; 37]. This obstacle led to the development of specialized fabrication techniques to minimize the GB impact on \(J_{c}\), such as IBAD and RABiTS [38; 39; 40; 41]. Nevertheless, the improvement of transport carrying capacities, both in HTSCs and low-critical temperature materials, remains an active research topic [42; 43; 44; 45; 46; 47]. In this study, we experimentally and numerically investigate the magnetic flux penetration and shielding currents in FIB-patterned Nb films containing a single macroscopic WL. Our results shine light on the origin of unexpected local maxima in the magnetization loops, revealing a field-induced crossover from a weak-link to a strong-link behavior by the enhancement of transport capacities across the WL. Three 180-nm-thick Nb films were patterned with a single shallow groove across their center using FIB doses of 0.1, 0.2, and 0.3 nC/\(\upmu\)m\({}^{2}\). The grooved films are labeled GF01, GF02, and GF03, accordingly. GF01 has a surface area of 3 \(\times\) 1 mm\({}^{2}\), the others 2.5 \(\times\) 0.8 mm\({}^{2}\). This process creates WLs comprised of thinner Nb regions pervaded by defects arising from Ga\({}^{+}\) implantation, as shown in the Supplemental Material (SM) [48]. The groove depths range from 4.2 nm to 14.6 nm. A pristine 180-nm Nb film with critical temperature (\(T_{c}\)) of 9.0 K is also studied. Details of the sample preparation are given in Ref. [33]. The global DC magnetic response of the samples is obtained in a SQUID-based Quantum Design MPMS-5S system in the perpendicular geometry. Quantitative magneto-optical imaging (MOI) is used to reveal the local out-of-plane magnetic flux density (\(B\)) distribution in the films [49]. The local shielding current density (\(\mathbf{J}\)) is obtained as described in Ref. [50]. The left panel of Fig. 1(a) schematically represents a superconducting film fully penetrated by magnetic flux under a perpendicular applied magnetic field \(H\) after zero-field cooling (ZFC) [51, 52]. We observe five dark discontinuity lines, or d-lines, where \(\mathbf{J}\) changes its direction, shielding flux more efficiently. They define two trapezoidal and two triangular current domains in the sample. The magneto-optical (MO) image at 11.0 Oe demonstrates that flux penetrates uniformly from the edges toward the center of the pristine film as \(H\) is increased. This is manifested by bright, flux-filled regions surrounding the dark shielded central portion of the sample. At 62.0 Oe, we observe the four expected domains for the fully penetrated film. In contrast, a groove across the shortest symmetry line of the rectangular film creates a WL and defines two apparently disconnected pristine regions (PR). Since the WL has a lower \(J_{c}\) than the PR, \(\mathbf{J}\) needs to bend away from the groove, resulting in the central diamond-shaped domain represented in Fig. 1(b) [53, 54, 55]. In the limiting case when the depicted angle \(\alpha=45^{\circ}\), no current is able to flow through the WL, i.e., \(J_{c}^{\mathrm{WL}}=0\). MOI for GF01 at 13.9 Oe reveals that, although the flux front has reached only a small portion of the PR, the WL is fully penetrated--a consequence of its weaker magnetic shielding capacity. At 145.4 Oe, the triangular and diamond-shaped domains can be clearly distinguished. Contrary to the schematic representation, MOI for GF01 reveals \(\alpha>45^{\circ}\) and a faded dark vertical d-line inside the diamond-shaped domain. These effects happen because \(J_{c}^{\mathrm{WL}}>0\), meaning that a fraction of \(\mathbf{J}\) is able to flow through the WL, as we will discuss ahead. The dark scratches above the diamond-shaped and bottom triangular domains are defects on the MO indicator and do not interfere with the flux penetration into the sample or the presented data analysis. The inset of Fig. 1(c) shows complete \(m(H)\) hysteresis loops at 7 K for the pristine film and GF02, where \(m\) is the magnetic moment. For the pristine film, the observed behavior matches that expected for a type-II superconductor presenting a flux-dependent critical current density, \(J_{c}(B)\)[56, 57]. In contrast, the nanostructured sample GF02 exhibits unexpected local maxima in the positive and negative decreasing-field branches of the \(m(H)\) loop. As shown in the main panel of Fig. 1(c), the same feature is observed around 50 Oe for GF01 and GF03. Notably, \(J_{c}\) is proportional to the height of the \(m(H)\) curve [58]. We elect sample GF01 to illustrate our investigation on what leads to this phenomenon. GF01 contains a 4.2-nm-deep groove with a 800 nm width at half depth--see SM. Figure 1(d) shows a series of MO images of GF01 at 7 K as \(H\) is decreased from a state of full penetration. For 140.8 Oe, we observe flux penetration similar to that of Fig 1(b). The WL now appears in dark contrast, because \(H\) was reduced from 315.6 Oe, and the WL shows a lower, but positive flux density due to its weaker shielding capability--see SM. However, as \(H\) approaches 50 Oe, the images reveal that the d-lines forming the central diamond-shaped domain move toward the groove--as if they were closing. The image at 50.2 Oe shows great resemblance to the case of a pristine film, as the diamond-shape practically vanishes. If \(H\) is reduced beyond the local maximum of \(m(H)\), the d-lines now move away from the groove, as if they were reopening, reestablishing the diamond-shaped domain, as seen at 25.6 Oe. This analysis demonstrates that MOI is an ideal tool for our study, as it reveals details of the local flux penetration and current distribution that are hidden in global magnetic measurements, such as DC magnetization. The observed d-line movement is associated to an enhancement of the transport properties in the WL. This is demonstrated by the MO results in Fig. 2(a). As the d-lines close, \(\alpha\) increases and \(\mathbf{J}\) changes accordingly--streamlines that initially did not cross the WL be Figure 1: Representation and MOI at 7 K of fully flux-penetrated (a) pristine and (b) grooved GF01 films under a perpendicular \(H\) after ZFC. Red arrows represent the direction of \(\mathbf{J}\), \(\alpha\) is the angle between the d-line and the edge. (c) The positive field-decreasing branch of \(m(H)\) for GF01 and GF03. Inset shows \(m(H)\) for pristine and GF02 samples. Red arrows indicate the local maxima in \(m(H)\). (d) MOI of GF01 at 7 K for values of \(H\) indicated by vertical arrows in the main panel in (c). Colored rectangles represent areas used in the quantitative study in Fig. 2(b-c). come straighter and have a higher density across the WL. This trend is maximized at 48.0 Oe, when the diamond-shaped domain almost fades out. This result shows that, after the sample is fully penetrated and \(H\) is reduced to a specific value, the WL behaves as a strong-link, as its transport properties are greatly enhanced and current may flow through it largely unaffected. At 46.0 Oe and as \(H\) is further reduced, the d-lines reappner, reestablishing the WL behavior. This notion explains the local peak in \(m(H)\) as an increase in the overall \(J_{c}\) due to an increase in \(J_{c}^{\text{WL}}\) as the sample approaches the pristine behavior. In Fig. 2(b), we evaluate the magnetic moment of GF01 via MOI as \(m=\sum_{y}m_{l}(y)l_{px}=l_{px}\sum_{y}2\int_{0}^{w/2}xj_{y}(x)dx\), where \(m_{l}(y)\) is \(m\) per unit length at a position \(y\), \(l_{px}\) is the image pixel size, \(w\) is the film width, \(j_{y}=J_{y}d\) is the sheet current flowing perpendicular to the WL, and \(d\) is the film thickness [59; 60]. The results bare great resemblance to those in Fig. 1(c). Figure 2 also presents quantitative results obtained along the four color-matched regions in Fig. 1(d). In Fig. 2(c), we average \(m_{l}\) over the WL (blue) and on a selected portion of the PR (red), confirming that the peak in \(m(H)\) manifests an effect tied to the WL--as a peak in \(m_{l}\) appears for the WL, but not for the PR. Moreover, we are able to gauge the flux behavior just outside the sample, \(\Phi_{\text{out}}\), by the numerical integration of \(B\). Figure 2(d) shows that, as \(H\) is reduced, \(\Phi_{\text{out}}\) is everywhere higher just besides the WL (pink) than besides the PR (yellow) for equal areas. However, as the d-lines are closing, \(\Phi_{\text{out}}\) decreases close to the WL while it is unaffected besides the PR, lessening the difference between \(\Phi_{\text{out}}\) along the two regions (green points). This indicates that less flux is now able to escape the WL. When the d-lines reopen below 50 Oe, \(\Phi_{\text{out}}\) is significantly enhanced besides the groove, indicating a large amount of flux is expelled through the edges of the WL. Figure 3 locally resolves the behavior of \(B\) and \(J\) for GF01. Figure 3(a) depicts the differential flux density, \(B_{\text{diff}}\). It captures the \(B\) variation due to the variation of \(H\) by subtracting the target MO image \(n\) by the one taken at the previous field step, i.e. \(B_{\text{diff}}~{}=~{}B_{n}-B_{n-1}\). The field step is kept around \(-2\) Oe in the \(H\) range depicted in Fig. 3. In the first three \(B_{\text{diff}}\) images, we visualize the closing of the d-lines by the shrinking of a brighter diamond-shaped inner region. This indicates that the \(B\) decrease in that region is lower than that outside the d-lines. Therefore, flux is pushed toward the WL as the d-lines close. This agrees with the fact that flux is unable to cross d-lines, as these are regions of high shielding capacity [61]. At 46.0 Oe, a dark-colored region anchored in the central part of the groove appears in \(B_{\text{diff}}\), revealing flux is intensely expelled from the WL, as also seen in Fig. 2(d). At 43.8 Oe, flux is still being strongly pushed away from the inner part of the sample and is directed toward the edges and the PR--as confirmed by the bright halo centered in the groove, signaling a lower variation in \(B_{\text{diff}}\) although \(H\) decreases uniformly [62]. This demonstrates that the reappearance of the d-lines begins from the central part of the film, not from the edges. For 39.9 Oe and further, the flux pushed out of the WL is systematically reduced. Figure 3(b) shows the evolution of \(B\) along the WL (in black) and the PR (red). From 92.6 to 48.0 Oe, the black profile decreases less and becomes increasingly similar to the red one, indicating that the flux pushed inward by the d-line movement is partially retained by the WL. This is related to an increased pinning in the WL due to its reduced thickness and FIB-induced defects, which act as pinning centers that also locally suppress \(T_{c}\)[33]--see SM. The relative increase in \(B\) corroborates the increase in \(M\) across the WL seen in Fig. 2(b). Starting at 46.0 Oe, trapped flux is vigorously pushed away from the WL and the \(B\) profiles become distinct again. Interestingly, at 46.0 Oe \(B\) is positive along the WL. These observations suggest that the maximum in \(m(H)\) is not related to a net neutral flux in the WL. Here, we argue that the increased flux pinning in the WL reduces energy dissipation coming from flux line movement, increasing \(J_{c}^{\text{WL}}\) and, therefore, acting to equalize \(J_{c}(B)\) for the PR and WL for a specific \(H\) value. A similar behavior exists in HTSCs [63; 64; 65; 66; 67; 68; 60]. The fact that the associated d-line movement was not observed in previously MOI-studied YBCO bicrystalline thin films [68; 69; 68; 63; 69; 70] may be related to the absence of the FIB-added pinning centers and the relatively higher \(H\) for which the peak in \(m(H)\) occurs in HTSCs, rendering \(J_{c}(B)\)-dependent effects less prominent [57]. Figure 2: (a) MOI of GF01 at 7 K as \(H\) is reduced from full penetration. The \(\mathbf{J}\) distribution is shown as red streamlines. White dashed lines define the angle \(\alpha\) shown in the panels. (b) \(m(H)\) for GF01 evaluated by MOI. (c) \(m_{l}(H)\) averaged over different regions of the sample. (d) Upper panel shows \(\Phi_{\text{out}}\) besides the WL and the PR. Lower panel highlights the difference in \(\Phi_{\text{out}}\) between the two regions. The presented quantities are obtained for the color-matched regions shown in Fig. 1(d). Figure 3(c) presents an analysis of the current density flowing perpendicularly to the WL, \(J_{y}\). Initially, there is a gradual increase in the current able to cross the WL, highlighted by an enhancement of the black curve magnitude. At 48.0 Oe, \(J_{y}\) across the WL bares a remarkable resemblance to that flowing in the PR--as the sample behaves almost as a pristine film due to a weak-to-strong-link behavior crossover. The trend is reversed when the d-lines reappear and a sharp decrease in \(J_{y}\) is observed beginning in the center of the WL. This occurs due to pinning centers created by the FIB. The \(B\) profile along the WL at 48.0 Oe displays the highest flux density at the center of the film. Therefore, as the density of vortices increases, the electromagnetic repulsion between them eventually overcomes the pinning force acting to keep the flux contained in the WL, thus expelling the flux, inducing a strong-link to weak-link crossover. The SM depicts the behavior of \(B\) and \(J_{y}\) for a large number of images at different applied fields, corroborating their relationship to the d-line movement. With the experimental input, we now turn to numerical simulations based on the thermomagnetic model (TM), which allow for the comprehension of subtle distinctions in specimen behavior arising from a \(B\)-dependent \(J_{c}\)[71; 72; 73]--see SM for further information. The simulated Nb sample shares the geometry of GF01, except for the groove width, set to 45 um to improve computational performance. Notably, the higher density of pinning sites introduced by Ga implantation in the WL is not included in the simulations. A reduction of \(\sim\)15% in the zero-field \(J_{c}^{\rm WL}\) is induced by reducing \(T_{c}\) in the WL, as experimentally observed--see SM and Ref. [33]. Figure 4 describes the TM results obtained when considering \(j_{c}=j_{c}(T,B)=j_{c0}(1-T/T_{c})(B_{0}/(B+B_{0}))\), with \(B_{0}=\mu_{0}j_{c0}/\pi\)[74]. Disorder is introduced by lowering \(j_{c0}\) for randomly selected simulation grid points [75]. First, the magnetic field is increased up to 250 Oe after ZFC to 7 K. The resulting flux distribution in Fig. 4(a) qualitatively matches that of GF01 in the full penetration state. Then, \(H\) is progressively reduced. At 50 Oe, we visually notice the disappearance of the diamond-shaped domain, which is maximum at 20 Oe. This is Figure 3: Magnetic flux and shielding currents for GF01 at 7 K at different \(H\). (a) \(B_{\rm dif}\) distribution around the region of d-line movement. Thin vertical lines represent the edges of the film. (b) \(B\) and (c) \(J_{y}\) profiles along the WL and PR, evaluated respectively at the black and red dashed lines in panel (a). Figure 4: (a) \(B\) distribution at 7 K captured by TM simulations as \(H\) is reduced from 250 Oe. Simulation parameters follow Ref. [73]. (b) \(\Delta\alpha(H)\) for the simulated sample. Red arrows indicate the direction of field variation. Inset: experimental \(\Delta\alpha(H)\) for GF01. Error lies within symbol size. quantitatively corroborated by Fig. 4(b), which depicts \(\Delta\alpha(H)=\alpha(H=250\) Oe\()-\alpha(H)\). The inset of Fig. 4(b) confirms that the TM simulations reproduce the d-line movement observed experimentally for GF01. The lesser closure of the d-lines is likely due to the larger width of the simulated groove in comparison to GF01. Finally, the d-lines reappear, as demonstrated in Fig. 4(a) at \(-50\) Oe. Additional simulations, conducted for \(J_{c}\) independent of \(B\), i.e., \(J_{c}=J_{c0}(1-T/T_{c})\), did not reproduce the d-line movement--reinforcing the \(J_{c}(B)\)-dependency role in the observed phenomenon. In summary, combining quantitative magneto-optical imaging and thermomagnetic model simulations, we demonstrate that the presence of local maxima in the magnetization loop of FIB-patterned Nb films is associated with a field-induced weak-link to strong-link to weak-link crossover. Such an effect may be explored in the development of novel sensing applications, as well as for different implementations in superconducting electronics. For instance, the tunable flux escape observed just outside the groove may be employed in a flux valve or injector. Mastering this phenomenon may also lead to the improvement of operating conditions for different superconducting technologies, resulting in effectively higher critical currents for films in which the presence of weak-links is essential or unavoidable. We additionally envision that, if the proper \(J_{c}(B)\)-dependency, pinning center distribution, and field conditions are met, the reported mechanism could play an important role in HTSCs--solidifying the understanding of a post-fabrication route for \(J_{c}\) optimization in these materials. This work was partially supported by Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001, the Sao Paulo Research Foundation (FAPESP, Grant No. 2021/08781-8), and the National Council for Scientific and Technological Development (CNPq, Grants No. 431974/2018-7, No. 316602/2021-3, and No. 309928/2018-4). C. X. acknowledges the support by the National Natural Science Foundation of China (Grants No. 11972298). L. J. was supported by the China Scholarship Council. The authors thank the Laboratory of Structural Characterization (LCE/DEMa/UFSCar) for the EDS and AFM measurements reported in the SM.
2301.10938
Compact Transformer Tracker with Correlative Masked Modeling
Transformer framework has been showing superior performances in visual object tracking for its great strength in information aggregation across the template and search image with the well-known attention mechanism. Most recent advances focus on exploring attention mechanism variants for better information aggregation. We find these schemes are equivalent to or even just a subset of the basic self-attention mechanism. In this paper, we prove that the vanilla self-attention structure is sufficient for information aggregation, and structural adaption is unnecessary. The key is not the attention structure, but how to extract the discriminative feature for tracking and enhance the communication between the target and search image. Based on this finding, we adopt the basic vision transformer (ViT) architecture as our main tracker and concatenate the template and search image for feature embedding. To guide the encoder to capture the invariant feature for tracking, we attach a lightweight correlative masked decoder which reconstructs the original template and search image from the corresponding masked tokens. The correlative masked decoder serves as a plugin for the compact transform tracker and is skipped in inference. Our compact tracker uses the most simple structure which only consists of a ViT backbone and a box head, and can run at 40 fps. Extensive experiments show the proposed compact transform tracker outperforms existing approaches, including advanced attention variants, and demonstrates the sufficiency of self-attention in tracking tasks. Our method achieves state-of-the-art performance on five challenging datasets, along with the VOT2020, UAV123, LaSOT, TrackingNet, and GOT-10k benchmarks. Our project is available at https://github.com/HUSTDML/CTTrack.
Zikai Song, Run Luo, Junqing Yu, Yi-Ping Phoebe Chen, Wei Yang
2023-01-26T04:58:08Z
http://arxiv.org/abs/2301.10938v1
# Compact Transformer Tracker with Correlative Masked Modeling ###### Abstract Transformer framework has been showing superior performances in visual object tracking for its great strength in information aggregation across the template and search image with the well-known attention mechanism. Most recent advances focus on exploring attention mechanism variants for better information aggregation. We find these schemes are equivalent to or even just a subset of the basic self-attention mechanism. In this paper, we prove that the vanilla self-attention structure is sufficient for information aggregation, and structural adaption is unnecessary. The key is not the attention structure, but how to extract the discriminative feature for tracking and enhance the communication between the target and search image. Based on this finding, we adopt the basic vision transformer (ViT) architecture as our main tracker and concatenate the template and search image for feature embedding. To guide the encoder to capture the invariant feature for tracking, we attach a lightweight correlative masked decoder which reconstructs the original template and search image from the corresponding masked tokens. The correlative masked decoder serves as a plugin for the compact transform tracker and is skipped in inference. Our compact tracker uses the most simple structure which only consists of a ViT backbone and a box head, and can run at 40 _fps_. Extensive experiments show the proposed compact transform tracker outperforms existing approaches, including advanced attention variants, and demonstrates the sufficiency of self-attention in tracking tasks. Our method achieves state-of-the-art performance on five challenging datasets, along with the VOT2020, UAV123, LaSOT, TrackingNet, and GOT-10k benchmarks. Our project is available at [https://github.com/HUSTDM/CTTrack](https://github.com/HUSTDM/CTTrack). ## 1 Introduction Visual Object Tracking is one of the fundamental tasks in computer vision with applications ranging from human-computer interaction, surveillance, traffic flow monitoring and etc. It aims to estimate the location, denoted as a bounding box, of an arbitrary target object throughout the subsequent video sequence. Deep Learning based trackers have achieved great success due to their strong representation ability. Trackers Bertinetto et al. (2016); Nam and Han (2016); Li et al. (2018, 2019) derived from Convolutional Neural Networks (CNN) Krizhevsky et al. (2012); Simonyan and Zisserman (2015); He et al. (2016) produce tracking accuracy that beyond the comparison of traditional approaches, especially the trackers built on Siamese network Bertinetto et al. (2016); Xu et al. (2020); Li et al. (2018, 2019); Voigtlaender et al. (2020); Yu et al. (2020); Guo et al. (2021). The key of Siamese network trackers is to produce the cross-correlation and measure the similarity between the target template and search image. Nowadays, transformer-based trackers Chen et al. (2021); Wang et al. (2021); Yan et al. (2021); Shen et al. (2022); Song et al. (2022); Cui et al. (2022) have shown great strength by introducing the attention mechanism Vaswani et al. (2017) to enhance and fuse the features of querying sample and tracked objects. Prevalent transformer trackers Chen et al. (2021); Yan et al. (2021) Figure 1: Our compact transformer tracker adopts the simple ViT structure (encoder) with the concatenation of the template and search image as input, which essentially exploits the standard self-attention mechanism for information aggregation. The encoded tokens pass through a box head to estimate the result bounding box. And we develop a correlative masked decoder reconstructing the original template and search pixels to enhance the information aggregation, which is skipped during inference. Cui et al. 2022) more or less adapt the attention for aggregating information across the template and search image. We find that the advanced variants of attention mechanism in recent research, including mix-attention (Cui et al. 2022) and cross-attention (Yu et al. 2020; Chen et al. 2021), are equivalent or even just a subset of the packed self-attention (i.e., standard self-attention with the concatenation of the template and search image as input). Then the question is which parts of the self-attention mechanism play an important role in visual object tracking? We revisited the transformer tracking framework and find that the tracking results are generated from tokens corresponding to the search image (search tokens), while the tokens corresponding to the template (template tokens) are always discarded in the last. The representational ability of search tokens comes from two parts: the cross-information enhancement from the template tokens and the self-information enhancement from the search tokens themselves. In this paper, we prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation, though cross-information aggregation is indispensable in visual object tracking but not greatly beneficial. Driven by this analysis, we propose a compact transformer tracker combined with correlative masked modeling for the cross-information aggregation and self-information reinforcement. As shown in Figure 1, our tracker adopts the basic vision transformer as the main branch and applies a lightweight masked decoder to enhance the implicit representation capability of the packed self-attention. The correlative masked decoder, which is inspired by Masked Image Modeling (He et al. 2022; Xie et al. 2022), reconstructs the both original template and search pixels from the corresponding masked tokens, to guide the encoder to capture the invariant feature for tracking. In addition, our decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. Applying our correlative masked modeling strategy to the compact transformer tracker can improve the AUC from 64.0% to 65.8% on the LaSOT (Fan et al. 2019) dataset. Extensive comparison experiments on 5 challenging datasets including VOT2020 (Kristan et al. 2020), UAV123 (Mueller, Smith, and Ghanem 2016), LaSOT, GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018) exhibits the state-of-the-art performance, which further evidence the correctness of our analysis regarding the self-attention in visual tracking. To summarize, our main contributions include: 1. [leftmargin=*,noitemsep,nolistsep] 2. We present a unified analyzing method for the attention mechanism and find that the advanced variants of the attention mechanism are equivalent or even just a subset of the self-attention. We also prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation. 3. We develop a compact transformer tracker with a correlative masked decoder, which has a very simple structure and achieves state-of-the-art accuracy at a high Frames-Per-Seconds (_fps_) tracking speed. The decoder reconstructs the original template and search image from the corresponding masked tokens and serves as a training plugin for the tracker. The experiment demonstrates that our analysis regarding self-attention is correct. ## 2 Related Work **Traditional trackers.** Traditional single object tracking algorithms can be roughly summarized as Correlation Filter based trackers (CF), Deep Network based trackers (DLN). CF-based trackers(Bolme et al. 2010; Henriques et al. 2015; Danelljan et al. 2016, 2017, 2019; Bhat et al. 2019) exploit the convolution theorem and learn a filter in the Fourier domain that maps known target images to the desired output. DLN-based trackers refer to algorithms employing deep neural networks for the tracking process. Earlier approaches (Nam and Han 2016; Pu et al. 2018) treat the tracking task as a classification problem and exploit deep features for locating the target. Shortly afterwards more trackers adopt the Siamese network (Bertinetto et al. 2016; Li et al. 2018, 2019) for its effectiveness in measuring similarity. The Siamese network consists of two branches, one operates on the template and the other for the search area. Above all, these methods mainly consist of a backbone which extracts the features of search image and template separately, a similarity measuring module, and heads to predict the location and bounding box. Compared to our framework, traditional trackers have too many modules and a very complex design, we simply adapt a ViT backbone with a box head to get better tracking results. **Transformer trackers.** The ViT (Dosovitskiy et al. 2021) first introduces the transformer to image recognition tasks and presents an impressive performance. Ever since, transformer has been widely applied in image classification(Dosovitskiy et al. 2021; Wu et al. 2021; Liu et al. 2021), object detection(Carion et al. 2020; Li et al. 2022), visual object tracking(Yan et al. 2021; Chen et al. 2021; Wang et al. 2021; Song et al. 2022; Shen et al. 2022; Cui et al. 2022) and etc. Transformer-based tracking methods have become the mainstream tracking algorithms nowadays. TransT (Chen et al. 2021) proposes a feature fusion network and employs an attention mechanism to combine the features of the template and search region. STARK (Yan et al. 2021) develops a spatial-temporal architecture based on the encoder-decoder transformer. CSWinTT (Song et al. 2022) proposes a transformer architecture with multi-scale cyclic shifting window attention for visual tracking, elevating the attention from pixel level to window level. MixFormer (Cui et al. 2022) constructs a compact tracking framework and designs a mixed attention module that unifies the process of feature extraction and information matching module. Instead of designing a complex attention mechanism as in the previous tracking approaches, we compare the essential differences of attention variants(such as mix-attention and cross-attention) and find these attention variants are equivalent or even just a subset of the packed self-attention. To verify the capability of self-attention in information aggregation, we design a compact transformer tracker using the most simple pipeline which only consists of a ViT backbone and a box head, without any extra design including separate modules of feature extraction and aggregation, and multi-layer feature aggregation. **Masked image modeling (MIM).** MIM masks an area of the original images and predicts the missing pixels, which aims to enhance the representation of models. Recently, MIM approaches(Chen et al., 2020; He et al., 2022; Xie et al., 2022; Wei et al., 2022; Bao, Dong, and Wei, 2021)) are extended to the modern vision transformers (Dosovitskiy et al., 2021; Liu et al., 2021). iGPT (Chen et al., 2020) first proposes a transformer to predict unknown pixels from a sequence of low-resolution pixels. BEiT (Bao, Dong, and Wei, 2021) tokenizes the images via an additional dVAE (Ramesh et al., 2021) network with a block-wise masking strategy. SimMIM (Xie et al., 2022) find that a moderately large masked patch size of the input image for pixel predictions makes a strong pre-text task. MAE (He et al., 2022) develops an asymmetric encoder-decoder architecture, the encoder operates on a small proportion of the visible patches, and the decoder reconstructs the original pixels. MaskFeat (Wei et al., 2022) reconstructs the feature descriptors such as HoG (Dalal and Triggs, 2005) instead of pixels. Our approach is inspired by the previous MIM method (Xie et al., 2022; He et al., 2022), but we have to deal with two fundamental problems in the tracking framework: (1) Visual tracking is a downstream vision task that generally does not have the pre-train process to apply the MIM strategy. We develop a masked decoder to leverage the search and the template tokens to predict the original images, which is embedded as an attachment plugin in the training phase to implement an end-to-end model. (2) MIM methods reconstructing the single image do not fit the tracking framework which involves cross-aggregation of multiple images. According to the properties of packed self-attention, we design a self-decoder and a cross-decoder to reconstruct the original template and search image from the corresponding masked tokens. As far as we know, we are the first to artfully introduce the MIM into the visual tracking field to improve the information aggregation capabilities. ## 3 Approach In this section, we introduce our compact transformer tracker with correlative masked modeling in detail. Before proceed, we first present a analysis on the key component of transformer tracker, and demonstrate that existing attention variants are equivalent to the packed self-attention. ### Revisiting Transformer Tracker **Transformer tracking framework.** As described in ViT(Vaswani et al., 2017), the query-key-value attention mechanism is applied with query \(\mathbf{Q}\), key \(\mathbf{K}\), and value \(\mathbf{V}\). The linear weights of \(\mathbf{Q}\), \(\mathbf{K}\), \(\mathbf{V}\) are \(\mathbf{W}_{Q}\), \(\mathbf{W}_{K}\), \(\mathbf{W}_{V}\) respectively. The attention (Attn) is computed as: \[\text{Attn(X)}=\text{softmax}(\frac{\mathbf{X}\mathbf{W}_{Q}\cdot\mathbf{W}_{K }^{T}\mathbf{X}^{T}}{\sqrt{d_{k}}})\cdot\mathbf{X}\mathbf{W}_{V} \tag{1}\] where the X is the input token and the \(d_{k}\) is the dimension of the key. For a clearer description of the post-order steps, we apply an attention calculation with the inputs of two different tokens, the token \(\mathbf{X}_{Q}\) computed with query and the token \(\mathbf{X}_{K}V\) computed with key and value. We modify the attention formula and define the attention map (AMap) as: \[\text{Attn}(\mathbf{X}_{Q},\mathbf{X}_{KV}) =\text{AMap}(\mathbf{X}_{Q},\mathbf{X}_{KV})\cdot\mathbf{X}_{KV} \mathbf{W}_{V} \tag{2}\] \[\text{AMap}(\mathbf{X}_{Q},\mathbf{X}_{KV}) =\text{softmax}(\frac{\mathbf{X}_{Q}\mathbf{W}_{Q}\cdot\mathbf{W} _{K}^{T}\mathbf{X}_{KV}^{T}}{\sqrt{d}})\] Our compact transformer tracker consists of two parts: a transformer backbone for information aggregation and a box head for the bounding box estimation. Give the template \(z\) in the initial frame and a search image \(s\). We obtain the tokens \(X_{t}\in\mathbb{R}^{L_{x}\times d}\) and \(X_{s}\in\mathbb{R}^{L_{x}\times d}\) respectively through patch embedding, where \(d\) represents the number of channels. The **packed self-attention (PSelf-Attn)** in the tracking field is defined as the self-attention with the input of the concatenation (Cat) of the template and the search image: \[\text{PSelf-Attn}=\text{Attn}\Big{(}Cat(\mathbf{X}_{z},\mathbf{X}_{s}),Cat( \mathbf{X}_{z},\mathbf{X}_{s})\Big{)} \tag{3}\] **Analysis on Attention.** As shown in Figure 2, we divide the computation of attention mechanism, which involves both template and search image, into four information streams: Figure 2: Information streams in the attention mechanism. The four information streams of Q-K-V are corresponding to the four parts in the attention map. Variants of attention can be uniformly explained under this analytical approach. 1. self-information enhancement on template; 2. cross-information aggregation on template; 3. cross-information aggregation on search image; 4. self-information enhancement on search image. These four information streams are also reflected in the four parts of the attention map (In Figure 2, the index of each part in the attention map corresponds to the information stream). Based on this dissection, we can conveniently compare the differences between existing attention, including packed self-attention, mix-attention, and cross-attention. The **PSelf-Attn** and the **mix-attention**[14] are essentially equivalent, the mix-attention is calculated as: \[\text{PSelf-Attn}==\text{Mix-Attn}= \tag{4}\] which is the same as Eqn. 3, and they include all four information streams (the attention map is shown as Figure 2(a)). By the same analysis, the **asymmetric mix-attention (AMix-Attn)** contains three information streams (#1, #3, #4 info stream), which is shown in the Figure 2(b) and is calculated as follows: \[\text{AMix-Attn}= \tag{5}\] \[\text{Cat}\Big{(}\text{AMap}\big{(}\mathbf{X}_{z},\mathbf{X}_{z} \big{)},\text{AMap}\big{(}\mathbf{X}_{s},Cat(\mathbf{X}_{z},\mathbf{X}_{s}) \big{)}\Big{)}\] The **cross-attention** contains two information streams (#2,#3 info stream) for cross information aggregation, which is shown in the Figure 2(c) and is calculated as follows: \[\text{Cross-Attn}=\text{Cat}\Big{(}\text{AMap}\big{(}\mathbf{X}_{z},\mathbf{X} _{s}\big{)},\text{AMap}\big{(}\mathbf{X}_{s},\mathbf{X}_{z}\big{)}\Big{)} \tag{6}\] In order to fully verify the importance of each part of packed attention, it is necessary to evaluate the impact of each information stream individually. The key of visual object tracking is to find the target in the search image, there must be a cross-information aggregation of the search image (#3 info stream). The other information streams can be blocked out to verify their performance. Based on the above idea, we conduct detailed experiments and the result is shown in Table 1. Removing cross-information aggregation of the template (#2 info stream) of self-attention can greatly improve tracking performance (the AUC and Prec of Table 1 #2 are better than that of Table 1 #1), and the cross-information aggregation of the template will introduce a lot of noise in template features, which is not recommended in visual tracking. However, removing self-information enhancement (#3 and #4 info stream) of self-attention severely degrades the tracking performance (the AUC and Prec of Table 1 #3 and #4 are worse than that of Table 1 #1). From the results we can conclude that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation, the cross-information aggregation is indispensable in tracking but not greatly beneficial. ### Correlative Masked Modeling According to the above analysis, the best tracking performance can be achieved by adopting three information streams: self-information on the template(#1 info stream), cross-information on the search image (#3 info stream), and self-information on the search image (#4 info stream). These three information streams can be grouped into two categories: two self-information enhancements and one cross-information aggregation. We designed a correlative masked modeling method to enhance the information aggregation of our tracking framework, as shown in Figure 1. The ViT backbone is an encoder, and the correlative masked decoder reconstructs the original image (the template and search image respectively) from randomly masked tokens to enhance the self-information and reconstructs the template image from search tokens to improve cross-information aggregation. In parallel with the masked decoder, the search image tokens go through a box estimation head as in [15] to generate the result bounding box. **Decoder.** The decoders in our framework consist of a self-decoder and a cross-decoder, these two decoders have the same structure but do not share weights, each one is composed of a series of transformer blocks similar to the MAE, and the last layer of the decoder is a linear projection with output channels equal to the number of pixels in a patch. As shown in Figure 4, the decoder takes masked tokens as input and predicts the original image pixels corresponding to \begin{table} \begin{tabular}{c c|c c c c|c c} \hline \hline \multirow{2}{*}{\# AMap} & \multicolumn{4}{c|}{No. Info Stream} & \multirow{2}{*}{AUC} & \multirow{2}{*}{Prec} \\ & & & & & & & \\ \hline 1 & \(\frac{\bullet\bullet}{\bullet}\) & & ✓ & ✓ & ✓ & ✓ & 61.7 & 64.2 \\ 2 & \(\frac{\bullet\bullet}{\bullet}\) & & ✓ & & ✓ & ✓ & **64.0** & **67.7** \\ 3 & \(\frac{\bullet\bullet}{\bullet}\) & & & ✓ & ✓ & ✓ & 60.6 & 63.7 \\ 4 & \(\frac{\bullet\bullet}{\bullet}\) & & ✓ & ✓ & ✓ & 58.8 & 60.1 \\ 5 & \(\frac{\bullet\bullet}{\bullet}\) & & & ✓ & ✓ & 57.9 & 58.5 \\ \hline \hline \end{tabular} \end{table} Table 1: The effectiveness of information streams in the attention mechanism on the LaSOT dataset. The visualized four parts in the attention map (AMap) correspond to the four information streams at the matched location. Figure 3: Configurations of information stream in attention map of packed self-attention (PSelf-Attn), asymmetric mix-attention(AMix-Attn) and cross-attention (Cross-Attn). the template token and the search image token, where the template tokens are only self-reconstructed to the template image for enhancing the #1 information stream, search tokens are used to crossly reconstruct the template image (for #3 info stream) and self-reconstruct the search image (for #4 info stream). **Masking and Reconstruction.** The encoder embeds the concatenation set of template tokens and search tokens. Then we split the encoded tokens into template tokens and search tokens, crop the search tokens using Precise RoI Pooling[1] to the same size as the template tokens, and sample a subset of them. We randomly sample tokens at a high masking ratio (75%). Our decoder predicts the pixel values for each masked token, and the output of the decoder is reshaped to form a reconstructed image. We use the mean squared error (MSE) between the reconstructed and original images on masked tokens as our loss function. ### Training and Inference Our decoder is only used in the training phase, while does not participate in the inference phase, hence it doesn't affect the tracking speed. During the training phase, our tracker takes a triplet input consisting of one search region and two templates similar to STARK[21]. We randomly sample multiple frames from sequences in the training set, select the first frame and the second frame as templates, and the last frame as the search region. In the target localization training, we train the whole network except the scoring head in an end-to-end manner with the combination of \(L1\) Loss, generalized IoU loss [1], and decoder loss \(L_{dec}\). The full loss function is defined as follows: \[Loss=\lambda_{L1}L_{1}(B_{i},\hat{B}_{i})+\lambda_{g}L_{g}(B_{i},\hat{B}_{i})+ \lambda_{dec}L_{dec} \tag{7}\] where \(\lambda_{L1}=5.0\), \(\lambda_{g}=2.0\) and \(\lambda_{dec}=0.3\) are the weighting factors of three losses, \(\hat{B}_{i}\) is the estimated box of the target and \(B_{i}\) is the ground-truth bounding box. The decoder loss \(L_{dec}\) is defined as: \[L_{dec}=L_{2}(z,z_{p})+L_{2}(s,s_{p})+L_{2}(z,s_{p}) \tag{8}\] where the \(L_{2}\) is the MSE loss, \(z\) and \(s\) represent the original template image and search image, \(z_{p}\) and \(s_{p}\) represent the predicting template image and search image respectively. In the inference phase, we use two templates of the same size as the input. One of which is the initial template and fixed, the other is online updated and always set to the latest tracking result with high confidence. We use a score head to control the updating of the online template. Our score head consists of the multilayer perceptron (MLP) that receives a class-token[1] as input and evaluates the accuracy of current tracking results. ## 4 Experiments ### Implementation Details In order to effectively verify the correctness of our analysis, we design the compact transformer tracker without any other extra attention mechanisms. The only structures remaining are feature extraction and aggregation, and multi-layer feature aggregation. The main tracker only consists of a ViT backbone and a box estimation head, we test both ViT-Base and ViT-Large, and the ViT parameters are initialized with MAE [1] pre-trained model. We refer our Compact Transformer tracker as CTTrack-B (the backbone of ViT-Base) and CTTrack-L (the backbone of ViT-Large) in this section. We adopt CoCo[11], LaSOT[2], GOT-10k[12], and TrackingNet[21] as our training dataset except the GOT-10k benchmark. The training samples are directly sampled from the same sequence and we apply common data augmentation operations including brightness jitter and horizontal flip. The size of the input template is 128\(\times\)128, the search region is \(5^{2}\) times of the target box area and further resized to 320\(\times\)320. The decoder parameters are initialized with Xavier Uniform. The AdamW optimizer [1] is employed with initial learning rate (lr) of 1e-4 with the layer-wise decay 0.75, and the lr decreases according to the cosine function with the final decrease factor of 0.1. We adopt a warm-up lr with the 0.2 warm-up factor on the first 5 epochs. We train our model on 4 Nvidia Tesla V100 GPUs for a total of 500 epochs, each epoch uses \(6\times 10^{4}\) images. The mini-batch size is set to 128 images with each GPU hosting 32 images. Our approach is implemented in Python 3.7 with PyTorch 1.7. ### Ablation Study We ablate our compact transformer tracker on several intriguing properties using the challenging LaSOT dataset and report the Area Under the Curve (AUC) and Precision (Prec) as the validation accuracy. **Backbone Comparison.** Table 2 shows the comparison of the transformer backbones between the ViT-Base and ViT-Large backbone. The CTTrack-B reaches a higher tracking speed while the CTTrack-L exhibits a better performance. Figure 4: The correlative masked decoders consists of a self-decoder and a cross-decoder. The self-decoder reconstructs the two original images, template and search image, from its corresponding masked tokens. The cross-decoder reconstructs the template image from search tokens. **Reconstruction Streams.** Our decoder enforces three types of reconstruction streams as shown in Figure 4. Table 3 exhibits different configurations of reconstruction streams, through varied combinations of search tokens reconstruct search image (s2s), template tokens reconstruct template image (t2t) and search tokens reconstruct template image(s2t). The result is consistent with the conclusion of our previous analysis that self-information enhancement (#5) plays the most important role in transformer tracking, compared to cross-information aggregation(#4). Besides, search image information has more influence than the template information, the s2s (#2) improves performance the most among all streams (#2, #3, #4), from 64.0 to 64.7 in AUC score. After adopting all three reconstruction streams, tracking accuracy improved by an impressive AUC score of 1.8%, which validates the effectiveness of our masked modeling decoders. **Masking ratio.** When we conduct reconstruction streams, we randomly mask the input tokens according to a pre-defined ratio. Table 4 shows the influence of different masking ratios. We mask the encoded template token and search tokens with a random sampling strategy at different masking rates. Similar to the conclusion obtained by the MAE(He et al., 2022), the optimal ratios are relatively high, and the accuracy increases steadily with the masking ratio growing until reaching 75%, which produces the best tracking results. **Online Template Updating.** We evaluate the effect of the online update strategy in our method. The ablation study result is shown in Table 5, #1 represents the performance without template updating. We can see that applying a fixed interval to update the online template (#2) is ineffective as it greatly reduces the quality of template and causes tracking drift. It can be seen in #3, there is a 0.2% improvement in the AUC score after applying the scoring head to evaluate the accuracy of current tracking results. **Visualization of attention maps.** We visualize attention maps in Figure5, our tracker adopting the correlative decoder has a stronger discriminative ability. The baseline transformer without a reconstruction decoder tends to lose the target position, and the distractors in the background get suppressed with the training by the correlative decoder. ### Comparison with the SOTA We compare our compact tracker with the state-of-the-art trackers on UAV123Mueller et al. (2016), LaSOT(Fan et al., 2019), TrackingNet(Muller et al., 2018), GOT-10k(Huang et al., 2019), and VOT2020(Kristan et al., 2020). For a fairer comparison, here we adopt relative position biases in our ViT backbones, this addition improves AUC by around 1 point. **UAV123** gathers an application-specific collection of 123 sequences. It adopts the AUC and Precision (P) as the evaluation metrics. As shown in Table 1, Our CTTrack-L outperforms previous trackers as and exhibits very competitive performance (71.3% AUC) when compared to the previous best-performing tracker CSWinTT (70.5% AUC). \begin{table} \begin{tabular}{c|c c c c} \hline \hline Mask Ratio & 25\% & 50\% & 75\% & 90\% \\ \hline AUC & 64.6 & 65.7 & **65.8** & 64.9 \\ Prec & 69.0 & 70.7 & **70.9** & 69.5 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison on masking ratio. \begin{table} \begin{tabular}{c|c c c} \hline \hline Methods & Params(M) & FLOPs(G) & Speed(_fps_) \\ \hline CTTrack-B & 93.8 & 48.1 & 40 \\ CTTrack-L & 313.9 & 163.7 & 22 \\ \hline \hline \end{tabular} \end{table} Table 2: Model size and speed using different backbones. \begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{\#} & \multicolumn{3}{c|}{Recons Type} & \multirow{2}{*}{AUC} & \multirow{2}{*}{Prec} \\ \cline{2-2} \cline{4-5} & s2s & t2t & s2t & \\ \hline 1 & - & - & - & 64.0 & 67.7 \\ 2 & ✓ & - & - & 64.7 & 69.1 \\ 3 & - & ✓ & - & 64.4 & 68.4 \\ 4 & - & - & ✓ & 64.4 & 68.6 \\ 5 & ✓ & ✓ & - & 65.1 & 69.9 \\ 6 & ✓ & ✓ & ✓ & **65.8** & **70.9** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation Study for the reconstruction streams. **s2s** represents search tokens reconstruct search image, **t2t** denotes template tokens reconstruct template image and **s2t** means search tokens reconstruct template image. Figure 5: Visualization of attention map which compares the difference between training with correlative decoder (w) and training without correlative decoder(w/o). **S-to-S** is self-information enhancement on search image, **T-to-T** is self-information enhancement on template, **S-to-T** is cross-information aggregation on search image. \begin{table} \begin{tabular}{c|c c|c c} \hline \hline & Online & Score & AUC & Prec \\ \hline \multirow{3}{*}{CTTrack-B} & - & - & 65.8 & 70.9 \\ & ✓ & - & 64.9 & 69.9 \\ \cline{1-1} & ✓ & ✓ & 66.0 & 71.1 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation for the online template updating component.**Online** denotes updating the template at a fixed update interval. **Score** represents the online template is only updated with high confident samples. **LaSOT** is a long-term dataset including 1400 sequences and distributed over 14 attributes, the testing subset of LaSOT contains 280 sequences. Methods are ranked by the AUC, P, and Normalized Precision (P\({}_{Norm}\)). Our CTTrack-L achieves the AUC (69.8%) and Prec (76.2%), which is an excellent result that outperforms other methods only except the MixFormer. Our tracker has lower performance than MixFormer on LaSOT because it contains long-term sequences and large variations in content. ViT backbone is a plain and non-hierarchical architecture that maintains feature maps at a certain scale, which may not be able to well handle long-term tracking sequences with scale variations. **TrackingNet** is a large-scale tracking dataset consisting of 511 sequences for testing. The evaluation is performed on the online server. Table 1 shows that CTTrack-L performs better quality and ranks first in AUC score at 84.9%. The gain is 1.0% improvement when compared with the previous best results. **GOT-10k** contains over 10k videos for training and 180 for testing. It forbids the trackers to use external datasets for training. We follow this protocol by retraining our trackers to only use the GOT10k train split. As in Table 1, MixFormer and CSWinTT provide the best performance, with an AO score of 70.7% and 69.4%. Our CTTrack-L has obtained an AO score of 72.8%, significantly outperforming the best existing tracker by 2.1%. **VOT2020** benchmark contains 60 challenging videos. The performance is evaluated using the expected average overlap (EAO), which takes both accuracy (A) and robustness (R). Since our algorithm does not output a segmentation mask, trackers that only predict bounding boxes are selected for comparisons to ensure fairness. It can be seen from Table 7 that our CTTrack-L obtains an EAO of 0.287. ## 5 Conclusion In this work, we analyze the information stream in the attention mechanism in depth. We prove that the vanilla self-attention structure is sufficient for information aggregation, and employ the three information streams of the packed self-attention in the transformer tracking framework. To enhance the information representation, we design the correlative masked decoder consisting of a self-decoder and a cross-decoder to reconstruct the original pixels of both template and search image. Extensive experiments demonstrate the effectiveness of our correlative masked modeling strategy and our compact transformer tracker exhibits impressive performance over previous trackers. In addition, our correlative masked decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. In the future, we plan to combine the feature pyramid or convolution module for better performance on long-term tracking sequences. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{UAV123} & \multicolumn{2}{c}{LaSOT} & \multicolumn{3}{c}{TrackingNet} & \multicolumn{3}{c}{GOT-10k} \\ \cline{2-11} & AUC & P & AUC & P\({}_{Norm}\) & P & AUC & P\({}_{Norm}\) & P & AO & SR\({}_{0.5}\) & SR\({}_{0.75}\) \\ \hline **CTTrack-L** & **71.3** & **93.3** & 69.8 & 79.7 & 76.2 & **84.9** & **89.1** & **83.5** & 75.3 & 84.5 & 74.0 \\ **CTTrack-B** & 68.8 & 89.5 & 67.8 & 77.8 & 74.0 & 82.5 & 87.1 & 80.3 & 73.5 & 83.5 & 70.6 \\ **CTTrack-L -GOT** & - & - & - & - & - & - & - & **72.8** & **81.3** & **71.5** \\ **CTTrack-B -GOT** & - & - & - & - & - & - & - & - & 71.3 & 80.7 & 70.3 \\ \hline MixFormerCui et al. (2022) & 69.5 & 91.0 & **70.1** & **79.9** & **76.3** & 83.9 & 88.9 & 83.1 & 70.7 & 80.0 & 67.8 \\ CSWinTTSong et al. (2022) & 70.5 & 90.3 & 66.2 & 75.2 & 70.9 & 81.9 & 86.7 & 79.5 & 69.4 & 78.9 & 65.4 \\ UTTShen et al. (2022) & - & - & 64.6 & - & 67.2 & 79.7 & - & 77.0 & 67.2 & 76.3 & 60.5 \\ STARKYan et al. (2021) & - & - & 67.1 & 77.0 & - & 82.0 & 86.9 & - & 68.8 & 78.1 & 64.1 \\ TransTChen et al. (2021) & 68.1 & 87.6 & 64.9 & 73.8 & 69.0 & 81.4 & 86.7 & 80.3 & 67.1 & 76.8 & 60.9 \\ TrDiMPWang et al. (2021) & 67.0 & 87.6 & 64.0 & 73.2 & 66.6 & 78.4 & 83.3 & 73.1 & 68.8 & 80.5 & 59.7 \\ STMTrackFu et al. (2021) & 64.7 & - & 60.6 & 69.3 & 63.3 & 80.3 & 85.1 & 76.7 & 64.2 & 73.7 & 57.5 \\ AutoMatchZhang et al. (2021) & 64.4 & 83.8 & 58.2 & 67.5 & 59.9 & 76.0 & 82.4 & 72.5 & 65.2 & 76.6 & 54.3 \\ SiamGATGuo et al. (2021) & 64.6 & 84.3 & 53.9 & 63.3 & 53.0 & - & - & - & 62.7 & 74.3 & 48.8 \\ KYSBhat et al. (2020) & - & - & 55.4 & 63.3 & 55.8 & 74.0 & 80.0 & 68.8 & 63.6 & 75.1 & 51.5 \\ MAMLWang et al. (2020) & - & - & 52.3 & - & 53.1 & 75.7 & 82.2 & 72.5 & - & - & - \\ SiamAttunYu et al. (2020) & 65.0 & 84.5 & 56.0 & 64.8 & - & 75.2 & 81.7 & - & - & - & - \\ SiamFC++Xu et al. (2020) & 61.8 & 80.4 & 54.4 & 62.3 & 54.7 & 75.4 & 80.0 & 70.5 & 59.5 & 69.5 & 47.9 \\ SiamRPN++Li et al. (2019) & 64.2 & 84.0 & 49.6 & 56.9 & 49.1 & 73.3 & 80.0 & 69.4 & 51.7 & 61.6 & 32.5 \\ DiMPBhat et al. (2019) & 64.2 & 84.9 & 57.7 & 66.4 & 57.9 & 74.0 & 80.1 & 68.7 & 61.1 & 71.7 & 49.2 \\ ATOMDanelljan et al. (2019) & 61.7 & 82.7 & 51.5 & 57.6 & 50.5 & 70.3 & 77.1 & 64.8 & 55.6 & 63.4 & 40.2 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparisons with previous state-of-the-art trackers on four challenge benchmarks. The **red**, green and blue indicate performances ranked at first, second, and third places. The **tracker** -**GOT** denotes only trained on the GOT-10k train split. \begin{table} \begin{tabular}{l|c c c} \hline \hline Methods & EAO\(\uparrow\) & Accuracy\(\uparrow\) & Robustness\(\uparrow\) \\ \hline SiamFC & 0.179 & 0.418 & 0.502 \\ ATOM & 0.271 & 0.462 & 0.734 \\ DiMP & 0.274 & 0.457 & 0.740 \\ UPDT & 0.278 & 0.465 & 0.755 \\ TransT & 0.293 & 0.477 & 0.754 \\ CSWinTT & **0.304** & **0.480** & **0.787** \\ \hline CTTrack-L & 0.287 & 0.453 & **0.787** \\ \hline \hline \end{tabular} \end{table} Table 7: Comparisons on VOT2020, where trackers only predict bounding boxes rather than masks. ## Acknowledgments This work is supported by the national key research and development program of China under Grant No.2020YFB1805601, National Natural Science Foundation of China (NSFC No. 62272184), and CCF-Tencent Open Research Fund (CCF-Tencent RAGR20220120). The computation is completed in the HPC Platform of Huazhong University of Science and Technology.
2303.06923
Strategy-proof Budgeting via a VCG-like Mechanism
We present a strategy-proof public goods budgeting mechanism where agents determine both the total volume of expanses and the specific allocation. It is constructed as a modification of VCG to a less typical environment, namely where we do not assume quasi-linear utilities nor direct revelation. We further show that under plausible assumptions it satisfies strategy-proofness in strictly dominant strategies, and consequently implements the social optimum as a Coalition Proof Nash Equilibrium. A primary (albeit not an exclusive) motivation of our model is Participatory Budgeting, where members of a community collectively decide the spending policy of public tax dollars. While incentives alignment in our mechanism, as in classic VCG, is achieved via individual payments we charge from agents, in a PB context that seems unreasonable. Our second main result thus provides that, under further specifications relevant in that context, these payments will vanish in large populations. In the last section we expand the mechanism's definition to a class of mechanisms in which the designer can prioritize certain outcomes she sees as desirable. In particular we give the example of favoring equitable (egalitarian) allocations.
Jonathan Wagner, Reshef Meir
2023-03-13T08:40:08Z
http://arxiv.org/abs/2303.06923v1
# Strategy-proof Budgeting via a VCG-like Mechanism ###### Abstract We present a strategy-proof public goods budgeting mechanism where agents determine both the total volume of expanses and the specific allocation. It is constructed as a modification of VCG to a non-typical environment, namely where we do not assume quasi-linear utilities nor direct revelation. We further show that under plausible assumptions it satisfies strategyproofness in strictly dominant strategies, and consequently implements the social optimum as a Coalition-Proof Nash Equilibrium. A primary (albeit not an exclusive) motivation of our model is Participatory Budgeting, where members of a community collectively decide the spending policy of public tax dollars. While incentives alignment in our mechanism, as in classic VCG, is achieved via individual payments we charge from agents, in a PB context that seems unreasonable. Our second main result thus provides that, under further specifications relevant in that context, these payments will vanish in large populations. In the last section we expand the mechanism's definition to a class of mechanisms in which the designer can prioritize certain outcomes she sees as desirable. In particular we give the example of favoring equitable/egalitarian allocations. ## 1. Introduction We study a model where a population of \(n\) agents face the decision of funding several public goods or investments that serve their collective objectives. Formally, we start with an available budget of \(B_{0}\geq 0\) that should be allocated among \(m\geq 1\) different alternatives. The _budget decision_ we seek is a pair \((x,t)\) where \(t\in[-\frac{B_{0}}{n},\infty)\) is a monetary sum ("tax") that each agent adds to (or subtracts from) the budget, and \(x\in\Lambda^{m}:=\{x\in R^{n}|x_{j}\geq 0\ \forall j,\ \sum_{j}x_{j}=1\}\) represents the allocation of the resulting budget \(B_{t}=B_{0}+nt\) among the \(m\) alternatives. Note that we allow \(t<0\), meaning that some of the initial budget \(B_{0}\) will be distributed equally among agents rather than fund public investments. We are interested in constructing a collective decision mechanism to which every agent submits her preferred budget decision, and one that is incentive compatible (IC) particularly. Table 1 demonstrates how agents might report to such mechanism their preferences for budget allocation among three municipal services. Note e.g. that agent \(a\) suggests an individual payment of \(t=20\) each, and so to allocate a total of \(B_{0}+3t=100+3\cdot 20=160\). Also note that agents \(b\) and \(c\) propose the same normalized allocation \(x_{b}=x_{c}\in\Delta^{3}\) but differ in their preferred tax. ### Motivation and Main Results Incentives alignment has long been a major interest in Social Choice and Mechanism Design research. While Gibbard & Satterthwaite's Theorem (Gibbard & Satterthwaite, 1993; Gibbard et al., 2000) provides a negative result for the most general setup, positive results do exist when the preference space is contracted somehow (mainly to singled-peaked (Satterthwaite, 1993)), or when monetary transfers that the mechanism charges from agents are introduced. In the latter case, the VCG mechanism (Gibbard et al., 2000) is the canonical model and moreover, Roberts' Theorem (Roberts, 1993) shows that for a general preferences domain, any strategy-proof mechanism is in a sense a generalized version of the VCG principle. However, VCG is built upon an assumed model that cannot apply to every scenario. Most importantly, it depends crucially on underlying quasi-linear utilities, meaning that the overall satisfaction of agent \(i\) when the mechanism outputs the decision \(\Omega\) and charges her with a payment \(p_{i}\) is expressed as \(u_{i}(\Omega,p_{i})=v_{i}(\Omega)-p_{i}\) where \(v_{i}(\Omega)\) is the value she attributes to \(\Omega\). While plausible in many economic situations, quasi-linearity is in particular violated if agents do not asses their benefits from different outcomes in monetary terms (making the subtraction of \(p_{i}\) from \(v_{i}(\Omega)\) a meaningless expression), whether because these are purely non-financial or just difficult to quantify. When we consider investments in public goods, that is likely to be the case. Furthermore, VCG is a _'Direct Revelation'_ mechanism in which agents explicitly report the full description of their preferences (Gibbard et al., 2000). That is reasonable when, for example, we ask agents to specify a value (that is, the maximum price they are willing to pay) for each of the items offered in an auction. In our case the space of optional outcomes is a multidimensional continuum and the 'value' is quite an abstract concept, non-linear in particular. As should be obvious when we later introduce our model to its full details, a direct revelation mechanism becomes unlikely in that context and we thus only collect optimal allocations,1 as demonstrated in Table 1. \begin{table} \begin{tabular}{l|c|c|c|c} & Tax (\(t\)) & Education (\(x_{1}\)) & Parks (\(x_{2}\)) & Transport (\(x_{3}\)) \\ \hline agent \(a\) & 0 & 60 (0.6) & 30 (0.3) & 10 (0.1) \\ agent \(b\) & 20 & 32 (0.2) & 48 (0.3) & 80 (0.5) \\ agent \(c\) & -20 & 8 (0.2) & 12 (0.3) & 20 (0.5) \\ \end{tabular} \end{table} Table 1. An example voting profile for three alternatives and three agents, with an initial budget \(B_{0}=100\). _Contribution and paper structure._ We show that for the budgeting problem described above we can construct a 'VCG-like' mechanism all the same. The key idea we will use is that while we can no longer justify the simple quasi-linear relation between the utility in the outcome and that of the charged payment, the fact that the decision space itself involves monetary transfers enables revealing of the true relation between these two in each agents' perspective. At the begining of Section 3 show how full information can in fact be extracted from the preferences that agents report. Section 4 introduces our proposed mechanism, and in 4.3 we show that under certain conditions it is IC in strictly dominant strategies and consequently Coalition-proof [8]. Finally, in some applications of the model collecting money from agents (that is, money paid "to the mechanism", on top of the tax \(t\) that is a part of the chosen outcome and finances shared expenditures) serves well the purpose of aligning incentives, however may not be reasonable on its own merits. Thus, our second and most technically challenging main result, presented in Section 5, provides that, for a relevant subclass of utility functions, these payments become negligible in large populations. A few examples of environments where our mechanism may be useful are listed below, starting with the one standing at the center of our focus. _Participatory Budgeting._ Our model falls naturally within a Participatory Budgeting (PB) framework, where members of a community determine the allocation of their shared budget via some voting mechanism.2 While most PB implementations worldwide, and accordingly much of the PB literature, involve the allocation of a given budget among several public indivisible goods, several previous works were dedicated to divisible PB models [12; 16; 18; 19; 21], where 'projects' may correspond to city departments, e.g. education, transportation, parks and recreation and so on. Our model moreover involves taxation in the collective decision, in which we find several advantages. Technically, our adjustment of VCG payments to non quasi-linear utilities is enabled solely due to that feature. Conceptually, it expands the scope of control delegated to society through direct democracy, and also brings a more valuable feedback to the designer on the true value that public expenditures create, rather than just a comparison between them. Being our primary motivation, the terminology we use and our approach to modelling is mainly driven by the PB application, however intended to represent a wider range of real life economic situations. While we are aware that some of the assumptions we make may not be generically adequate in every possible instance of different environments too, the basic ideas extend to at least some of them. We thoroughly discuss our works' relations with previous models and results in that area in the next section of the introduction. Footnote 2: The Participatory Budget Project’: [https://www.participatorybudgeting.org/](https://www.participatorybudgeting.org/) _Shared financing among nearby businesses._ A group of nearby businesses (for example in the same shopping center) might cofinance some facilities and services that are more public or regional by nature, e.g. security, costumers' parking, shared dining areas, public activities that increase exposure, etc. _Environmental or other non-financial investments._ That could apply to governments imposing an 'environmental (or other) tax' on firms or different countries deciding on their coordinated actions [10]. Other pursued goals might include border controls of neighbouring countries, for example. _Joint R&D and Human Capital ventures._ R & D and Human Capital investments are typically long term and require considerable resources. Thus, firms in the same or related industries might benefit from joining forces in funding e.g. the training of required professions or the development of new technologies and methods. Such collaborations might scale from a small number of businesses deciding to cooperate, through a unionized industry and up to being run by the government state-wide. _Non-monetary 'Currencies'._ As the concept of 'value' of different outcomes is more abstract in our model, it may also apply to situations where the investments themselves, and thereby the collected tax payments, are not necessarily monetary. Examples may include members in a community willing to dedicate an agreed amount of working hours within their community, or firms that allocate resources such as land (e.g. agricultural land dedicated to cooperative experimental planting), or technological resources such as storage space or computation force. A variation of our model where we employ a heterogeneous taxation may also be relevant, especially to applications other than PB. For example, imposing higher contributions on wealthier countries in joint environmental investments. We discuss that in appendix D and show that our IC results extend to such setup. ### Modeling Approaches in Divisible Participatory Budgeting Before presenting our own model, we discuss here the limitations of existing incentive compatibility concepts from the divisible PB literature. _Spatial models._ Several past works have studied incentive alignment in PB and divisible PB particularly [1; 9; 16; 18; 21]. Notably, two different works [18; 21] presented incentive compatible (or'strategyproof') mechanisms for a divisible PB scenario similar to ours. These works assume \(\ell_{1}\)-norm preferences in which each agent \(i\) reports her optimal allocation \(x^{(i)}\in\Delta^{m}\) to the mechanism, and her utility in any allocation \(x\) is given by \(u_{i}(x)=-\left\|x^{(i)}-x\right\|_{1}\). Other \(\ell_{p}\) norms [19], or more generally single-peaked preferences [5; 27], are well studied in social choice literature and known to allow for mechanisms with strong strategyproofness guarantees. This IC definition relies on the underlying assumption that the utility of an agent depends solely on the 'distance' between the accepted outcome and her favoured alternative. Indeed, in the absence of a concrete way to measure the monetary value of decisions, minimizing the distance to a voter's ideal allocation is a reasonable solution. However, we argue that when agents have concrete measurable utility from decisions, as in PB, the spatial approach may not adequately capture the preference of a voter, as the very reason that voters' optimal allocations are not typically uniform is that they value money spent on each alternative differently. In particular, minimizing the \(\ell_{1}\) distance is not just suboptimal, but may in fact incentivize agents to manipulate. To see why, consider Voter \(a\) from Table 1, with her ideal allocation \(x_{a}=(0.6,0.3,0.1)\). Now, let us further assume that \(a\) is the parent to children under 18 years of age and the three public goods considered are education systems, public parks and public transportation-- that she very rarely uses and thus does not value highly. Reasonably, that voter might strictly prefer the allocation \(x^{\prime}=(0.7,0.2,0.1)\) over \(x^{\prime\prime}=(0.6,0.2,0.2)\) because while both are suboptimal to her, investing another 0.1 of the budget in the education of her children may serve her interests better than investing it in a facility she rarely enjoys. However, under the \(\ell_{1}\)-distance model she is indifferent between the two, meaning that her incentive to manipulate the outcome from \(x^{\prime\prime}\) to \(x^{\prime}\), when and if facing the opportunity, is not accounted for.3 Footnote 3: Indeed, concrete examples that show how the incentive compatibility in [21] and [18] might ”fail”, in the above sense, are not difficult to construct. _Social choice vs. microeconomic models._ As explained in [18], the solution concept that minimizes the \(\ell_{1}\) distances from agents' ideal points generalizes the one-dimensional median rule [27]. Similarly, most of the literature on divisible PB adopt or generalizes the same assumptions used for PB with indivisible projects, which in turn stem from standard modelling assumptions in voting and computational social choice. However, we argue that divisible budget allocation is much closer in nature to problems typically treated within microeconomic framework (Meyer, 1992; Goyal and Goyal, 2006; Goyal and Goyal, 2006). This is true especially when assigning budgets to departments etc. rather than to specific projects with variable costs.4 Hence, it makes more sense to adopt conventional economic assumptions regarding demand and utility, as in (Goyal and Goyal, 2006) and as we do here. In particular: Footnote 4: See e.g. [https://pbstanford.org/boston16internal/knapsack](https://pbstanford.org/boston16internal/knapsack) * _Additive concave utilities._ We adopt the additive concave utilities model (Goyal and Goyal, 2006; Goyal and Goyal, 2006; Goyal and Goyal, 2006) that offers a more concrete description of the utility gained from different public investments. Its most closely related version to ours is found in a former work by Fain et al. (Goyal and Goyal, 2006). There, the utility of agent \(i\) in allocation \(X=(X_{1},X_{2}\dots)\) is expressed as \[U_{i}(X)=\sum_{j}\alpha_{i,j}\theta_{j}(X_{j})\] (1) where \(X_{j}\) is the amount spent on public good \(j\), the \(\{\theta_{j}\}_{j}\) functions are monotonically increasing in \(X_{j}\) for all \(j\) and strictly concave, (smoothly) expressing the plausible assumption of decreasing marginal gains, and \(\alpha_{i,j}\) are scalars that vary between agents. As we assume that part of the budget is collected via a tax-per-agent \(t\), our model adds on the above the disutility of a voter from the tax payment. * _Optimal points characterized by the MRS conditions_, that follows form the concavity and some additional conventions on utilities (Goyal and Goyal, 2006). * _Utility depends on public investment per capita._ (that we add to the model in Section 5). On a large scale, it is reasonable that the quality of public goods depends more on spending per capita rather than on the nominal amount.5 Footnote 5: See for example [https://data.oecd.org/gga/general-government-spending.htm](https://data.oecd.org/gga/general-government-spending.htm), and (Goyal and Goyal, 2006). In contrast, _elicitation_ is an issue that has received much more attention in the literature of mechanism design and computational social choice than in microeconomics. For example there is a live discussion in the context of indivisible PB on the tradeoff between expressiveness of the ballot and effort required for elicitation (Goyal and Goyal, 2006; Goyal and Goyal, 2006). Similarly, we argue that it does not make sense to assume that we have direct access to voters' preferences, and here we adopt from computational social choice the assumptions that voters simply report their most preferred allocation, as in (Goyal and Goyal, 2006). In terms of applicability, however, the obvious shortcoming of our model is that it requires us to explicitly specify the functions \(\{\theta_{j}\}_{j}\) and \(f\), which are fairly abstract. Importantly, we **do not** assume that agents are 'aware' of their assumed utility function, but, conventionally, only know their preferences regarding the decision space, that presumably can be interpreted as derived from an underlying utility model (Goyal and Goyal, 2006). Of course, any such model would be an approximation at best. Nevertheless, it is fair to assume that any choice of monotonically increasing concave functions probably better approximates individuals' preferences--and thereby incentives--than the spatial model or a linear additive model (Goyal and Goyal, 2006). (Note that the linear additive model is arguably much less reasonable than concave, not merely due to excluding diminishing returns, but because it implies boundry optimal allocations where only alternatives \(j\) that maximize \(\alpha_{i,j}\) are implemented). ### Further related literature The Economic literature on public goods markets, equilibria and optimal taxation is abundant. ((Meyer, 1992; Goyal and Goyal, 2006; Goyal and Goyal, 2006), just to name a few). While our work adopts a similar approach to modelling and also optimizes social welfare, this brunch of the literature rarely discusses mechanisms. One exception that we know of is found in (Garg et al., 2017), in which the socially optimal outcome is implemented in strictly dominant strategies using a method very similar to ours, however for quite a different utility model. To the best of our knowledge, the only existing PB mechanism that included tax in the collective decision previously to ours was studied by Garg et al. (1998) in the context of experimenting _'iterative voting'_ mechanisms. Interestingly, it may suggest some supporting evidence in favour of the additive concave utility model over spatial models in that context. Two other previous works (Bang et al., 2016; Garg et al., 2017) incorporated private funding into a PB model, albeit in the form of voluntary donations that every agent can choose freely and not as a collectively decided sum that is collected from (or paid to) everyone, as we consider here. The literature on divisible PB is relatively narrow. In terms of incentive compatibility, (Fain et al., 2016) presented the soundest results, under a spatial utility model. Alternatively, Fain et al. (Fain et al., 2016) offer a randomized mechanism that is 'approximately-truthful' for the special case of 1-degree homogeneous additive utilities. The _Knapsack Voting_ mechanism introduced in (Fain et al., 2016) also satisfies some weaker notion of strategyproofness under a similar model. Aziz et al. (Aziz et al., 2016) presented IC mechanisms for additive linear utilities, although their model is primarily motivated by randomized approval mechanisms. A similar utility model is also found in (Ziegler et al., 2017). Overall, in relation to the divisible PB field, this work offers an SDSIC mechanism under concave additive utilities, to the best of our knowledge for the first time. Our desire for diminishing the (modified) VCG payments resembles the idea of _redistribution_ in mechanism design (Garg et al., 2017; Garg et al., 2017). Such methods are especially relevant in a discrete decision space and can eliminate surplus only partially, while in our model the complete (asymptotic) vanishing is much thanks to the continuity of the decision space. Much of the PB literature deals with the concept of _fair allocations_(Bang et al., 2016; Garg et al., 2017; Garg et al., 2017). While not a primary goal of our model, we show that the designer can bias the allocation closer to a favorable allocation--including one they see as fair. ## 2. Model and Preliminaries We denote by \(\Delta^{m}\) the set of distributions over \(m\) elements, and use \([m]\) as a shortcut for \(\{1,\ldots,m\}\). A set of \(n\) agents (voters) need to collectively reach a _budget decision_\((x,t)\) described as follows. \(t\in\mathbb{R}\) is a lump-sum tax collected from every agent. \(x\in\Delta^{m}\) is an allocation of the total available budget \(B_{t}:=B_{0}+nt\) among some \(m\) pre-given public goods, where \(B_{0}\) is an external, non tax-funded source of public funds. \(t\) is restricted only by \(t>-\frac{B_{0}}{n}\), meaning that voters can decide either to finance a budget larger than \(B_{0}\) through positive taxation, or allocate some of it to themselves directly as cash (negative taxation). The collective decision is taken through some voting mechanism to which every agent submits her most preferred budget decision \((x^{(i)},t^{(i)})\).
2306.12150
Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Convolutional Neural Networks (CNNs) are frequently and successfully used in medical prediction tasks. They are often used in combination with transfer learning, leading to improved performance when training data for the task are scarce. The resulting models are highly complex and typically do not provide any insight into their predictive mechanisms, motivating the field of 'explainable' artificial intelligence (XAI). However, previous studies have rarely quantitatively evaluated the 'explanation performance' of XAI methods against ground-truth data, and transfer learning and its influence on objective measures of explanation performance has not been investigated. Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task. We employ this benchmark to understand the influence of transfer learning on the quality of explanations. Experimental results show that popular XAI methods applied to the same underlying model differ vastly in performance, even when considering only correctly classified examples. We further observe that explanation performance strongly depends on the task used for pre-training and the number of CNN layers pre-trained. These results hold after correcting for a substantial correlation between explanation and classification performance.
Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
2023-06-21T09:53:37Z
http://arxiv.org/abs/2306.12150v1
Benchmark data to study the influence of pre-training on explanation performance in MR image classification ###### Abstract Convolutional Neural Networks (CNNs) are frequently and successfully used in medical prediction tasks. They are often used in combination with transfer learning, leading to improved performance when training data for the task are scarce. The resulting models are highly complex and typically do not provide any insight into their predictive mechanisms, motivating the field of 'explainable' artificial intelligence (XAI). However, previous studies have rarely quantitatively evaluated the 'explanation performance' of XAI methods against ground-truth data, and transfer learning and its influence on objective measures of explanation performance has not been investigated. Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task. We employ this benchmark to understand the influence of transfer learning on the quality of explanations. Experimental results show that popular XAI methods applied to the same underlying model differ vastly in performance, even when considering only correctly classified examples. We further observe that explanation performance strongly depends on the task used for pre-training and the number of CNN layers pre-trained. These results hold after correcting for a substantial correlation between explanation and classification performance. ## 1 Introduction Following AlexNet's [23] victory in the ImageNet competition, CNNs developed to become the deep neural network (DNN) architecture of choice for any image-based prediction tasks. Apart from their ingenious design, the success of CNNs was made possible by ever-growing supplies of data and computational resources. However, sufficient labelled data to train complex CNNs are not widely available for every prediction task. This is especially true for medical imaging data, which is cumbersome to acquire and underlies strict data protection regulations. To address this bottleneck, Transfer Learning (TL) techniques are frequently employed [2]. In the context of DNNs, TL strategies often consist of two steps. First, a surrogate model is trained on a different prediction tasks, for which ample training data are available. This is called pre-training. And, second, the resulting model is adapted to the prediction task of interest, where only parts of the model's parameters are updated, while other parameters are kept untouched. [27]. This is called re-training or fine-tuning and requires smaller amounts of labelled data than training a network from scratch, leading to a less computationally expensive process. It is believed that TL techniques improves the generalisation by identifying common features between the two tasks [38]. Also, TL is frequently employed for prediction tasks in medical imaging. Cheng and Malhi [6] use a DNN, trained with the ImageNet dataset [11], to classify ultrasound images into eleven categories, achieving better results than human radiologists. Another example is the reconstruction of Magnetic Resonance Imaging (MRI) data with models trained on an image corpus that was augmented with ImageNet data [8]. The resulting model outperformed conventional reconstruction techniques. However, it also has been argued [30] that the use of pre-trained models may not be adequate for the medical field. The main argument being that structures in medical images are very different from those observed in natural images. Hence, feature representations learned during pre-training may not be useful for solving clinical tasks. Despite the success of DNN models, their intrinsic structure makes them hard to interpret. This challenges their real-world applicability in high-stake fields such as medicine. Although many practices in medicine are still not purely evidence-based, the risk posed by faulty algorithms is exponentially higher than that of doctor-patient interactions [37]. Thus, it has been recognised that the working principles of complex learning algorithms need to be made transparent if such algorithms are to be used on critical human data. The General Data Protection Regulation of the European Union (GDPR, Article 15), for example, states that patients have the right to receive meaningful information about how decisions are achieved based on their data, including decisions made based on artificial intelligence algorithms, such as DNNs [14]. The field of 'explainable artificial intelligence' (XAI) originated to address this need. Several XAI methods seek to deliver 'post-hoc explanations', where explanations are obtained after a model has been trained and applied to a test input. The outcome of such methods is often a so-called heat map, which assigns 'importance' scores to the input features. However, despite the popularity of various XAI methods - accumulating thousands of citations within a few years - their theoretical underpinnings are far from established. Most importantly, there is no agreed upon definition of what explainability means or what XAI methods are supposed to deliver [44]. Consequently, little quantitative empirical validation of XAI methods [9] exists. The lack of robust quantitative criteria to benchmark XAI methods against ground-truth data is problematic and speaks against their use in critical domains. It also means that, when highly unstable results for identical inputs occur, it is impossible to judge which one is more accurate. One step towards overcoming this issue is to devise working definitions of what constitutes a ground-truth explanation. Such a definition would allow us to measure the explanation performance of XAI methods using objective metrics. So far, most quantitative studies in the XAI field focus on secondary quality aspects such as robustness or uncertainty but spare out the fundamental issue of correctness, which should be the prime concern before looking into any secondary metrics such as stability. Recently, however, the field has moved towards a more objective validation of XAI approaches using synthetic data [36, 43, 3, 17, 1]. Wilming et al. [41], for example, provide a data-driven definition of explainability as well as quantitative metrics to measure explanation performance. Their empirical evaluation of a large number of XAI approaches is, however, limited to linear toy data. On the other hand, [20] used XAI methods to find structural changes of the ageing brain, which allowed the authors to identify white matter lesions associated to the ageing brain and forms of dementia. The obtained heat maps were compared with the lesion maps. Cherti and Jitsev [7] analysed pre-training as well, while fine-tuning their models to predict the age of a brain. They explained the model using LRP [5] and compared the resulting heat maps with white matter lesion maps created using the lesion segmentation toolbox by Schmidt et al. [29]. In this work, we extend these lines of research in one major aspect. We devise ground-truth data for a realistic clinical use case, the classification of brain MRIs, according to the presence of different lesion types. To this end, we overlay real MR images with artificial lesions, loosely resembling white matter hyperintensities (WMH). WMH are important biomarkers of the aging brain and ageing-related neurodegenerative disorders [40, 12]. We establish a realistic use case for an MRI prediction task, and more importantly, provide a ground-truth benchmark for model explanations in this task, as the positions of the artificial lesions defining solely the class-membership are fully known by construction. In the second part of this work, we show the benchmark's utility by investigating both classification as well as explanation performance as a function of pre-training. We provide a framework for generating MRI slices with different types of lesions and respective ground-truths. We then use this framework to benchmark common XAI methods against each other and to compare the explanation performance of models pre-trained using either within-domain data (using different MRI classification tasks) or out-of-domain data (using natural images from the ImageNet classification challenge [10]). ## 2 Methods ### Data Generation To generate the data used for this analysis, we use 2-dimensional T1-weighted axial MRI slices from 1007 healthy adults aged between 22 and 37 years, sourced from the Human Connectome Project (HCP) [39] (see also supplemental material section A). The MRI data consists of 3D MRI slices pre-processed with FSL [22] and FreeSurfer [15] tools [18; 21] and defaced through the algorithm reported in Milchenko and Marcus [25]. These slices provide the background on which a random number of artificial 'lesions' are overlaid. Regular and irregular lesions are generated and added to the slices. Each slice contains only one type of lesions. This defines a binary classification problem, which we solve using CNNs. The lesions are added so that the dataset created is balanced. For this study, we keep only slices with less than 55% black pixels. These images are 260\(\times\)311 pixels in size. To obtain square images, they are padded vertically with zeros and cropped horizontally. The final size of the slices result in background images \(B\in\mathbb{R}^{270\times 270}\), where we keep the intensity values \(B_{ij}\in[0,0.7]\) for \(i,j=1,\dots,270\). Artificial lesions are created from a 256\(\times\)256 pixel noise image, to which a Gaussian filter with a radius of 2 pixels is applied. The Otsu method [26] is used to binarise the smoothed image. After the application of the morphological operations erosion and opening, a second erosion is applied to create more irregular shapes (see supplemental material section C). Since these shapes occur less frequently than regular shapes, these determine the number of different noise images necessary to create a given number of lesions. From the images obtained after the application of the morphological operations, the connected components (contiguous groups of non-zero intensity pixels fully surrounded by zero intensity pixels) are identified, which serve as lesion candidates. Further, lesions are selected based on the compactness of their shape. Here, it is sufficient to consider the isoperimetric inequality on a plane Figure 1: Example of images of the dataset created. The top row consists of axial MRI slices from the Human Connectome Project (HCP) [39] healthy brain dataset, with artificial lesions added. The bottom row consists of the top row images, but with the position of these lesions contoured in blue, forming the ground-truth for an explanation. \(A\leq p^{2}/4\pi\), where \(A\in\mathbb{R}\) is the area of a particular lesion shape and \(p\in\mathbb{R}\) its perimeter. The compactness is obtained by comparing the shape of the lesion candidate to a circle with the same perimeter. The larger the compactness, the rounder the shape. Here, regular lesions are required to have a compactness above \(0.8\) and irregular lesions have a compactness below \(0.4\). After selecting the lesions, they are padded with a \(2\)-pixel margin, and a Gaussian filter with a radius of \(0.75\) pixels is applied to smooth the lesion boundaries. Examples of obtained lesions are displayed in Figure 1. Three to five lesions of the same type (regular or irregular) are composed in one image \(L\in\mathbb{R}^{270\times 270}\) in random locations within the brain, without overlapping and pixel-wise multiplied with the background MRI \(B\) (see Figure 2). For the lesions we consider the intensity values \(L_{ij}\in[0,w]\), where \(i,j\) correspond to pixels representing lesions. The parameter \(w\) is a constant that controls the SNR. Higher \(w\) values lead to whiter lesions and higher SNR, leading to easier classification and explanation tasks. In this study, we set \(w=0.5\). Note also, that this setup may lead to the emergence of so-called suppressor variables. These would be pixels of the background outside any lesion, which could still provide a model with information on how to remove background content from lesion areas in order to improve the model's predictions. Suppressor variables have been shown to be often misinterpreted for important class-dependent features by XAI methods [19; 41]. In parallel to the generation of the actual synthetic MR images, the same lesions are added to a black image to create ground-truth masks. Examples of these created images can be seen in Figure 2 (C). Out of the \(1\:006\) subjects in the HCP dataset, 60% were used to create the training dataset, 20% to create the validation dataset, and another 20% to create the holdout dataset, corresponding to \(24\:924\), \(8\:319\), and \(8\:319\) slices, respectively. ### Pre-training We apply the XAI methods to the VGG-16 [32] architecture, included in the Torchvision package, version 0.12.0+cu102. Two models are pre-trained using two different corpora, and serve as starting points for our study. The first model is pre-trained using the ImageNet dataset [11] (out-of-domain pre-training). The weights used are included in the same version of Torchvision. The second model is pre-trained using MRI slices extracted from the HCP as described before but without artificial lesions (within-domain pre-training). Here, the task is to classify slices according to whether they were acquired from female or male subjects. To train the latter model, \(24\:924\) slices are used, 46% of which belong to male subjects and 54% to female subjects. These slices are arranged into batches of 32 data points. The model is trained using stochastic gradient decent (SGD) with a learning rate (LR) of 0.02 and momentum of 0.5. The learning rate is reduced by 10% every 5 epochs. Cross-entropy is used as the loss function. ### Fine-tuning After pre-training, the models are fine-tuned layer-wise on the lesion-classification problem, with images chosen from the holdout dataset, which we split into train/validation/test again (see supplementary material section D). Each degree of fine-tuning includes the convolutional layers between Figure 2: Example of the lesion creation process. (A) depicts an original axial MRI slice from the Human Connectome Project and denoted as background image \(B\). (B) showcases an example lesion mask \(L\) composed of several lesions. (C) represents ground-truth explanations used for the XAI performance study. two consecutive max-pooling layers. Thus, the five degrees of fine-tuning are: _Iconv_ (fine-tuning up to the first max-pooling layer), _2conv_ (fine-tuning up to the second max-pooling layer), and so on, up to _all_ (fine-tuning of all VGG-16 layers). Weights in layers that are not to be fine-tuned are frozen. SGD and Cross-entropy loss with the same parameters as used for the pre-training are employed in this phase. However, several different LRs are used. ### XAI methods We apply XAI methods from the Captum library (version 0.5.0). These methods have been proposed to provide 'explanations' of the models' output in the form of a heat map \(\mathbf{s}\in\mathbb{R}^{270\times 270}\), assigning an 'importance' score to each input feature of an example. We use the default settings from Captum for all XAI methods. Wherever a baseline - a reference point to begin the computation of the explanation - is needed, an all-zeros image is used. This is done for Integrated Gradients, DeepLift, and GradientSHAP. The absolute value of the obtained importance score or heat map constitutes the basis for our visualisations and quantitative explanation analyses. For visualisation purposes, we further transformed the intensity of the importance scores by \(-\log(1-\mathbf{s}_{ij}(1-\nicefrac{{1}}{{b}}))/\log(b)\), where \(\log\) is the natural logarithm and \(b=0.5\). The XAI methods used were Integrated Gradients [35], Gradient SHAP [24], Layer-wise Relevance Propagation (LRP) [4], DeepLIFT [31], Saliency [33], Deconvolution [42] and Guided Backpropagation [34]. ### Explanation performance Our definition of quantitative explanation performance is the precision to which the generated importance or heat maps resemble the ground-truth, i.e. the location of the lesions (cf. Figure 2). It would be expected that the best explanation would only highlight the pixels of the ground-truth, since those are the ones that are relevant to the classification task at hand. We determine the explanation performance by finding the \(n\) most intense pixels of the heat map \(\mathbf{s}\), where \(n\) is equal to the number of pixels in the ground-truth of each image. Then we calculate the number of these pixels that were in the ground-truth (true positives). The precision is obtained by calculating the ratio between the true positives and all positives (the number of pixels in the ground-truth). ### Baselines The performance of each explanation is then compared to several baseline methods, which act as 'null models' for explanation performance. These baselines are models that are initialised randomly and not trained (random model) and two edge detection methods, the Laplace and Sobel filters. ## 3 Experiments Showcasing the proposed dataset's utility, we fine-tune two VGG-16 models that have been previously pre-trained with the two corpora (ImageNet and MRI), to five different degrees. For each degree of fine-tuning, we fine-tuned 15 models with different seeds. Then we select the three best-performing models, where performance is measured on test data in terms of accuracy. We further analyse the model explanation performance of common XAI methods with respect to the ground-truth explanations in the form of lesion maps provided by our dataset. A reference to the Python code to reproduce our experiments is provided in the supplemental material section B. ## 4 Results The best-performing models had an accuracy above 90% except the least fine-tuned ones (1conv). All models, except the least fine-tuned ones (1conv), reached accuracies above 90%. The models pre-trained with ImageNet achieved higher accuracy than the ones pre-trained with MR images. ### Qualitative analysis of explanations Figure 3 displays importance heat maps for a test sample with four irregular lesions. These explanations are obtained by eight XAI methods for five degrees of fine-tuning. Plots are divided into two sections reflecting the two corpora used for pre-training (ImageNet and MRI female vs. male). The white contours in each heat map represent the ground-truth of the explanation. A good explanation should give high attribution to regions inside the white contour and low everywhere else. In this respect, most of the explanations appear to perform well, identifying most of the lesions, especially for high degrees of fine-tuning. However, the explanations generally do not highlight all of the lesions in the ground-truth. This image also shows that, for some XAI methods, the explanation may deteriorate for an intermediate degree of fine-tuning, and then improve again. This can be seen especially in the results of the model pre-trained with ImageNet data. Heat maps of the untrained baseline model are shown in the section F of the supplement. When comparing the 'explanations' obtained from models pre-trained on ImageNet data with the ones from models pre-trained on MRI data, the latter seems to contain less contamination from the structural features of the MRI background, especially for Deconvolution and Guided Backpropagation. We can further argue that some models seem to do a better job identifying the lesions than others. Particularly noisy explanations are obtained with Deconvolution, especially for models pre-trained with ImageNet data. In this case, pixels with higher importance attribution seem to form a regular grid, roughly covering the shape of the brain of the underlying MRI slice. For models pre-trained on the MRI corpus, Deconvolution is able to place higher importance within the lesions for higher degrees of fine-tuning. Figure 3: Examples of heat maps representing importance scores attributed to individual inputs by popular XAI methods for several degrees of fine-tuning of the VGG-16 architecture. The models were selected to achieve maximal test validation accuracy. Each row corresponds to an XAI method, whereas each column corresponds to a different degree of fine-tuning from 1 convolution block (1conv) to the entire network (all). The image is divided into two vertical blocks, where importance maps obtained from models pre-trained with ImageNet data are depicted on the left, and importance maps obtained from models pre-trained with MR images are depicted on the right. ### Quantitative analysis of explanation performance Figure 4 shows quantitative explanation performance. Here, each boxplot was derived from the intersection of test images that were correctly classified by all models (\(N=2\)\(371\)). The results obtained for the edge filter baseline as well as the random baseline model are derived from the same \(2\)\(371\) images. Note that the edge detection filters only depend on the given image and are independent of models and XAI methods. Thus, identical results are presented for edge filters in each subfigure. The lines in the background correspond to the average classification performance (test accuracy) of the five models for each degree of fine-tuning. The random baseline model is only one and has a test accuracy of \(50\%\). Interestingly, models pre-trained with ImageNet data consistently achieved higher classification performance than models pre-trained with MR images. The classification performance of the models pre-trained with MR images peaks at an intermediate degree of fine-tuning (3conv), while the models pre-trained with ImageNet improve with higher fine-tuning degrees. In some settings, ImageNet pre-training leads to considerably worse explanation performance. This is the case for specific methods such as Deconvolution and, to some extent, Guided Backpropagation. Moreover ImageNet pre-training leads to worse explanations across all XAI methods for lower degrees of fine-tuning (1conv and 2conv), where large parts of the models are prohibited to depart from the internal representations learned on the ImageNet data. As a function of the amount of fine-tuning, explanation performance generally increases with higher degrees of fine-tuning. However, depending on the XAI method used, and the corpus used for pre-training, this trend plateaus or even slightly reverses at a high degree of fine-tuning (4conv). Importantly, explanation performance appears to strongly correlate with the classification performance of the underlying model. As classification accuracy could represent a potential confound to our analysis, we repeated our quantitative analysis of explanation performance based on five models with similar classification performance per pre-training corpus and degree of fine-tuning. Here, it is apparent that, when controlling for classification performance, models pre-trained on MRI data Figure 4: Quantitative explanation performance for XAI methods applied to the five best models, with different degrees of fine-tuning. The blue line and boxplots correspond, respectively, to the classification performance (accuracy) and explanation performance (precision) derived from models pre-trained with MRI data, whereas the pink line and boxplots correspond analogously to classification and explanation performance for models pre-trained with ImageNet data. The other three boxplots correspond to the performance of baseline heat maps. Yellow and orange colour correspond to the Sobel and Laplace filters respectively, and red colour to the model with random weights. consistently outperform equally well-predicting models that were pre-trained on ImageNet data in terms of explanation performance. These results are presented in supplementary in section G. ## 5 Discussion The field of XAI has produced a plethora of methods whose goal it is to 'explain' predictions performed by deep learning and other complex models, including CNNs. However, quantitative evaluations of these methods based on ground-truth data are scarce. Even if these methods are based on seemingly intuitive principles, XAI can only serve its purpose if it is itself properly validated, which is so far not often done. The present study was designed to create a benchmark within which explanation quality can be objectively quantified. To this end, we designed a well-defined ground-truth dataset for model explanations, where we modelled artificial data to resemble the important clinical use case of structural MR image classification for the diagnosis of brain lesions. With this benchmark dataset, we propose a framework to evaluate the influence of pre-training on explanation performance. We observed a correlation between classification accuracy and explanation performance, which could be expected since a more accurate model is likely to more successfully focus on relevant input features. Networks trained on ImageNet data may have learned representations for objects occurring only outside the domain of brain images (e.g., cats and dogs). The existence of such representations in the network seems to negatively affect XAI methods, whose importance maps are in parts derived by propagating network activations backwards through the network. Consistent with this remark is the observation that for lower degrees of fine-tuning (1conv and 2conv), the explanation quality of models pre-trained with ImageNet data is worse compared to models pre-trained with MR images. These findings challenge the popular view that the low-level information captured by the first layers of a CNN can be shared across domains. Our quantitative analysis suggests a large dispersion of explanation performance for all XAI methods, which may be unexpected given the controlled setting in which these methods have been applied here. Individual explanations can range from very good to very poor even for high overall classification accuracy, indicating a high risk of misinterpretation for a considerable fraction of inputs. ### Limitations Note, our analysis of XAI methods is limited to one DNN architecture, VGG-16, mainly showcasing the utility of our devised ground-truth dataset for model explanations. Furthermore, the lesion generation process resembles the idea of white matter hyperintensities where we aim to approximate specific neurodegenerative disorders from a'model perspective', where a natural prediction task would be 'healthy' vs. 'lesioned brain'. But it would be difficult to define a ground-truth for the class 'healthy'. Hence, we chose to create a classification problem based on two different shapes of lesions: round vs elongated. Admittedly, this distinction has no immediate physiological basis and serves purely the purpose of this benchmark, i.e., we can solve a classification task well enough by using a model architecture considered popular in this field. In this work, we are leveraging the HCP data as background for our prediction task. Since we recently learned that HCP will not permit us to publish derivative work ourselves, we plan to replace HCP MR images with equivalent ones from the IXI dataset1, made available under the Creative Common license. While we are unable to implement the change now, we plan to make the complete dataset available under a similar licence by the time of the conference. We expect neither substantial qualitative nor quantitative changes in the results and discussions provided in this work, given the equivalence of the two datasets. Footnote 1: [https://brain-development.org/ixi-dataset](https://brain-development.org/ixi-dataset) We argue that the quantitative validation of the _correctness_ of XAI methods is still a greatly under-investigated topic given how popular some of the methods have become. Major efforts both on the theoretical and empirical side are needed to create a framework within which evidence for the correctness of such methods can be provided. As a first step towards such a goal, meaningful definitions of what actually constitutes a correct explanation need to be devised. While in our study, ground-truth explanations were defined through a data generation process, other definitions, depending on the intended use of the XAI, are conceivable. The existence of such definitions would then pave the way for a theoretical analysis of XAI methods as well as for use-case-dependent empirical validations. ## 6 Conclusion In this work we created a versatile synthetic image dataset that allows us to quantitatively study the classification and explanation performances of CNN and similar complex ML methods in a highly controlled yet realistic setting, resembling a clinical diagnosis/anomaly detection task based on medical imaging data. Concretely, we overlaid structural brain MRI data with synthetic lesions representing clinically relevant white matter hyperintensities. We propose this dataset, to evaluate the explanations obtained from pre-trained models. Our study is set apart from the majority of work on XAI in that it uses a well-defined ground-truth for explanations, which allows us to quantitatively evaluate the 'explanation' performance of several XAI methods. Our study revealed a strong correlation between the classification performance of the model and the explanation performance of the XAI methods. Despite this correlation, models fine-tuned to a greater extent were shown to lead to better explanations. Controlling for classification performance, models pre-trained on MRI data lead to better explanations for every XAI method. The explanation performance of models pre-trained on within-domain images seem to have more stable explanation performance for a bigger range of classification accuracies. On the other hand, the explanation performance of models pre-trained with more general images quickly degrades with lower classification performance. The quantitative analysis of the explanations also shows a concerning variability of explanation performance values, suggesting that, when these methods are used to explain an individual prediction, a large uncertainty is associated with the correctness of the resulting importance map. This is a critical issue when using XAI methods to 'explain' predictions in high-stake fields such as medicine.
2301.06182
Bayesian Models of Functional Connectomics and Behavior
The problem of jointly analysing functional connectomics and behavioral data is extremely challenging owing to the complex interactions between the two domains. In addition, clinical rs-fMRI studies often have to contend with limited samples, especially in the case of rare disorders. This data-starved regimen can severely restrict the reliability of classical machine learning or deep learning designed to predict behavior from connectivity data. In this work, we approach this problem from the lens of representation learning and bayesian modeling. To model the distributional characteristics of the domains, we first examine the ability of approaches such as Bayesian Linear Regression, Stochastic Search Variable Selection after performing a classical covariance decomposition. Finally, we present a fully bayesian formulation for joint representation learning and prediction. We present preliminary results on a subset of a publicly available clinical rs-fMRI study on patients with Autism Spectrum Disorder.
Niharika Shimona D'Souza
2023-01-15T20:42:31Z
http://arxiv.org/abs/2301.06182v1
# Bayesian Models of Functional Connectomics and Behavior ###### Abstract The problem of jointly analysing functional connectomics and behavioral data is extremely challenging owing to the complex interactions between the two domains. In addition, clinical rs-fMRI studies often have to contend with limited samples, especially in the case of rare disorders. This data-starved regimen can severely restrict the reliability of classical machine learning or deep learning designed to predict behavior from connectivity data. In this work, we approach this problem from the lens of representation learning and bayesian modeling. To model the distributional characteristics of the domains, we first examine the ability of approaches such as Bayesian Linear Regression, Stochastic Search Variable Selection after performing a classical covariance decomposition. Finally, we present a fully bayesian formulation for joint representation learning and prediction. We present preliminary results on a subset of a publicly available clinical rs-fMRI study on patients with Autism Spectrum Disorder. ## 1 Introduction Resting state fMRI (rs-fMRI) is a popular paradigm for assessing brain activity and localize critical functions through steady state patterns of co-activation 14. Network-based approaches to rs-fMRI analysis often group voxels in the brain into regions of interest (ROIs) via a standard anatomical or dervied functional atlas 17,18. From here, the synchrony between the regional time courses can be summarized using a similarity matrix, which can be used as input for further analysis. In the context of neuropsychiatric disorders such as Autism, inter-patient variability often manifests as a spectrum of impairments, that clinicians quantify as "behavioral score" of clinical severity obtained from an exam. Identifying sub-networks in the brain that are predictive of such severity can help us understand the social and behavioral implications of the disorder for developing effective behavioral therapy. Building predictive models at the patient level continues to remain as an open challenge due to the high data dimensionality and considerable inter-subject variation and noise in the resting state acquisition. From a frequentist perspective, predictive models often follow a two step procedure. To combat the data-dimensionality, feature selection is first applied to the raw correlation values i.e. obtained by vectorizing the entries in the similarity matrices. Examples approaches include graph theoretic measures (betweenness,node degree), statistical and/or embedding features obtained from representation learning techniques such as PCA, k-PCA or ICA 16. Next, regression models such as Random Forests/ Support Vector Regression are applied to the derived features to predict the clinical measures. These strategies have shown success at modeling the group-averaged functional connectivity across the cohort but often fail to accurately capture individual variability. Consequently, the generalization power of these techniques is limited 4,8 12 In an attempt to address these limitations, recent focus has shifted towards mechanistic network models that are capable of modeling hierarchy onto existing connectivity notions. For example, community detection techniques are population-level models that are designed to identify interconnected subgraphs within a larger network. These techniques have refined our understanding of the organization of complex systems such as brain networks 2,1. Extensions to Bayesian community detection algorithms 24,22, 22, 21 have provided valuable insights in characterizing the social and communicative deficits in neurodevelopmental disorders such as schizophrenia and Autism. Unfortunately all of the above focus on group characterizations, and even studies that consider patient variability 15 or hierarchy in 6 have little generalization power on new subjects. The recent success of network decomposition models 3 in this space largely based on their ability to simultaneously model the patient and group level information. For example, the work of 13 introduces a common principal components formulation, where multiple rank one matrix outer products capturing the underlying 'generative' basis are combined using patient specific coefficients. The sparse basis networks identify meaningful co-activation patterns common to all the patients, and the coefficients model the patient variability. Similar to the joint network optimization model in 8; 5, this project explores the 'discriminative' nature of these coefficients. Specifically, we estimate clinical severity of every subject first by constructing bayesian regression models which map the subject-coefficients to the behavioral domain once the decomposition is estimated, and then in an end-to-end bayesian model. Through our experiments, we demonstrate the benefit of this joint bayesian formulation in terms of capturing the variability in the cohort, as well as for uncertainty quantification of the estimates. We have organised this letter as follows 1. We first briefly describe the ASD dataset which we validate on. Next, our methods section briefly introduces the dictionary decomposition to jointly model group-averaged and patient-specific representations, along with the corresponding inference algorithm. From here, we construct two bayesian regression algorithms, the first of the vanilla variety and the second of the variable selection (SVSS) flavour to predict clinical severity from the subject specific coefficients and the estimation algorithms. We compare and this performance to classical penalized linear regression. Finally, we propose a joint heirarchical bayesian model that simultaneously infers the dictionary representation and regression model parameters given the correlation matrices and scores in an end-to-end fashion. Footnote 1: This work was performed as a final project for graduate level Bayesian Statistics course offered by the Applied Mathematics and Statistics Department at Johns Hopkins University ### Dataset rs-fMRI Dataset and Preprocessing.We evaluate our method on a cohort of \(52\) children with high-functioning ASD released as a part of ABIDE 15 from the KKI site. Rs-fMRI preprocessing was performed according to the prevalidated pipeline in 24 We use the Automatic Anatomical Labeling (AAL) atlas to define \(P=116\) cortical, subcortical and cerebellar regions. Clinical Scores.We consider two measures of clinical severity: Autism Diagnostic Observation Schedule (ADOS) total raw score 20, which captures the social and communicative interaction deficits of the patient along with repetitive behaviors (dynamic range: \(0\)-\(30\)), and the Social Responsiveness Scale (SRS) total raw score 20 which characterizes social responsiveness (dynamic range: \(70\)-\(200\)). ## 2 Methods ### Dictionary Learning on rs-fMRI correlation matrices We define \(\mathbf{\Gamma}_{n}\in\mathcal{R}^{P\times P}\) as the correlation matrix for patient \(n\), where \(P\) is the number of regions given by the parcellation. We model \(\mathbf{\Gamma}_{n}\) using a group average basis representation and a patient-specific network strength term. The matrix \(\mathbf{B}\in\mathcal{R}^{P\times K}\) is a concatenation of \(K\) elemental bases vectors \(\mathbf{b}_{k}\in\mathcal{R}^{P\times 1}\), i.e. \(\mathbf{B}:=\mathbf{b}_{1}\quad\mathbf{b}_{2}\quad...\quad\mathbf{b}_{K}\), where \(K\ll P\). These bases capture steady state patterns of co-activation across regions in the brain. While the bases are common to all patients in the cohort, the combination of these subnetworks is unique to each patient and is captured by the non-negative coefficients \(\mathbf{c}_{nk}\). We include a non-negativity constraint \(\mathbf{c}_{nk}\geq 0\) on the coefficients to preserve the positive semi-definite structure of the correlation matrices \(\{\mathbf{\Gamma}_{n}\}\). The orthonormality constraint on \(\mathbf{B}\) helps us learn uncorrelated sub-networks that explain the rs-fMRI data well and implicitly regularize the optimization. Our complete rs-fMRI data representation is: \[\boldsymbol{\Gamma}_{n}\approx\sum_{k}\mathbf{c}_{nk}\mathbf{b}_{k}\mathbf{b}_{k} ^{T}\ \ s.t.\ \ \ \mathbf{c}_{nk}\geq 0\ \ \ \mathbf{B}^{T}\mathbf{B}=\mathcal{I}_{K} \tag{1}\] \(\mathcal{I}_{K}\) is the \(K\times K\) identity matrix. As seen in Eq. (1), we model the heterogeneity in the cohort using a patient specific term in the form of \(\mathbf{c}_{n}:=\mathbf{c}_{n1}\quad...\quad\mathbf{c}_{nK}^{T}\in\mathcal{R }^{K\times 1}\). Taking \(\textbf{diag}(\mathbf{c}_{n})\) to be a diagonal matrix with the \(K\) patient coefficients on the diagonal and off-diagonal terms set to zero, Eq. (1) can be re-written in matrix form as follows: \[\boldsymbol{\Gamma}_{n}\approx\textbf{Bdiag}(\mathbf{c}_{n})\mathbf{B}^{T}\ \ s.t.\ \ \mathbf{c}_{nk}\geq 0 \tag{2}\] Overall, this formulation is similar to common principal components from the statistics and manifold learning literature. Essentially, this strategically reduces the high dimensionality of the data, while providing a patient level description of the correlation matrices. We choose \(K=15\) based on the eigenspectrum on \(\{\boldsymbol{\Gamma}_{n}\}\) (See Fig. 1). #### 2.1.1 Optimization We use alternating minimization to optimize Eq. (2) with respect to \(\mathbf{B},\{\mathbf{c}_{n}\}\). Here, we cycle through the updates for the dictionary \(\mathbf{B}\), and loadings \(\{\mathbf{c}_{n}\}\), to obtain a _joint solution_. We note that there is a closed-form Procrustes solution for quadratic objectives. However, Eq. (2) is bi-quadratic in \(\mathbf{B}\), so it cannot be directly applied. Therefore, we adopt the strategy in 8, by which we introduce the constraints of the form \(\mathbf{D}_{n}=\textbf{Bdiag}(\mathbf{c}_{n})\), with corresponding augmented Lagrangian variables \(\{\boldsymbol{\Lambda}_{n}\}\). Thus, our objective from Eq. (2) now becomes: \[\mathcal{J}_{c}=\sum_{n}||\boldsymbol{\Gamma}_{n}-\mathbf{D}_{n} \mathbf{B}^{T}||_{F}^{2}+\sum_{n}\left[\text{Tr}\big{[}(\boldsymbol{\Lambda} _{n})^{T}(\mathbf{D}_{n}-\textbf{Bdiag}(\mathbf{c}_{n}))\big{]}+\frac{1}{2} \left.||\mathbf{D}_{n}-\textbf{Bdiag}(\mathbf{c}_{n})||_{F}^{2}\right]\right. \tag{3}\] along with the constraints \(\mathbf{c}_{nk}\geq 0\) and \(\mathbf{B}^{T}\mathbf{B}=\mathcal{I}_{K}\). See Algorithm 1 ### Bayesian Regression Models We combine this representation learning with a bayesian regression models to map to clinical severity. Our first set of models consider two classes of Bayesian Regression frameworks to predict behavior from the coefficients \(\{\mathbf{c}_{n}\}\). #### 2.2.1 Bayesian Linear Regression (BLR) Let \(\mathbf{y}_{n}\) be the scalar behavioral severity scores for a patient n. We model each \(\mathbf{y}_{n}=\beta_{0}+\beta^{T}\mathbf{c}_{n}+\epsilon_{n}\), where \(\epsilon_{n}\sim\mathcal{N}(0,\sigma^{2})\). In this model, we consider that the samples are drawn iid given \(\mathbf{c}_{n}\). Our likelihood function is parametrized by \(\beta\in\mathcal{R}^{K\times 1}\) and takes the form: \[\ell(\{\mathbf{y}_{n}\}|\{\mathbf{c}_{n}\},\beta,\beta_{0},\sigma^{2})=\prod_{n= 1}^{N}\ell(\mathbf{y}_{n}|\mathbf{c}_{n},\beta,\beta_{0},\sigma^{2})=\prod_{n=1 }^{N}\mathcal{N}(\mathbf{y}_{n};\beta^{T}\mathbf{c}_{n},\beta_{0},\sigma^{2}) \tag{4}\] We impose a conjugate prior on \((\beta,\sigma^{2})\) of the normal inverse-gamma form as follows: \[P(\beta;\beta_{0}|\sigma^{2})=\mathcal{N}(\mathbf{M},\sigma^{2}\mathbf{V})\; \;\text{and}\;\;\sigma^{2}\sim\text{IG}(a,b)\] We set \(\mathbf{M}=\mathbf{0}\in\mathcal{R}^{(K+1)\times 1}\) and \(a=3,b=1\) as mild assumptions on the prior. For our experiments, we apriori assume that the entries in \(\beta\) are uncorrelated, i.e. \(\mathbf{V}=\sigma_{\beta}^{2}\cdot\mathcal{I}_{K+1}\). For our experiments, we consider different values of \(\sigma_{\beta}^{2}\) to determine the model with the best fit as a grid search. We use a standard Gibbs Sampling algorithm (Implemented using Matlab's econometrics toolbox) to generate pairs of samples from the posterior \(\beta,\beta_{0},\sigma^{2}|\{\mathbf{c}_{n},\mathbf{y}_{n}\}\) as follows: 1. Initialize \(\beta,\beta_{0},\sigma^{2}\) 2. Sample \(\beta;\beta_{0}|\{\mathbf{c}_{n},\mathbf{y}_{n}\},\sigma^{2}\sim\mathcal{N}( \mu_{n},\mathbf{\Sigma}_{y})\) where \(\mu_{n}=(\hat{\mathbf{C}}\hat{\mathbf{C}}^{T}+\sigma^{2}\mathbf{V})^{-1}(\hat {\mathbf{C}}\hat{\mathbf{C}}^{T}\hat{\beta})\) and \(\mathbf{\Sigma}_{y}=(\hat{\mathbf{C}}\hat{\mathbf{C}}^{T}+\sigma^{2}\mathbf{V})\), \(\hat{\mathbf{C}}=\mathbf{1};\mathbf{C}\) 3. Sample \(\sigma^{2}|\{\mathbf{c}_{n},\mathbf{y}_{n}\},\beta;\beta_{0}\sim\text{IG} \Big{(}a+n/2,b_{0}+(\sum\mathbf{y}_{n}^{2}-\mu_{n}^{T}\mathbf{\Sigma}_{y}\mu_ {n})/2\Big{)}\) 4. After a burn-in, we keep the rest of the samples as generated from the posterior #### 2.2.2 Bayesian Stochastic Search Variable Selection (SVSS) The goal of variable selection over the linear regression model in Eq. (4) is to only include only those predictors supported by data in the final regression model. However, analysing \(2^{(K+1)}\) permutations of models is computationally inefficient. Instead, Stochastic Variable Selection looks at this problem from a Bayesian perspective. Here, if we wish to exclude a coefficient from a model, we assign it a degenerate posterior distribution that approximates a Dirac-Delta. Practically, if a coefficient is to be included, we draw the coefficient from an \(\beta_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{V}_{1k})\), else \(\beta_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{V}_{2k})\), where \(\mathbf{V}_{2k}\) is small relative to \(\mathbf{V}_{1k}\). We represent inclusion vs exclusion via a binary variable \(\gamma_{k}\), \(k=\{0,\dots K\}\). Thus the sample space \(\gamma_{k}\) has cardinality \(2^{(K+1)}\), and the coefficients \(\beta_{0},\dots\beta_{K}\) are independent apriori. Our likelihood model takes the form: \[\ell(\{\mathbf{y}_{n}\}|\{\mathbf{c}_{n}\},\beta,\beta_{0},\sigma^{2})=\prod_ {n=1}^{N}\mathcal{N}(\mathbf{y}_{n};\beta^{T}\mathbf{c}_{n},\beta_{0},\sigma^ {2})\] \[\text{If}\;\;\gamma_{k}=1,\beta_{k}\sim\mathcal{N}(0,\sigma^{2}\mathbf{V}_{1k})\] \[\text{If}\;\;\gamma_{k}=0,\beta_{k}\sim\mathcal{N}(0,\sigma^{2}\mathbf{V}_{2k})\] Given \(\beta_{k}\), \(\gamma_{k}\) is conditionally independent of the data. Therefore, the full conditional posterior distribution of the probability that the variable \(k\) is included in the model. \[P(\gamma_{k}|\beta_{0},\beta,\sigma^{2},\gamma_{k\pm})\propto \text{g}_{k}\mathcal{N}(\beta_{k};0,\sigma^{2}\mathbf{V}_{1k})\] \[\gamma_{k}\sim\text{Bernoulli}(\text{g}_{k})\] We impose a conjugate prior structure on \(\beta_{0};\beta,\{\gamma\},\sigma^{2}\) of the following form: \[p(\beta;\beta_{0},\gamma,\sigma^{2})=p(\sigma^{2})\prod_{k=0}^{K}p(\beta_{k}| \gamma_{k})p(\gamma_{k})\] \[\beta_{k}|\gamma_{k}\sim\mathbb{I}(\gamma_{k}=0)\mathcal{N}(0,\sigma^{2} \mathbf{V}_{2k})+\mathbb{I}(\gamma_{k}=1)\mathcal{N}(0,\sigma^{2}\mathbf{V}_{1k})\] \[\sigma^{2}\sim\text{IG}(a,b)\] Again, we use a diffuse prior with \(a=3,b=1\) and \(\mathbf{g}_{k}=0.5\). For our experiments, we consider different values of \(\{\mathbf{V}_{1k},\mathbf{V}_{2k}\}\) to determine the model with the best fit as a grid search. We use a standard Gibbs Sampling algorithm (Implemented using Matlab's econometrics toolbox) to generate pairs of samples from the posterior. Although a closed-form posterior exists for conjugate mixture priors, since the prior \(\{\beta\}|\sigma^{2},\{\gamma_{k}\}\) is marginalized by \(\gamma\), this implementation uses MCMC to sample from the joint posterior \(\beta,\beta_{0},\{\gamma_{k}\}|\{\mathbf{c}_{n},\mathbf{y}_{n}\},\sigma^{2}\). For both of these methods, we run the chains for \(5000\) samples as burn in and generate \(10000\) additional samples to approximate the posterior. #### 2.2.3 Evaluation and Results We evaluate the generalization performance of the model using a five fold cross validation like strategy. In each fold, eightty percent of the examples are used for estimation, and the rest of the twenty percent as forecasting. First, we vary the free parameters \(\sigma_{\beta}^{2}\) for BLR and \(\{\mathbf{V}_{1k},\mathbf{V}_{2k}\}\) for SVSS and determine the parameters that fit the estimation points the best. We then evaluate the performance on the held-out forecasting examples. Table 1 compares the performance based on the root mean square error (rMSE) and normalized mutual information (NMI) metric, between the predicted and true samples. SVSS performs better than BLR for both scores. As a baseline, we also report the performance of a classical ridge regression on \(\mathbf{c}_{n}\) to predict \(\mathbf{y}_{n}\) as well. Using bayesian approaches, we obtain more than a point estimate of each \(\mathbf{y}_{n}\) as in frequentist methods. We can also quantify the uncertainty of the estimate around the a posteriori maximum as summary statistics. In Fig. 2, we compare the histograms of the true samples against the samples generated by one run of the BLM and SVSS models for the ADOS score, according to the parameters selected above. The overlap with the true distribution is indicative of how well the models are able to approximate the data generating process. Fig. 3 compares the prior and posterior densities for the coefficients \(\beta,\beta_{0}\) learned for BLR. Finally, SVSS allows us to isolate the coefficients \(\beta_{k}\) which are consistently selected as predictors. For both ADOS and SRS, \(\{\mathbf{c}_{n,7},\mathbf{c}_{n,8}\}\) are the features that are selected most often (have the highest magnitude of \(\beta_{k}>90\%\) of the times). We plot the corresponding subnetworks \(\mathbf{B}_{k}\) for \(k=7,8\) in Fig. 2. Thus, SVSS offers us interpretability in terms of the features relevant to prediction. Altered connectivity in these networks, both default mode and in visual processing areas has been found to be associated with ASD previously 23. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Score** & **Method** & **rMSE Train** & **rMSE Test** & **MI Test** \\ \hline \hline \multirow{3}{*}{ADOS} & BLR & 2.76 \(\pm\) 0.27 & 4.15 \(\pm\) 0.26 & 0.48 \\ & SVSS & 2.51 \(\pm\) 0.45 & 3.80 \(\pm\) 0.37 & 0.56 \\ & Ridge Regression & 2.70 \(\pm\) 2.32 & 3.35 \(\pm\) 2.11 & 0.41 \\ \hline \multirow{3}{*}{SRS} & BLR & 34.33 \(\pm\) 5.44 & 29.23 \(\pm\) 5.81 & 0.71 \\ & SVSS & 25.23 \(\pm\) 5.10 & 27.99 \(\pm\) 4.99 & 0.76 \\ \cline{1-1} & Ridge Regression & 19.29 \(\pm\) 11.11 & 24.44 \(\pm\) 18.18 & 0.66 \\ \hline \end{tabular} \end{table} Table 1: Performance evaluation using **root Mean Squared Error (rMSE) \(\&\) Normalized Mutual Information (MI)**. Lower MAE & higher MI indicate better performance. Figure 3: Prior and Posterior Densities for \(\beta,\beta_{0}\) by Bayesian Linear Regression on ADOS. The blue line represents the posterior, while the dotted red line is the prior Figure 2: **(T)** Histogram for ADOS **(L)** true samples, **(M)** posterior from BLR **(R)** posterior from SVSS **(B)** Subnetworks from SVSS. **(L)** Visual & Subcortical **(R)** Default Mode Network ### A Bayesian Model for Joint Representation Learning and Prediction In the previous two approaches, the feature extraction is combined with prediction in a pipelined fashion, decoupling the two. However, inherently, the two views of data are complementary to each other. In this section, we propose to use a bayesian model which mimics this representation learning step. At the same time, the patient-specific coefficients relate to prediction via a bayesian linear regression. By combining the dictionary learning directly with prediction, we expect to learn a joint rs-fMRI representation that is more aligned with clinical prediction, similar to the principles in 8 Recall that the correlation matrices \(\{\mathbf{\Gamma}_{n}\}\) are positive semi-definite. Recall that we use the common principal components decomposition \(\mathbf{\Gamma}_{n}\approx\mathbf{Bdiag}(\mathbf{c}_{n})\mathbf{B}^{T}\) Additionally, we center \(\mathbf{y}_{n}\) to have zero mean. Accordingly, our data likelihood uses an Inverse Wishart distribution (\(\mathbf{\Phi}_{W}\)) on \(\mathbf{\Gamma}_{n}\) centered around \(\mathbf{Bdiag}(\mathbf{c}_{n})\mathbf{B}^{T}\) with degrees of freedom \(\nu_{0}\). We chose \(\nu_{0}=P+5\) to center \(\{\mathbf{\Gamma}_{n}\}\) loosely around \(\mathbf{Bdiag}(\mathbf{c}_{n})\mathbf{B}^{T}\). Again, we predict the clinical scores via a linear regression \(\mathbf{y}_{n}\approx\mathbf{c}_{n}^{T}\mathbf{w}\) with the linear regression weights \(\mathbf{w}\in\mathcal{R}^{K\times 1}\). Let \(\theta=(\sigma_{y}^{2},\sigma_{c}^{2},\sigma_{w}^{2})\). Given \(\{\mathbf{w},\{\mathbf{c}_{n}\}\}\), we assume that \(\mathbf{y}_{n}\) are independent, while given \(\{\mathbf{B},\{\mathbf{c}_{n}\}\}\), \(\mathbf{\Gamma}_{n}\) are independent. Thus, if \(Q=\nu_{0}-P-1\): \[\ell(\{\mathbf{\Gamma}_{n},\mathbf{y}_{n}\}|\{\mathbf{c}_{n}\}, \mathbf{w},\mathbf{B},\theta)=\prod_{n=1}^{N}\mathcal{N}(\mathbf{y}_{n}; \mathbf{c}_{n}^{T}\mathbf{w},\sigma_{y}^{2})\ \mathbf{\Phi}_{W}\Bigg{[}\mathbf{\Gamma}_{n};\frac{\mathbf{Bdiag}( \mathbf{c}_{n})\mathbf{B}^{T}}{Q},\nu_{0}\Bigg{]} \tag{5}\] We apriori assume that \(\mathbf{\Theta}=\{\mathbf{B},\{\mathbf{c}_{n}\},\mathbf{w}\}\) are independent given \(\theta\). Therefore: \[P(\mathbf{\Theta},\theta)=P(\sigma_{w}^{2})P(\sigma_{c}^{2})P( \sigma_{y}^{2})P(\mathbf{w}|\sigma_{w}^{2})\prod_{k=1}^{K}P(\mathbf{b}_{k}) \prod_{n=1}^{N}P(\mathbf{c}_{n}|\sigma_{c}^{2}) \tag{6}\] To approximate a basis \(\mathbf{B}\) that is almost orthogonal [], we use a multivariate normal prior of the form: \[P(\mathbf{b}_{k})=\mathcal{N}(\mathbf{0},\sigma_{B}^{2}\mathcal{I }_{P})\ \ \text{s.t.}\ \ \sigma_{B}^{2}=\frac{1}{P} \tag{7}\] We use a conjugate multivariate normal-inverse gamma prior on \(\{\mathbf{w},\sigma_{w}^{2}\}\) and a half multivariate normal-inverse gamma prior (for non-negativity) on \(\{\mathbf{c},\sigma_{c}^{2}\}\) and an inverse gamma prior on \(\sigma_{y}^{2}\): \[P(\mathbf{w},\sigma_{w}^{2})=P(\mathbf{w}|\sigma_{w}^{2})P( \sigma_{w}^{2})=\mathcal{N}(\mathbf{0},\sigma_{w}^{2}\mathcal{I}_{K})\text{IG}( \sigma_{w}^{2};a_{w},b_{w}) \tag{8}\] \[P(\mathbf{C},\sigma_{c}^{2})=P(\sigma_{c}^{2})\prod_{n=1}^{N}P( \mathbf{c}_{n}|\sigma_{c}^{2})=\text{IG}(\sigma_{c}^{2};a_{c},b_{c})\prod_{n=1} ^{N}\mathcal{N}(\mathbf{0},\sigma_{c}^{2}\mathcal{I}_{K})\] (9) \[P(\sigma_{y}^{2})=\text{IG}(\sigma_{y}^{2};a_{y},b_{y}) \tag{10}\] Figure 4: Hierarchical Bayesian Model for Joint Representation Learning and Prediction Notice that the complete posterior distributions of \(\mathbf{w},\sigma_{w}^{2},\sigma_{y}^{2},\sigma_{c}^{2}\) can be derived in closed form owing to the structure of the prior: \[\mathbf{w}|\mathbf{B},\{\mathbf{c}_{n},\theta\}\sim\mathcal{N}( \mathbf{w};\mu_{w},\boldsymbol{\Sigma}_{w})\;\;\text{s.t.}\;\;\boldsymbol{ \Sigma}_{w}=\left[\frac{\mathcal{I}_{K}}{\sigma_{w}^{2}}+\frac{\sum_{n} \mathbf{c}_{n}\mathbf{c}_{n}^{T}}{\sigma_{y}^{2}}\right]^{-1}\;\text{and}\;\; \mu_{n}=\boldsymbol{\Sigma}_{w}\frac{\sum_{n}\mathbf{c}_{n}^{2}}{\sigma_{y}^{2}} \tag{11}\] \[\sigma_{w}^{2}|\mathbf{w},\mathbf{B},\{\mathbf{c}_{n}\},\sigma_{c }^{2},\sigma_{y}^{2}\sim\text{IG}\Bigg{(}a_{w}+\frac{K}{2},b_{w}+\frac{\sum_{ k}\mathbf{w}_{k}^{2}}{2}\Bigg{)}\] (12) \[\sigma_{c}^{2}|\mathbf{w},\mathbf{B},\{\mathbf{c}_{n}\},\sigma_{w }^{2},\sigma_{y}^{2}\sim\text{IG}\Bigg{(}a_{c}+\frac{NK}{2},b_{c}+\frac{\sum_ {n}\sum_{k}\mathbf{c}_{nk}^{2}}{2}\Bigg{)}\] (13) \[\sigma_{y}^{2}|\mathbf{w},\mathbf{B},\{\mathbf{c}_{n}\},\sigma_{c }^{2},\sigma_{w}^{2}\sim\text{IG}\Bigg{(}a_{y}+\frac{N}{2},b_{y}+\frac{\sum_{ n}(\mathbf{c}_{n}^{T}\mathbf{w}-\mathbf{y}_{n})^{2}}{2}\Bigg{)} \tag{14}\] Thus, our inference algorithm performs a Gibbs-MH sampling based on the full conditionals for these variables and random-walk like proposal distributions to sample \(\mathbf{B},\{\mathbf{c}_{n}\}\). Our inference algorithm is summarised below (Algorithm 2). ### Implementation Details and Preliminary Results We implement Alg. 2 in \(R\) on an \(8\)-core machine with an Intel \(i7\) processor (\(16\)GB RAM). The approximate run time is about 10 hours to generate \(10000\) samples. \(8000\) of these were treated as burn-in. We experimented with several proposal distributions and found that a normal around the previous sample with a small variance to be the most stable (with acceptance ratio \(0.21\) and \(0.22\) for the two chains respectively) and concurs with best practices 25. Also, we fold samples \(\mathbf{C}^{t}\) to maintain non-negativity. We first examine the convergence of the chains for \(\sigma_{w}^{2},\sigma_{c}^{2},\sigma_{y}^{2}\) via the trace plots and autocorrelation in Fig. 6. Note that examining the convergence of the other latent variables is a less straightforward exercise. We observed that \(\sigma_{c}^{2}\) has the slowest mixing of these with high autocorrelations between samples, even after running the chains for very long. Additionally, we compare the posterior samples generated by our model i.e. \(\hat{\mathbf{y}}_{n}=\mathbf{c}_{n}^{T}\mathbf{w}\) against the distribution of the true scores \(\mathbf{y}_{n}\) for both scores. The overlap gives us a sense of how well the generating process is approximated. We obtain an rMSE of \(3.78\pm 2.51\) for ADOS and \(19.51\pm 7.51\) for SRS when using all the samples, which is higher than those obtained in Table 1. Additionally, as a sanity check, we plot the inner product measure the columns of \(\mathbf{B}\) for a representative sample (Fig. 7 (a)). Indeed, we see that our chains provide uncorrelated and nearly orthogonal bases. Finally, we plot side by side ``` Result: Posterior samples for \(\{\mathbf{B},\{\mathbf{c}_{n}\},\mathbf{w},\theta\}\) Initialize \(\mathbf{B}^{0},\{\mathbf{c}_{n}^{0}\},\mathbf{w}^{0},\theta^{0}\), \(a_{c}=a_{w}=a_{y}=3,b_{c}=b_{w}=b_{y}=1\) whileNot convergeddo Step1: Sample \(\mathbf{B}^{t}\sim q(\cdot|\mathbf{B}^{t-1})\) ; Determine whether to accept-reject samples Step2: Sample \(\mathbf{C}^{t}\sim q(\cdot|\mathbf{C}^{t-1})\) ; Determine whether to accept-reject samples Step3: Sample \(\mathbf{w}\) according to Eqn. (11); Step4: Sample \(\sigma_{y}^{2}\) according to Eqn. (12); Step5: Sample \(\sigma_{c}^{2}\) according to Eqn. (13); Step6: Sample \(\sigma_{y}^{2}\) according to Eqn (14); ``` **Algorithm 2**Gibbs-MH Sampling for the joint model Figure 5: Histogram of **(L)** True samples and **(R)** Samples from Alg. 2, i.e \(\mathbf{c}_{n}^{T}\mathbf{w}\) **(T)**: SRS **(B)**: ADOS a correlation matrix sample and the corresponding mean approximation error over samples from the chain (Fig. 7 (b)). We notice that while a large number of regions have relatively small approximation errors, the model has trouble determining the prominent patterns along the band diagonal. One of the reasons may be the scale of \(\mathbf{B}\) and \(\mathbf{C}\), which is difficult to simultaneously control in a random walk as can be seen with the trace plot for \(\sigma_{c}^{2}\). ## 3 Discussion In this letter, we first examined the efficacy of bayesian regression models coupled with dictionary learning to predict clinical severity from rs-fMRI correlation data. Of these, the stochastic variable selection generalized best and offered us with an approach to recognise features most relevant to clinical outcomes. Next, inspired by previous results, we took a fully bayesian approach to jointly learn an rs-fMRI representation and regression model. From the modeling standpoint, this framework presented several design challenges. For example, models selection in terms of order \(K\), prior parameters, eg. \((\nu_{0},a,b)\), currently chosen ad-hoc, and convergence (diagnostics/designing good proposal distributions) of the sampling procedure. Currently, our sampling procedure is computationally expensive and more work needs to be done to ensure that the proposals scan the latent parameter space better and avoid getting stuck in local models. Finally, another challenge is the inherent non-identifiability of the model (for example, scaling- \(\{\alpha\mathbf{B},\frac{1}{\alpha^{2}}\mathbf{C}\}\), \(\{\delta C,\frac{1}{\delta}\mathbf{w}\}\), rotations of \(\mathbf{B}\)), which contributes to convergence issues when the prior parameters are incorrectly chosen. Figure 6: **(T)** Trace Plots and **(B)** Autocorrelation Plots for **(L)**\(\sigma_{w}^{2}\)**(M)**\(\sigma_{y}^{2}\)**(R)**\(\sigma_{c}^{2}\) where \(\mathbf{y}_{n}\) is the SRS score (Top Set) ADOS score (Bottom Set) Figure 7: **(a)** Inner Product between columns of a representative sample of \(\mathbf{B}\)**(b)** Comparison between a true correlation matrix sample (left), and its mean approximation error (right) Future DirectionsAn immediate future direction could be the extension of the framework to incorporate multimodal structural and functional connectivity data such as 11; 9; 7 for behavioral prediction. Another interesting extension of the framework could be towards dynamic modeling of functional connectivity 10; 19; 1 as it evolves over the scan. Overall, this preliminary analysis is a first step at exploring the nascent potential of joint bayesian modeling for brain connectivity and behavior.
2305.14071
Disentangled Variational Autoencoder for Emotion Recognition in Conversations
In Emotion Recognition in Conversations (ERC), the emotions of target utterances are closely dependent on their context. Therefore, existing works train the model to generate the response of the target utterance, which aims to recognise emotions leveraging contextual information. However, adjacent response generation ignores long-range dependencies and provides limited affective information in many cases. In addition, most ERC models learn a unified distributed representation for each utterance, which lacks interpretability and robustness. To address these issues, we propose a VAD-disentangled Variational AutoEncoder (VAD-VAE), which first introduces a target utterance reconstruction task based on Variational Autoencoder, then disentangles three affect representations Valence-Arousal-Dominance (VAD) from the latent space. We also enhance the disentangled representations by introducing VAD supervision signals from a sentiment lexicon and minimising the mutual information between VAD distributions. Experiments show that VAD-VAE outperforms the state-of-the-art model on two datasets. Further analysis proves the effectiveness of each proposed module and the quality of disentangled VAD representations. The code is available at https://github.com/SteveKGYang/VAD-VAE.
Kailai Yang, Tianlin Zhang, Sophia Ananiadou
2023-05-23T13:50:06Z
http://arxiv.org/abs/2305.14071v1
# Disentangled Variational Autoencoder for Emotion Recognition in Conversations ###### Abstract In Emotion Recognition in Conversations (ERC), the emotions of target utterances are closely dependent on their context. Therefore, existing works train the model to generate the response of the target utterance, which aims to recognise emotions leveraging contextual information. However, adjacent response generation ignores long-range dependencies and provides limited affective information in many cases. In addition, most ERC models learn a unified distributed representation for each utterance, which lacks interpretability and robustness. To address these issues, we propose a **VAD**-disentangled **V**aition**E**ncoder (VAD-VAE), which first introduces a target utterance reconstruction task based on Variational Autoencoder, then disentangles three affect representations Valence-Arousal-Dominance (VAD) from the latent space. We also enhance the disentangled representations by introducing VAD supervision signals from a sentiment lexicon and minimising the mutual information between VAD distributions. Experiments show that VAD-VAE outperforms the state-of-the-art model on two datasets. Further analysis proves the effectiveness of each proposed module and the quality of disentangled VAD representations. The code is available at [https://github.com/SteveKGYang/VAD-VAE](https://github.com/SteveKGYang/VAD-VAE). Emotion Recognition in Conversations, Variational Autoencoder, Valence-Arousal-Dominance, Disentangled Representations. ## 1 Introduction Comotion Recognition in Conversations (ERC) aims to identify the emotion of each utterance within a dialogue from pre-defined emotion categories [1]. As an extension of traditional emotion detection from text, ERC attracts increasing research interest from the NLP community since it is more suitable for usage in real-world scenarios. For example, ERC aids dialogue systems in generating emotionally coherent and empathetic responses [2]. It has also been widely utilised in emotion-related social media analysis [3], [4] and opinion mining from customer reviews [5, 6]. Unlike sentence-level emotion recognition, the emotion of each utterance is dependent on contextual information in ERC. Some works enhance the context modelling ability by training the model to reconstruct the dialogue. For example, [7, 8] pre-train the utterance encoders on large-scale conversation data and transfer the pre-trained weights to ERC. More recent works utilise Pre-trained Language Models (PLMs) [9] to model the dialogue and avoid pre-training from scratch [10, 11]. [12] combine both methods by fine-tuning pre-trained BART [13] with an auxiliary response generation task on the dialogue, which trains the model to generate the next sentence given the target utterance. This task aims to force the model to recognise emotions considering context information. However, response generation only mines the dependencies between two adjacent utterances, while the influence of long-range history on the target utterance is ignored. Generating the next sentence also provides limited affective information for the target utterance in many cases, such as the sudden change of the discussion topic. An example is shown in Figure 1 to prove the above points. As illustrated, the emotion of the target utterance is dependent on long-range history, and response generation provides limited affective information as the discussion topic changes in the next utterance. We argue that a context-aware reconstruction of the target utterance itself is more appropriate since ERC is centred on the target utterance representations. In addition, current ERC methods mostly learn a unified distributed representation for each target utterance. Though achieving impressive results, entangled features are proved to lack interpretability and robustness [14]. Affective text generation models [2, 15] also postulate that emotions are Figure 1: A dialogue example. Solid lines show the influence of previous utterances on the emotion of the target utterance (marked blue), and dashed lines denote the change of the discussion topic in next utterance. As illustrated, the emotion of the target utterance is dependent on long-range history. As the topic changes, response generation provides limited affective information. independent of the content they modify, and their success shows the viability of disentangling emotion features from content representations. To address these issues, we propose a **VAD**-disentangled **V**ariational **A**utoEncoder (VAD-VAE) for ERC. Firstly, instead of generating the response, we introduce a target utterance reconstruction task based on the Variational Autoencoder (VAE) [16] generative model. We devise a PLM-based context-aware encoder to model the dialogue, and sample the latent representations from a Gaussian distribution estimated from the utterance representations. The Gaussian distribution also aims to regularise the latent space. Then another PLM-based decoder is leveraged to reconstruct the target utterance from the latent representations. VAD-VAE outperforms the state-of-the-art model on two ERC datasets. Secondly, we utilise disentangled representation learning [17] techniques to disentangle critical features from the utterance representations. Studies in affect representation models in psychology point out that Valence-Arousal-Dominance (VAD) are both orthogonal and bipolar, which are appropriate to define emotion states [18, 19]. Therefore, we propose to disentangle the three VAD features from the latent space of the VAE, where we also sample each of the VAD representations from a corresponding Gaussian distribution estimated from the utterance representations. Then the disentangled features are combined for both ERC and target utterance reconstruction tasks. Thirdly, two techniques are used to enhance the disentangled VAD representations. We boost their _informative_es [20] by introducing supervision signals from NRC-VAD [21], a sentiment lexicon that contains human ratings of VAD for all emotions. To enforce the _independence_[17] of latent spaces, we minimise the Mutual Information (MI) between VAD representations. During training, we estimate and minimise the variational Contrastive Log-ratio Upper-Bound (vCLUB) [22] of MI. Further analysis proves the quality of the disentangled VAD representations. To summarise, this work mainly makes the following contributions: * We propose a VAE-based target utterance reconstruction auxiliary task for ERC, which improves model performance and regularises the latent spaces. * For the first time in ERC, We explicitly disentangle the three VAD features from the utterance representations. Analysis shows it benefits interpretability and robustness, and bears potential in the affective text generation task. * We enhance the _informativeness_ of the disentangled representations with VAD supervision signals from the lexicon NRC-VAD and minimise the vCLUB estimate of their mutual information to improve _independence_. ## 2 Related Work ### _Emotion Recognition in Conversations_ For ERC, the emotion of a dialogue participant is primarily influenced by the dialogue history, which makes context modelling a key challenge. Early works utilised Recurrent Neural Networks (RNN) to model each participant's dialogue flow as a sequence and revise them as memories at each time step [23, 24]. Considering multi-party relations, [25] leveraged another global-state RNN to model inter-speaker dependencies and emotion dynamics. To avoid designing complex model structures, more recent works leveraged the strong context-modelling ability of PLMs to model the conversation as a whole [10, 26]. Some other works [27, 28] built a graph upon the dialogue with each utterance as a node and used graph neural networks to model ERC as a node-classification task. Enhancing the utterance representations is also crucial for ERC. Some works managed to incorporate task-related information. For example, commonsense knowledge was introduced [11, 29] to enrich the semantic space. To enhance the conversation modelling ability, some methods pre-trained the model on large-scale conversation data and transferred the weights to ERC [7, 8]. Multi-task learning was also leveraged to introduce topic information [30, 31], discourse roles [32] and speaker-utterance relations [26] to aid emotion reasoning. [33, 34] incorporated VAD information to introduce fine-grained sentiment supervision. Contrastive learning [35, 12, 36] was also devised to distinguish utterances with similar emotions. ### _Disentangled Representation Learning_ Disentangled Representation Learning (DRL) aims to map key data features into distinct and independent low-dimensional latent spaces [17]. Current DRL methods are mainly divided into unsupervised and supervised disentanglement. Early unsupervised methods mainly designed constraints on the latent space to enforce the independence of each dimension, such as information capacity [37] and mutual information gap [38]. Supervised methods introduced supervision signals to different parts of the latent space to enforce _informativeness_. Some works utilised ground-truth labels of the corresponding generative factors, such as syntactic parsing trees [39] and style labels [40]. In contrast, other works used weakly-supervised signals, including pairwise similarity between representations [41] and semi-supervised ground-truth labels [42]. Still, supervised methods devised techniques such as mutual information minimisation [42] and adversarial learning [39, 43] to enforce _independence_ and _invariance_[44] of the disentangled representations. ## 3 Methodology ### _Task Definition_ The ERC task is defined as follows: a dialogue \(D\) contains \(n\) utterances \(\{u_{1},u_{2},...,u_{n}\}\), with the corresponding ground-truth emotion labels \(\{y_{1},y_{2},...,y_{n}\}\), where \(y_{i}\in E\), \(E\) is the pre-defined emotion label set. Each \(u_{i}\) contains \(m_{i}\) tokens: \(\{u_{i}^{1},u_{i}^{2},...,u_{i}^{m_{i}}\}\). The dialogue is also accompanied by a speakers list \(S(D)=\{S(u_{1}),S(u_{2}),...,S(u_{n})\}\), where \(u_{i}\) is uttered by \(S(u_{i})\in S\), and \(S\) is the set of dialogue participants. With the above information, ERC aims to identify the emotion of each target utterance \(u_{i}\), which is formalised as: \(\hat{y}_{i}=f(u_{i},D,S(D))\). ### _Target Ut Utterance Reconstruction_ This section introduces the target utterance reconstruction auxiliary task. Based on the context-aware utterance encoder, we also disentangle VAD latent representations from the utterance representations and build a VAE-based generative model to reconstruct the target utterance, which is the backbone of VAD-VAE, as illustrated in Figure 2. #### 3.2.1 Context-Aware Utterance Encoder To introduce speaker information explicitly, we prepend the speaker \(S(u_{j})\) to each utterance: \(u_{j}=\{S(u_{j}),u_{j}^{1},u_{j}^{2},...,u_{j}^{k}\}\), where \(u_{j}^{k}\) is the \(k\)-th token of \(u_{j}\). Then the target utterance \(u_{i}\) is concatenated with both past and future dialogues to obtain the context-aware input \(\hat{u}_{i}\): \[\hat{u}_{i}=\{\langle cls\rangle;u_{i-W_{p}};...;\langle sep\rangle;u_{i}; \langle sep\rangle;...;u_{i+W_{f}};\langle eos\rangle\}\] where \(\{;\}\) denotes concatenation, \(W_{p}\) and \(W_{f}\) denotes the past and future context window size, \(\langle cls\rangle\) and \(\langle eos\rangle\) denote the start-of-sentence and end-of-sentence token. Two \(\langle sep\rangle\) tokens are added at the start and end place of \(u_{i}\) to identify the target utterance. Utilising a PLM-based encoder, we obtain the context-aware utterance embeddings: \[r_{i}=Encoder(\hat{u}_{i}) \tag{1}\] where \(Encoder\) denotes the RoBERTa-Large [45] utterance encoder, \(r_{i}\in\mathbb{R}^{L\times D_{h}}\) is the utterance representations, \(L\) denotes the sequence length, and \(D_{h}\) is the hidden states dimension. We leverage the embedding of the start-of-sentence token at position 0: \(r_{i}^{[CLS]}\in\mathbb{R}^{D_{h}}\) as the utterance-level representation of \(u_{i}\). #### 3.2.2 VAE-based Generative Model We build a VAE-based generative model and disentangle three latent features, Valence-Arousal-Dominance (VAD) from the utterance representation, where Valence reflects the pleasantness of a stimulus, Arousal reflects the intensity of emotion provoked by a stimulus, and Dominance reflects the degree of control exerted by a stimulus [46]. We also define a "Content" feature that controls the content generation of the target utterance. A VAE is utilised to estimate this model, which imposes a standard Gaussian prior distribution on each latent space \(Z\). The deterministic utterance representation is replaced with an approximation of the posterior \(q_{\phi}(z|x)\), which is parameterised by a neural network. We utilise four feed-forward neural networks to map \(x=r_{i}^{[CLS]}\) to four sets of Gaussian distribution parameters \((\mu,\sigma)\), which parameterise the latent distributions of Valence, Arousal, Dominance, and Content, denoted as \(\mathcal{R}\in\{V,A,D,C\}\). For each feature, we sample the latent representation \(z^{(\mathcal{R})}\) from the Gaussian distribution defined by the corresponding \((\mu^{(\mathcal{R})},\sigma^{(\mathcal{R})})\) using the re-parameterisation trick [16]: \[z_{i}^{(\mathcal{R})}=\mu^{(\mathcal{R})}\odot\sigma^{(\mathcal{R})}+\epsilon \sim\mathcal{N}(\mathbf{0},\mathbf{I}) \tag{2}\] where \(z_{i}^{(\mathcal{R})}\in\mathbb{R}^{D_{(\mathcal{R})}}\), \(D_{(\mathcal{R})}\) is the pre-defined latent space dimension. Then the latent representations are concatenated: \(z_{i}=[z_{i}^{\prime};z_{i}^{A};z_{i}^{D};z_{i}^{C}]\). \(z_{i}\) is used to initialise the decoder and reconstruct the target utterance: \[u_{i}^{j}=Softmax(Decoder(z_{i},u_{i}^{<j})) \tag{3}\] where \(Softmax\) denotes the softmax operation, \(u_{i}^{j}\) denotes the \(j\)-th generated tokens, and \(u_{i}^{<j}\) denotes the previously generated tokens. We utilise BART-Large decoder [13] as \(Decoder\), since it shares the vocabulary with RoBERTa-Large in Huggingface [47] implementations and is proved powerful in many generative tasks. As in standard VAE, we include a KL-divergence term for each latent space to keep the approximate posterior close to the prior distribution. During training, we utilise the Evidence Lower BOund (ELBO) as the training objective: \[\begin{split}&\mathcal{L}_{ELBO}(\phi,\theta)=-\mathbb{E}_{q_{\phi}(z_ {i}|x)}\left[log\ p_{\theta}(x|z_{i})\right]+\\ &\sum_{\mathcal{R}\in\{V,A,D,C\}}\alpha_{\mathcal{R}}KL\left[q_{ \phi}^{(\mathcal{R})}(z_{i}^{(\mathcal{R})}|x)||p(z_{i}^{(\mathcal{R})})\right] \end{split} \tag{4}\] where \(\phi\) and \(\theta\) denote the parameters of the encoder and decoder, each \(\alpha_{\mathcal{R}}\) weights the corresponding KL-divergence term, and standard Gaussian prior is used for each \(p(z_{i}^{(\mathcal{R})})\). ### _Enhancing VAD Representations_ We aim to enhance the disentangled VAD representations considering the following two aspects: (a). _informativeness_: the representation should include enough information to predict the corresponding generative factor well [20, 48]. (b). _Independence_: for each generative factor, the representation should lie in an independent latent space [17]. Therefore, we introduce supervision signals from a sentiment lexicon to enforce _informativeness_ and a mutual information minimisation objective to enforce _independence_. Fig. 2: Main components of VAD-VAE. Each latent representation \(z^{(\mathcal{R})}\) is sampled from a Gaussian distribution estimated from the context-aware utterance representation \(r\). The VAD prediction \(P^{(\mathcal{R})}\) is obtained from \(z^{(\mathcal{R})}\) via the VAD predictors. The concatenated representation \(z\) is utilised for both ERC and target utterance reconstruction. In the utterance decoder, “\(\langle cls\rangle\)” denotes the start-of-sentence token, and “\(\langle eos\rangle\) denotes the end-of-sentence token. #### 3.3.1 Informativeness To enhance the representation's ability to predict the corresponding generative factor, we introduce supervision signals from NRC-VAD [21], a VAD sentiment lexicon that contains reliable human ratings of VAD for 20,000 English terms. All NRC-VAD terms denote or connote emotions and are selected from commonly used sentiment lexicons and tweets. Each term is strictly annotated via best-worst scaling with crowdsourcing annotators, and an aggregation process calculates the VAD for each term ranging from 0 to 1. For example, the emotion _happiness_ is assigned \(vad_{happiness}=\{0.960,0.732,0.850\}\). More details about NRC-VAD are in Sec. 4.1. With the pre-defined categorical emotion set \(E\), we extract the VAD score \(vad_{e_{j}}=\{vad_{e_{j}}^{V},vad_{e_{j}}^{A},vad_{e_{j}}^{D}\}\)for each of the emotion \(e_{j}\in E\) from NRC-VAD, where \(j\in[1,|E|]\). Since fine-grained VAD supervision signals are introduced, we expect to improve both the _informativeness_ of VAD representations and model performance on ERC. Specifically, for each \(\hat{\mathcal{R}}\in\{V,A,D\}\), we compute the corresponding prediction from the latent representation using a feed-forward neural network predictor, and a sigmoid function is utilised to map the prediction to the range \((0,1)\): \[P_{i}^{(\hat{\mathcal{R}})}=\frac{1}{1+e^{-(z_{i}^{(\hat{\mathcal{R}})}W^{( \hat{\mathcal{R}})}+b^{(\hat{\mathcal{R}})})}} \tag{5}\] where \(W^{(\hat{\mathcal{R}})}\) and \(b^{(\hat{\mathcal{R}})}\) are parameters of the predictor corresponding to \(\hat{\mathcal{R}}\). As the training objective, we compute the mean squared error loss between the predictions and the supervision signals: \[\mathcal{L}_{INFO}(\phi,\lambda)=\frac{1}{N}\sum_{i=1}^{N}\sum_{\hat{\mathcal{ R}}\in\{V,A,D\}}(P_{i}^{(\hat{\mathcal{R}})}-vad_{y_{i}}^{(\hat{\mathcal{R}})})^{2} \tag{6}\] where \(\phi\) and \(\lambda\) denote the parameters of the encoder and the predictor, \(y_{i}\) denotes the emotion label of \(i\)-th utterance, and \(N\) denotes the batch size. #### 3.3.2 Independence We improve the independence of all disentangled latent spaces by making their distributions as dissimilar as possible. A common method is to minimise the Mutual Information (MI) [49] between each pair of latent variables: \(\mathbf{I}(\hat{\mathcal{R}}_{i};\hat{\mathcal{R}}_{j})\), which is defined as: \[\mathbf{I}(\hat{\mathcal{R}}_{i};\hat{\mathcal{R}}_{j})=\mathbb{E}_{p(\hat{ \mathcal{R}}_{i},\hat{\mathcal{R}}_{j})}\Big{[}log\frac{p(\hat{\mathcal{R}}_{i },\hat{\mathcal{R}}_{j})}{p(\hat{\mathcal{R}}_{i})p(\hat{\mathcal{R}}_{j})} \Big{]} \tag{7}\] where \(\hat{\mathcal{R}}_{i}\), \(\hat{\mathcal{R}}_{j}\in\{V,A,D\}\) and \(i\neq j\). However, MI is hard to calculate in high-dimensional spaces. The conditional distribution between each pair of latent variables is also unavailable in our cases. Therefore, we utilise the variational Contrastive Log-ratio Upper-Bound (vCLUB) proposed by Cheng et al. [22] to estimate and minimise an upper bound of MI, which is calculated as follows: \[\begin{split}\mathbf{I}_{vCLUB}(\hat{\mathcal{R}}_{i};\hat{ \mathcal{R}}_{j}):=\mathbb{E}_{p(\hat{\mathcal{R}}_{i},\hat{\mathcal{R}}_{j}) }[log\ q_{\delta}(\hat{\mathcal{R}}_{j}|\hat{\mathcal{R}}_{i})]-\\ \mathbb{E}_{p(\hat{\mathcal{R}}_{i})}\mathbb{E}_{p(\hat{\mathcal{ R}}_{j})}[log\ q_{\delta}(\hat{\mathcal{R}}_{j}|\hat{\mathcal{R}}_{i})]\end{split} \tag{8}\] A variational distribution \(q_{\delta}(y|x)\) with parameter \(\delta\) is used to approximate \(p(y|x)\). In practice, we separately use a feed-forward neural network as an estimator to approximate the conditional distribution between each pair in VAD variables: \(P(\hat{\mathcal{R}}_{i}|\hat{\mathcal{R}}_{j})\) where \(i\neq j\), and the parameters are updated along with VAD-VAE at each time step. Unbiased vCLUB estimates between each pair are summed to get the MI minimisation loss as the training objective: \[\begin{split}\mathcal{L}_{MI}(\phi,\delta)=\frac{1}{N}\sum_{k=1 }^{N}\sum_{i,j}&\ \Big{[}log\ q_{\delta_{ij}}(z_{k}^{(\hat{\mathcal{R}}_{i})}|z_{k}^{(\hat{ \mathcal{R}}_{j})})-\\ &\ \frac{1}{N}\sum_{l=1}^{N}log\ q_{\delta_{ij}}(z_{l}^{(\hat{\mathcal{R} }_{i})}|z_{k}^{(\hat{\mathcal{R}}_{j})})\Big{]}\end{split} \tag{9}\] where \(\delta_{ij}\) denotes the parameters of the corresponding estimator. The detailed proof of Eqn. 8 and 9 is available in [22]. Since no extra supervision signals are introduced, we expect the model to still achieve comparable performance as a trade-off for more _independence_ of each latent space. ### _Model Training_ For ERC task, the concatenated latent representation \(z_{i}\) is utilised to compute the final classification probability: \[\hat{y}_{i}=Softmax(z_{i}W_{0}+b_{0}) \tag{10}\] where \(W_{0}\) and \(b_{0}\) are learnable parameters. Then we compute the ERC loss \(\mathcal{L}_{ERC}\) using standard cross-entropy loss: \[\mathcal{L}_{ERC}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{|E|}y_{j}^{j}log\ \hat{y}_{i}^{j} \tag{11}\] where \(y_{i}^{j}\) and \(\hat{y}_{i}^{j}\) are \(j\)-th element of \(y_{i}\) and \(\hat{y}_{i}\). Finally, we combine all proposed modules and train in a multi-task learning manner: \[\mathcal{L}=\mathcal{L}_{ERC}+\mu_{E}\mathcal{L}_{ELBO}+\mu_{I}\mathcal{L}_{ INFO}+\mu_{MI}\mathcal{L}_{MI} \tag{12}\] where the \(\mu\)s are the pre-defined weight coefficients. ## 4 Experimental Settings ### _Datasets_ We evaluate our model on the following three benchmark datasets, where the statistics are listed in Table I: American TV show _Friends_. The pre-defined emotions are _neutral_, _sad_, _anger_, _disgust_, _fear_, _happy_, _surprise_. **DailyDialog**[52]: A dataset compiled from human-written daily conversations with only two parties involved and no speaker information. The pre-defined emotion labels are the Ekman's emotion types: _neutral_, _happy_, _surprise_, _sad_, _anger_, _disgust_, _fear_. We utilise information from **NRC-VAD**[21] in our methodology, a sentiment lexicon that contains human ratings of Valence, Arousal, and Dominance for more than 20,000 English words. All the terms in NRC-VAD denote or connote emotions. Specifically, the terms are collected from sentiment lexicons such as NRC emotion lexicon [53], General Inquirer [54], and ANEW [46]. These terms are first very strictly annotated via best-worst scaling with crowdsourcing annotators. Then an aggregation process calculates the VAD for each term ranging from 0 to 1. Take Valence as an example. The annotators are presented with four words at a time (4-tuples) and asked to select the word with the highest and lowest Valence. The questionnaire uses paradigm words that signify the two ends of the valence dimension. The final VAD scores are calculated from the responses: For each item, the score is the proportion of times the item was chosen as the best (highest V/A/D) minus the proportion of times the item was selected as the worst (lowest V/A/D). The scores were linearly transformed from interval 0 (lowest V/A/D) to 1 (the highest V/A/D). In NRC-VAD, the emotion prototypes of the labels for the three ERC datasets are listed in Table II. According to the assignments, most cluster centres reflect appropriate positions of the corresponding emotions in VAD space, where similar emotions are measurably closer to each other while maintaining a fine-grained difference to facilitate the model to distinguish them. For example, _happy_ stays closer to _excited_ than _anger_ in IEMOCAP. In addition, for all four datasets, positive and negative emotions are separated mainly by _neutral_ in the dimension Valence, while the emotions within each sentiment polarity mainly differ in Arousal and Dominance. We also notice that human-labeled utterance-level VAD scores are available in IEMOCAP. Therefore, we further explore utilising these utterance-level VAD scores instead of NRC-VAD to supervise informativeness. The aggregation process calculates the VAD for each utterance ranging from 1 to 5. In our experiments, we linearly transform all VAD scores to the range \([0,1]\) during inference. ### _Baseline Models_ We select the following baseline models for comparison: **TL-ERC**[7]: The method pre-trains an encoder-decoder architecture on large-scale conversation data, then the weights of the encoder are transferred to ERC. **BERT-Large**[55]: Following the standard pretraining-finetuning paradigm, the model is initialised from the pre-trained weights of BERT-Large, then a linear classifier follows the output of the BERT encoder for ERC task. **DialogXL**[10]: Based on the XLNet [56], this work combines four types of dialogue-aware self-attention (global self-attention, local self-attention, speaker self-attention, listener self-attention) to model inter- and intra-speaker dependencies and proposes an utterance recurrence mechanism to model the long-range contexts. **DAG-ERC**[27]: Utilising RoBERTa-Large [45] as the context-independent utterance encoder, this model builds a directed acyclic graph on the dialogue and uses a multi-layer graph neural network to aggregate the information on the graph. The node representations are utilised for ERC classification. **SKAIG**[28]: This work extracts psychological commonsense knowledge and builds a graph on the dialogue according to different aspects of the knowledge. The corresponding knowledge representations are used to enhance the edge representations. **COSMIC**[29]: This work uses the RNN to model the dialogue history for each participant and the context information. It also extracts utterance-level commonsense knowledge to model the speakers' mental states and attentively infuses the knowledge into the utterance representations. **Dis-VAE**[32]: This work utilises a VAE to model discourse information between utterances in an unsupervised manner and combines the learnt latent representations to the ERC encoder. **SGED**[57]: This method proposes a speaker-guided encoder-decoder framework for ERC to model the intra- and inter-speaker dependencies in a dynamic manner. **CoG-BART**[12]: Based on BART-Large, this work utilises contrastive learning and a response generation task to enhance utterance representations. **SPCL**[36]: This work proposes a supervised prototypical contrastive learning to improve performance on class-imbalanced data. Curriculum learning is also utilised to further enhance the model. **CoMPM**[58]: This work proposes a context embedding module and a speaker-aware memory module to efficiently model the conversation and utterance-speaker relations. ### _Implementation Details_ We conduct all experiments using a single Nvidia Tesla A100 GPU with 80GB of memory. We initialise the pre-trained weights of all PLMs and use the tokenisation tools both provided by Huggingface [47]. We leverage AdamW optimiser [59] to train the model. We use the weighted-F1 measure as the evaluation metric for MELD and IEMOCAP. Since "neutral" occupies most of DailyDialog, we use micro-F1 for this dataset and ignore the label "neutral" when calculating the results as in the previous works [12, 27]. \begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline **IEMOCAP** & neutral & frustrated & sad & anger & excited & happy & – \\ \hline Valence & 0.469 & 0.060 & 0.052 & 0.167 & 0.098 & 0.940 & – \\ Arousal & 0.184 & 0.270 & 0.288 & 0.865 & 0.931 & 0.732 & – \\ Dominance & 0.357 & 0.280 & 0.164 & 0.657 & 0.709 & 0.850 & – \\ \hline \hline **MED** & neutral & gy & surprise & anger & sad & disgust & fear \\ \hline Valence & 0.469 & 0.980 & 0.875 & 0.167 & 0.052 & 0.072 & 0.073 \\ Arousal & 0.184 & 0.824 & 0.875 & 0.865 & 0.288 & 0.775 & 0.840 \\ Dominance & 0.357 & 0.794 & 0.562 & 0.657 & 0.164 & 0.317 & 0.295 \\ \hline \hline **DailyDialog** & neutral & anger & disgust & fear & happy & sad & surprise \\ \hline Valence & 0.469 & 0.167 & 0.052 & 0.073 & 0.960 & 0.052 & 0.875 \\ Arousal & 0.184 & 0.865 & 0.775 & 0.840 & 0.732 & 0.288 & 0.875 \\ Dominance & 0.357 & 0.657 & 0.317 & 0.293 & 0.850 & 0.164 & 0.562 \\ \hline \hline \end{tabular} \end{table} TABLE II: The NRC-VAD assignments to all emotions in the four datasets. The BART decoder brings about 1/3 more parameters to VAD-VAE than most of the selected baselines. Therefore, we also provide the results with a single-layer uni-directional LSTM utterance decoder to reduce the parameters and provide a more fair comparison. All hyper-parameters are tuned on the validation set. We also provide more implementation details: The batch size of experiments on all datasets is set to 4 except in DailyDialog, which is 16. Each iteration of training takes less than 0.75 GPU hours. We use a linear warm-up learning rate scheduling [60] of warm-up ratio 20\(\%\) and a peak learning rate 1e-5. We set a dropout rate of 0.1 and an L2-regularisation rate of 0.01 to avoid over-fitting. We do hyper-parameter search and determine \(D_{(V)}=D_{(A)}=D_{(D)}=64\), \(D_{(C)}=832\), \(D_{h}\) = 1024, \(\mu_{E}=0.8\), \(\mu_{I}=1.0\). \(\mu_{MI}\) is tuned between \([0.001,0.01]\) on each validation set. More hyper-parameters will be listed within the source code. The names and versions of relevant software libraries and frameworks will be described in the source code. All reported results are averages of five random runs. ## 5 Results and Analysis ### _Overall Performance_ We present the performance of VAD-VAE and the baseline models on the three benchmark datasets in Table III. According to the results, BERT-Large and DialogXL outperform TL-ERC on all datasets, showing the advantages of PLM-based methods over RNN-based models that pre-train from scratch. Following this trend, the rest of the baseline models and our VAD-VAE all utilise RoBERTa as the utterance encoder (except CoG-BART which uses BART). COSMIC explicitly introduces mental state information to enrich the contexts, and the performance improves significantly on simple-context datasets MELD and DailyDialog. Dis-VAE, SGED, and CoMPM implicitly introduce speaker-related information and achieve over 68% on IEMOCAP. To enhance the utterance representations, DAG-ERC and SKAIG build dialogue-level graphs to introduce priors on context modelling and perform well on all datasets. The competitive performance of CoG-BART and SPCL also proves the effectiveness of contrastive learning and response generation. Overall, VAD-VAE achieves new state-of-the-art performance 70.22% on IEMOCAP, 62.14% on DailyDialog, and very competitive performance 65.94% on MELD. Our model reaches an impressive 4.04% improvement on IEMOCAP and a 5.85% improvement on DailyDialog over CoG-BART, showing the advantage of VAE-based target utterance reconstruction over response generation. Test results also show that VAD-VAE outperforms Dis-VAE on all three datasets. We notice that Dis-VAE only builds a VAE to model the discourse information, while VAD-VAE uses VAE to reconstruct the target utterance directly. The results prove that target utterance reconstruction is appropriate to leverage the full potential of VAE. In addition, our method introduces NRC-VAD information and outperforms several strong knowledge-enhanced methods. For example, VAD-VAE outperforms COSMIC by over 4% on both IEMOCAP and DailyDialog, which also utilises RoBERTa-Large as the utterance encoder and introduces mental state knowledge. These advantages reflect the effectiveness of the VAD supervision signals. With the LSTM utterance decoder, VAD-VAE\({}_{LSTM}\) still outperforms previous state-of-the-art models on both IEMOCAP and DialyDialog. These results further prove the effectiveness of our proposed VAD-VAE structure under approximately the same number of parameters. In addition, VAD-VAE\({}_{LSTM}\) also achieves slightly worse performances on all datasets compared to VAD-VAE, which shows that a more powerful utterance decoder can benefit more on ERC. However, as VAD-VAE\({}_{LSTM}\) is more efficient in training, we recommend using the LSTM utterance decoder in most cases. We can only test VAD-VAE\({}_{H}\) on IEMOCAP since all other datasets do not provide human-labeled utterance-level VAD scores. According to the results, VAD-VAE\({}_{H}\) only achieves a comparable performance of 70.2% with VAD-VAE, which does not correspond with our hypothesis that fine-grained utterance-level VADs would provide more useful information. We notice that NRC-VAD follows strict best-worst scaling annotation and aggregation processes with a minimum of 6 annotators per word. In comparison, the IEMOCAP VAD annotation process follows a rough rule with two annotators for each utterance, which can bring inaccuracy to many labels and weaken the advantage of context-dependent VAD labels. ### _Ablation Study_ To investigate the effect of each module, we provide ablation analysis in Table IV. "-" denotes removing a module. "vCLUB" denotes the MI minimisation modules. "VAE Decoder" denotes the VAE decoder module for target utterance reconstruction. "V Sup.", "A Sup.", and "D Sup." denote the NRC-VAD supervision signals corresponding to Valence, Arousal, and Dominance. "Ulter. Encoder" directly trains an ERC model on the context-aware utterance encoder. \begin{table} \begin{tabular}{l|c c c} \hline \hline Model & IEMOCAP & MELD & DailyDialog \\ \hline TL-ERC & 59.30 & 57.46 & 52.46 \\ BERT-Large & 60.98 & 61.50 & 54.09 \\ DialogXL & 65.94 & 62.41 & 54.93 \\ \hline COSMIC & 65.28 & 65.21 & 58.48 \\ Dis-VAE & 68.23 & 65.34 & 60.95 \\ SGED & 68.53 & 65.46 & – \\ CoMPM & 66.33 & 66.52 & 60.34 \\ \hline DAG-ERC & 68.03 & 63.65 & 59.33 \\ SKAIG & 66.96 & 65.18 & 59.75 \\ COG-BART & 66.18 & 64.81 & 56.29 \\ SPCL & 69.74 & **67.25** & – \\ \hline VAD-VAE & **70.22\({}^{\star}\)**(\(\pm\)0.85) & 64.96(\(\pm\)0.19) & **62.14\({}^{\star}\)**(\(\pm\)0.23) \\ -vCLUB & 69.19(\(\pm\)0.66) & 65.94(\(\pm\)0.31) & 61.23\({}^{\star}\)**(\(\pm\)0.77) \\ VAD-VAE\({}_{LSTM}\) & 69.92 & 64.6 & 61.96\({}^{\star}\) \\ -vCLUB & 69.72 & 64.89 & 61.4\({}^{\star}\) \\ VAD-VAE\({}_{H}\) & 70.2\({}^{\star}\) & – & – \\ \hline \hline \end{tabular} \end{table} TABLE III: Test results on IEMOCAP, MELD and DailyDialog. “-vCLUB” denotes VAD-VAE trained without the vCLUB loss \(\mathcal{L}_{MI}\). VAD-VAE\({}_{LSTM}\) denotes the results with an LSTM utterance decoder. VAD-VAE\({}_{H}\) denotes the results using the human-labeled utterance-level VAD scores. The numbers with \(\star\) indicate that the improvement over all baselines is statistically significant with p \(<\) 0.05 under t-test. Best values are highlighted in bold. According to the results, VAD-VAE achieves comparable performance with "-vCLUB", which corresponds with our early hypothesis. With vCLUB loss added, the performance improves on IEMOCAP and DailyDialog. A possible reason is that MI minimisation enforces the model to learn dissimilar representations for VAD, which corresponds with the orthogonal nature of VAD in psychology. Without the VAE Decoder, the performance drops significantly on all datasets, which further indicates the effectiveness of the target utterance reconstruction task. "-V, A, D Sup." leads to significant drops in all datasets, which proves our hypothesis that NRC-VAD supervision signals also provide fine-grained information to enhance ERC performance. In further comparisons of separately removing supervision signals for Valence, Arousal, and Dominance, the performance drops most with "D Sup." removed for IEMOCAP and MELD and "A Sup." removed for DailyDialog. We notice that the sentiment polarity of emotions is primarily determined by Valence, and similar emotions mainly differ in Arousal and Dominance. Therefore, these results show that our model benefits more from fine-grained information from Arousal and Dominance to distinguish similar emotions. ### _Context Influence_ In previous experiments, we set both past and future context window sizes \(W_{p}\) and \(W_{f}\) as wide as possible (the maximum input length of RoBERTa encoder is 512) to provide more contextual information. In Figure 3, we further present VAD-VAE performances with different context window sizes to investigate the influence of multi-range contextual information and guide future research directions on context modeling. According to the results, both past and future contexts significantly enhance the model performance compared to no context (window size 0), but \(past\&future\) further outperforms both _only past_ and _only future_ on all window sizes, which shows that VAD-VAE benefits from combined emotional reasoning with past and future contexts. In addition, though the nearest 5 contexts provide most of the improvement (over 10%) for \(past\&future\) and _only past_, their performances continuously increase with wider window sizes, proving that long-range conversational history also provides useful information. On the other hand, the performance of _only future_ reaches the top with a window size of 10 but decreases with longer contexts. These results denote near-future contexts as more indicative of emotional cues for the target utterance, while unrelated longer future contexts can bring noises to the emotional reasoning. ### _Disentanglement Evaluation_ We analyse the effects of VAD supervision signals (\(\mathcal{L}_{INFO}\)) and MI minimisation (\(\mathcal{L}_{MI}\)) on enhancing VAD disentanglement. In Table V, we present the Pearson's correlation coefficients between the predicted VAD scores from latent representations and the supervision signals from NRC-VAD on all three test sets. Higher values indicate more precise predictions, denoting better _Informativeness_. We also provide the average vCLUB estimates of MI between VAD latent distributions on each test set, with lower values denoting lower estimates of MI upper bounds and better _Independence_. #### 5.4.1 Informativeness According to the results, the model performs poorly on all datasets (Pearson's correlation coefficients below 0.2 in most cases) with VAE reconstruction loss (\(\mathcal{L}_{ELBO}\)) or \(\mathcal{L}_{MI}\) introduced, since VAD features can be embedded in Content space without specific supervisions. We observe significant improvement in _Informativeness_ for VAD with \(\mathcal{L}_{INFO}\), which brings over 0.5 Pearson's correlation coefficients gain for IEMOCAP and 0.3 for MELD and DailyDialog. These results reflect the effectiveness of NRC-VAD supervision signals. On top of \(\mathcal{L}_{INFO}\), \(\mathcal{L}_{MI}\) further improves the Pearson's correlation coefficients scores in most cases, which shows that MI minimisation also helps to enhance _Informativeness_ of VAD representations to some extent. #### 5.4.2 Independence For all datasets, The vCLUB estimates remain high with only \(\mathcal{L}_{ELBO}\) introduced since the unified distributed representation in VAE encourages strong correlations between each part. With \(\mathcal{L}_{INFO}\), we observe even higher vCLUB in MELD and DailyDialog. In this case, our model is only optimised for _Informativeness_, and enforces full utilisation of all latent spaces, which leads to high MI. Introducing only \(\mathcal{L}_{MI}\) has the lowest vCLUB in all datasets. However, it achieves bad performance in _Informativeness_. With both \(\mathcal{L}_{MI}\) and \(\mathcal{L}_{INFO}\), VAD-VAE not only achieves the best \begin{table} \begin{tabular}{l|c c c} \hline \hline Method & IEMOCAP & MELD & DailyDialog \\ \hline VAD-VAE & **70.22** & 64.96(\(\downarrow\)0.98) & **62.14** \\ -vCLUB & 69.19(\(\downarrow\)1.03) & **65.94** & 61.23(\(\downarrow\)0.91) \\ -VAE Decoder & 67.92(\(\downarrow\)2.30) & 63.99(\(\downarrow\)1.95) & 60.16(\(\downarrow\)1.98) \\ \hline -V, A, D Sup. & 66.85(\(\downarrow\)3.37) & 63.99(\(\downarrow\)1.95) & 61.01(\(\downarrow\)1.13) \\ -V Sup. & 67.60(\(\downarrow\)2.62) & 64.20(\(\downarrow\)1.74) & 61.92(\(\downarrow\)0.22) \\ -A Sup. & 68.06(\(\downarrow\)2.16) & 64.59(\(\downarrow\)1.35) & 61.38(\(\downarrow\)1.07) \\ -D Sup. & 67.16(\(\downarrow\)3.06) & 64.03(\(\downarrow\)1.91) & 61.66(\(\downarrow\)0.48) \\ \hline Utter. Encoder & 66.52(\(\downarrow\)3.70) & 63.7(\(\downarrow\)2.24) & 59.80(\(\downarrow\)2.34) \\ \hline \hline \end{tabular} \end{table} TABLE IV: Results of ablation study. Best values are highlighted in bold. Fig. 3: The performance of VAD-VAE on IEMOCAP with different context window sizes in the input sequence. We separately present results with both past and future contexts and only past/future contexts. “all” denotes setting the corresponding window sizes as wide as possible. results in VAD predictions but also greatly decreases vCLUB compared with only \(\mathcal{L}_{INFO}\), showing the satisfactory trade-off between _Informativeness_ and _Independence_. #### 5.4.3 ERC-Independence Trade-off To further investigate the influence of MI minimisation on ERC performance, we present the results of tuning vCLUB loss coefficients \(\mu_{MI}\) on the three test sets in Figure 4. For all datasets, the MI estimates decrease as \(\mu_{MI}\) increases, showing the effectiveness of the vCLUB method. Meanwhile, ERC performance decreases at acceptable rates. For IEMOCAP, with \(\mu_{MI}\)=0.005, all MI upper bounds steadily drop below 0.3 while keeping a competitive ERC performance: over 67%. As \(\mu_{MI}\) further increases, there is no apparent decrease in MI estimates. We have similar observations for MELD at \(\mu_{MI}\)=0.006. On the other hand, for DailyDialog, all MI estimates drop rapidly below 0.1 with small \(\mu_{MI}\)=0.001. According to the statistics in Table I, DailyDialog has more abundant training data, which provides rich information for understanding Valence, Arousal, and Dominance. Simpler contexts of DailyDialog can also facilitate VAD disentanglement. From the perspective of each VAD pair, DailyDialog achieves high disentanglement for all VAD pairs, while on the other two datasets, the performances are uneven. Valence-Dominance in IEMOCAP and Valence-Arousal in MELD remain above 0.2. These results show that the pair-wise difficulties of VAD disentanglement are possibly related to the complexity of the contexts. ### _Disentangled Representations Visualisation_ To conduct a more interpretable analysis of the disentangled representations, we present the UMAP [61] visualisations of VAD representations in the IEMOCAP test set for four representative emotions in Figure 5. Their corresponding NRC-VAD assignments are presented in Table II. As shown, for Valence and Dominance, positive and negative emotions are well separated, while emotions within one polarity overlap. In the visualisation of Arousal, "happiness", "excited" and "frustration" lie close while "sadness" separates away. These observations correspond with the NRC-VAD assignments, indicating the quality of the learnt VAD representations. In addition, the distribution of each emotion shows continuity and completeness conditions. Fig. 4: The performance of VAD-VAE on ERC and the MI estimates between VAD pairs with different vCLUB weight coefficients \(\mu_{MI}\) on the three test sets. Fig. 5: UMAP visualisations of Valence, Arousal, Dominance and Content representations for IEMOCAP test results. \begin{table} \begin{tabular}{l||c c c c|c c c c|c c c} \hline \hline & \multicolumn{4}{c|}{IEMOCAP} & \multicolumn{4}{c|}{MELD} & \multicolumn{4}{c}{DailyDialog} \\ \hline Methods & V & A & D & MI & V & A & D & MI & V & A & D & MI \\ \hline \(\mathcal{L}_{ELBO}\) & 0.434 & 0.156 & 0.094 & 0.732 & 0.021 & 0.133 & 0.152 & 0.398 & 0.071 & 0.029 & -0.017 & 0.356 \\ \(+\mathcal{L}_{MI}\) & 0.306 & 0.132 & 0.150 & **0.133** & 0.044 & 0.230 & 0.141 & **0.068** & 0.115 & -0.026 & -0.016 & **0.094** \\ \(+\mathcal{L}_{INFO}\) & **0.882** & 0.708 & 0.743 & 0.531 & 0.565 & **0.643** & 0.557 & 0.885 & 0.424 & 0.348 & 0.321 & 0.789 \\ \(+\mathcal{L}_{INFO}\)\(+\mathcal{L}_{MI}\) & 0.872 & **0.715** & **0.765** & 0.355 & **0.568** & 0.639 & **0.558** & 0.312 & **0.433** & 0.355 & **0.330** & 0.226 \\ \hline \hline \end{tabular} \end{table} TABLE V: The Pearson’s correlation coefficients between predicted VAD scores and supervision signals on all test sets, and the average vCLUB estimates of MI between VAD latent distributions. Best values are highlighted in bold. We observe very regularised latent distributions for Content. The distribution of each emotion is close to a Gaussian distribution and shows continuity and completeness conditions for "frustration", "sadness" and "happiness" utterances. However, the representations of "excited" and "happiness" still overlap, possibly due to the similar way of expressing excited and happy emotions. Overall, with the well learnt disentangled VAD representations and sound properties of the content space, one direction of future work is to explore the potential of VAD-VAE in the controlled affective text generation task. Unlike previous works, which control categorical emotions, our work aims to control more fine-grained sentiments by adjusting Valence, Arousal, and dominance separately. To provide a more intuitive view, we provide a case of VAD-based controlled generation in Figure 6. The VAD representations of the neutral target utterance are replaced by the VAD representations of a randomly sampled excited utterance. As shown, the utterance decoder can adjust the tones of the utterance according to the input VAD representations. For example, with the new 0.908 Valence representation, the reconstructed utterance "so glad I got these at a reasonable price!" shows more pleasantness. ### _Robustness Evaluation_ In Figure 7, we evaluate the robustness of disentangled representations by randomly replacing a percentage (0% to 50%) of training labels with false labels, then comparing the performance decrease rate (For \(\alpha\%\) replacement, the rate is computed as \(\frac{F1\,for\,\alpha\%\,replacement}{F1\,for\,0\%\,replacement}\)) of VAD-VAE and the vanilla VAE method. Higher rates indicate better performance and more robustness. According to the results, disentangled VAD representations achieve higher performance decrease rates than entangled representations on all datasets. For example, VAD-VAE outperforms VAE by an average of 12.24% with all label replacement percentages (orange shaded area). With 50% of random label replacement, VAD-VAE retains over 75% of performances on all test sets while the performances of VAE all fall below 70%. These results show that VAD-VAE can tolerate a higher level of misinformation during training, indicating the robustness of disentangled VAD representations over the entangled representations. A possible reason is that disentangled representations are explicitly trained to extract VAD information, which provides helpful guidance during inference when other features are misleading. ### _Case Study for Target Utterance Reconstruction_ Target utterance reconstruction enables VAD-VAE to learn long-range dependencies and outperforms the response generation-based model CoG-BART. We provide four cases \begin{table} \begin{tabular}{c c c c c} \hline \hline Target Utterance & Next Utterance & Context & CoG-BART & VAD-VAE & Human \\ \hline \#1 Chandler: It’s all finished! & Joy: This was Carol’s favorite beer. She always drank it out of can, I should have & Chandler: That’s OK, anyway. & & \\ & & Joy: Tell Monce’s we are **done with** & & \\ & & **the bookcase at last!** & & \\ \hline \#2 James: OK, that’s OK. Improvise & James: Well, what about we go to a great & James: Well, you’ve got to tell me & & \\ & is just good. & jazz show now, and um- & the details. What did he say? & & \\ & & Joy: I don’t don’t don’t know **how much** & & \\ & & **plan** he had done. & & \\ \hline \#3 Mary: It’s a good idea to live together & James: I think so, too. Are you going & Linda: I have to think about moving now. & & \\ & before you get married. & to get a house or. & We are going to **move in together** & \\ & & & Monce: I thought you two already & \\ & & & did that. & \\ \hline \hline \end{tabular} \end{table} TABLE VI: Four cases where target utterance reconstruction leverages the critical context of the target utterance for emotion reasoning, while the next utterance provides limited information. “CoG-BART”, “VAD-VAE” and “Human” denotes the corresponding predictions for response generation (CoG-BART), target utterance reconstruction (VAD-VAE) and the human-annotated emotion labels. Key information in the context: bold. Fig. 6: A case where the target utterance is reconstructed via VAD-VAE by separately adjusting Valence, Arousal, and Dominance. Fig. 7: Performance decrease rate of VAD-VAE and vanilla VAE on three ERC test sets with different random replacement percentages of training labels. from the test results of VAD-VAE and CoG-BART in Table VI for further investigation. According to our observation, there are mainly two scenarios where response generation provides limited information and mispredicts the emotion: (a) Change of the discussion topic in the response. In the next utterance of case #1, the discussion topic changes from "Finish assembling the bookcase" to "Carol's favorite beer". In the next utterance of case #2, the topic changes from "His plan" to "Go to a great jazz show". In these cases, emotion-related clues in the response are irrelevant to the target utterance. (b) Indirect response to the current utterance. In case #3, the next utterance merely expresses agreement with the target utterance and expands the dialogue. In case #4, the response is simply sarcasm. These responses provide crucial clues for future utterances but little information for the target utterance. These scenarios are more common in multi-party dialogues. In contrast, VAD-VAE can learn complex inter- and intra-speaker influences since it learns critical information via the context-aware target utterance reconstruction. For example, VAD-VAE focuses on the previous discussion topic "done with the bookcase"(case #1) and "his plan"(case #2). It can also extract key information from long-range contexts, such as in cases #3 and #4. These advantages allow VAD-VAE to make correct emotion predictions in these cases. ## 6 Conclusion This paper proposes a VAD-disentangled Variational Autoencoder for emotion recognition in conversations. We first introduce an auxiliary target utterance reconstruction task via the VAE framework. Then we disentangle three key features: Valence, Arousal, and Dominance, from the latent space. VAD supervision signals and a mutual information minimisation task are also utilised to enhance the disentangled VAD representations. Experiments show that VAD-VAE outperforms the state-of-the-art model on two ERC datasets. Ablation and case studies prove the effectiveness of the proposed VAE-based target utterance reconstruction task and VAD supervision signals. Further quantitative analysis and visualisation also show that VAD-VAE learns disentangled VAD representations with informativeness, independence and robustness. In the future, we will leverage these decent VAD representations to explore fine-grained emotion control for affective text generation by adjusting Valence, Arousal, and Dominance separately. ## Acknowledgments This work was supported in part by the Alan Turing Institute, UK. This work was also supported by the University of Manchester President's Doctoral Scholar award.
2304.14358
Structure Analysis of the FRP Rebar Using Computer Vision Techniques
In this paper we present a method to analyze the inner structure of the composite FRP rebar, namely the shift of the real center of gravity with a respect to the geometrical center of rebar and changes of cross-sectional characteristics. We propose an automated pipeline based on classical computer vision techniques and on the ratio between the glass fibers and epoxy filament in the analyzed cross-section to compute the shift vector of the real center of gravity in respect to the geometrical center together with the cross-section area and its principal moments. We discuss the achieved results over two cross sections in a different portion of the rebar and in the end, we suggest possible direction and improvements for our future work. We also made our code publicly available.
Juraj Lagin, Simon Bilik
2023-04-27T17:37:23Z
http://arxiv.org/abs/2304.14358v1
# Structure Analysis of the FRP Rebar Using Computer Vision Techniques ###### Abstract In this paper we present a method to analyze the inner structure of the composite FRP rebar, namely the shift of the real centre of gravity with a respect to the geometrical centre of rebar and changes of cross-sectional characteristics. We propose an automated pipeline based on classical computer vision techniques and on the ratio between the glass fibres and epoxy filament in the analyzed cross-section to compute the shift vector of the real centre of gravity in respect to the geometrical centre together with the cross-section area and its principal moments. We discuss the achieved results over two cross sections in a different portion of the rebar and in the end, we suggest possible direction and improvements for our future work. We also made our code publicly available. FRP rebar, center of the gravity, image segmentation, image morphology ## I Introduction Composite reinforcement is a widely used alternative for reinforcing concrete structures. The FRP (fibre-reinforced-polymer) is known as high tensile strength reinforcement made by _pultrusion_ technology. The reinforcement consists of fibres, matrix and surface treatment, which defines material characteristics of rebar. The straight FRP reinforcement is well-known and was described in many books or reviews, such as for example [1, 2]. However, for all-composite reinforced structures need to be designed indirect reinforcement as well. The first and oldest design code for bent FRP reinforcement was the JSCE 1997 [3]. The next generation of design codes is based on knowledge of this design approach - for example the American ACI 440.1R-15 [4] or Canadian CSA 806-12 [5] retrieved the approach of determination of ultimate tensile strength for bent FRP reinforcement, which depends on geometrical dimensions of rebar. In fact, bent reinforcement is made by bending a straight reinforcement while the manufacturing process, which for thermosetting material leads to to a reduction of the ultimate tensile strength of bent rebar. The subject of this study is to further describe the influence of inner structure bent FRP reinforcement for its ultimate tensile strength. Our study uses computer vision techniques for describing differences in inner structure straight and bent FRP reinforcement. ## II Related Research From point of inner structure view, the straight FRP rebars are quite widely described, as can be seen for example in [6, 7, 8]. The research of the inner structure of indirect FRP reinforcement was made just in scale visible by a bare eyed. In the studies [9, 10] could be clearly seen the effect of bending while manufacturing process, which leads to kinking fibres on the inner side of the bent portion of rebar and to a flattering of the cross-section of rebar as shown in Fig. 1. This effect causes a reduction of the ultimate tensile strength of bent FRP rebar, and it was already included in design code JSCE 1997 [3]. Computer vision techniques are today widely used in various applications including traffic [17], industrial inspection [15] or biology [14]. In this paper, we use classical morphological operations described for example in [13], Otsu method for segmentation threshold estimation [12] and the region properties analysis over a binary image. ## III Motivation The main motivation of this research was to describe the differences of the inner structure in the straight and bent portion of FRP rebar. For the purpose of calculating internal forces Fig. 1: Scheme of kinked kinked fibres along the bent portion of FRP rebar [10] and stresses in structural engineering, structural mechanics is usually used. The summary is described in many literatures such as [16]. Normally, the ultimate tensile strength is defined as a force acting on a rebar area, expressed by the equation: \[\sigma_{u}=\frac{N}{A}, \tag{1}\] where \(\sigma_{u}\) is the ultimate tensile strength, \(N\) load force, and \(A\) is the area of the cross-section. As described in the previous capture, the manufacturing process of bent FRP rebar leads to a flattening circle cross-section. The changes cause translocation and shifting of load-bearing fibres to the inner surface of the cross-section. As is the fibre density several times higher, the real centre of gravity of the cross-section is shifted from the geometric centre of gravity. The distance between them is called eccentricity. When the load force is not acting in the real centre of gravity, it creates an additional bending moment, calculated as: \[M=N\cdot e, \tag{2}\] where \(N\) is a load force and \(e\) is an eccentricity (distance) between the real and geometric centre of gravity. If the cross-section is loaded by load force and bending moment, the tensile stress in the arbitrary point of the cross-section could be expressed as: \[\sigma_{u}=\frac{N}{A}+\frac{M}{I_{y}}z-\frac{M}{I_{z}}y, \tag{3}\] where \(M\) is the bending moment, \(I_{y}\) and \(I_{z}\) are moment of inertia of cross-section, \(z\) and \(y\) are the distance of the considered point from the real centre of gravity of cross-section. The moment of inertia can be calculated: \[I_{y}=\int_{A}z^{2}\,dA,\text{ and }I_{z}=\int_{A}y^{2}\,dA, \tag{4}\] where \(A\) is an area of cross-section and \(z\) is the distance of the considered point from the real centre of gravity of the cross-section. Because the rotation of the cross-section in the image is random, the principal moments of inertia of the cross-section needs to be calculated. The principal moment of inertia also defines a principal axis \(1\) and \(2\). The principal moment of inertia can be calculated as: \[I_{1,2}=\frac{1}{2}\big{(}I_{y}+I_{z}\big{)}\pm\frac{1}{2}\sqrt{(I_{y}-I_{z})^ {2}+4{D_{yz}}^{2}}, \tag{5}\] where \(D_{yz}\) is a deviation moment of the cross-section calculated as: \[D_{yz}=\int_{A}yz\,dA, \tag{6}\] In a cross-section loaded by a combination of tensile force and bending moment, it is necessary to find the point where the greatest tensile stress is applied. According to structural mechanics, this point can be found based on the neutral axis (see Fig 2), which is defined by the points _1\({}_{n}\)_ and _2\({}_{n}\)_, calculated as: \[1_{n}=\frac{i_{2}^{2}}{e_{1}},\text{ and }2_{n}=\frac{i_{1}^{2}}{e_{2}}, \tag{7}\] where _1\({}_{n}\)_ and _2\({}_{n}\)_ are an inertia radius, expressed as: \[i_{1}^{2}=\frac{I_{1}}{A},\text{ and }i_{2}^{2}=\frac{I_{2}}{A}, \tag{8}\] In general, it could be assumed that cross-sections with a lower cross-sectional heights have a higher moment of inertia. This leads to higher tensile stress in critical locations of the cross-section in the same magnitude of the load. The aim of our study is to determine the changes in the area of cross-section, translation of the centre of gravity and influence of additional bending moment for the reduction of tensile strength of rebar. ## IV Materials and methods In this section, we describe our data, its collection and the method used to find the shift of the center of the gravity based on the ratio between the fibers and filament masses. ### _Input data description_ The samples for the study were cut from FRP reinforcement, made by E-glass load-bearing fibres, epoxy matrix and sand-coating surface treatment. The reinforcement was made by pultured method, nearly described in [11]. The radius of curvature was 30mm and the diameters of rebars _8mm_ and _10mm_. Samples were taken from FRP rebar in straight portion (cut n.1) and in a bent portion (cut n.3) as shown in Fig. 3. The samples used for our study (see Fig 4) were grinded by grinding discs with the roughness of surface _75um_, _35um_ and _8um_ after cutting. Then they were painted with a gold-plated layer and embedded in the vacuum chamber. Microscopical captures of the samples are shown in Fig. 5. The images of samples were obtained by electron microscope at 20 times enlarged. Fig. 2: Normal stress according eccentric tensile stress ### _Center of the gravity detection_ As a first step to detect the center of the gravity, we had to segment the image's ROI, which covers the central section of the cut without the bigger fibers on the side as could be seen in Fig. 5. The other areas, which should be removed are the darker grey spots and they are most likely caused by overheating the material while cutting and polishing. To achieve such segmentation, we decided to use a cascade of two morphological reconstructions for a rough and fine segmentation. The rough segmentation as a first step is performed over the gray scale image using opening-close reconstruction with the disk kernel of a high diameter. It is followed by multiple Otsu-threshold and it results in a rough mask, which still contains parts of the side fibres. The fine segmentation aims to remove the side fibres and the overheated parts on the cross section edges, which is done by unsymmetrical close-opening reconstruction with slightly stronger erosion using a disk kernel. This results into the fine mask and ROI segmentation, as shown in Fig. 6. In order to compute the real centre of the gravity, we assume that the segmented ROI contains only glass fibres and epoxy filament. To separate them and the background, we again apply multiple Otsu-threshold and we assigned the corresponding values of density and tensile strength to the fibres and epoxy which are together with the ROI used to compute the weighted center. To assign the corresponding values of densities, we compared the effect of a global and local approach. In the global Fig. 4: Samples of experiment. From upper left to lower right: _x8_1-1_, _x8_3-3_, _x10_1-1_, _x10_3-3_ Fig. 5: FRP rebar cuts used in our experiment. From upper left to lower right: _x8_1-1_, _x8_3-3_, _x10_1-1_, _x10_3-3_ Fig. 3: Cut section of experiment samples Fig. 6: Segmentation steps over the RFB rebar _x10_3-3_ sample. From upper left to lower right: original image, rough segmentation, binary mask, segmented ROI approach, we considered the brightest pixels as glass fibres and all other pixels under the mask as the epoxy filament. This approach nevertheless did not bring satisfactory results, because the brightness of the cross-section changes with the position. Therefore we used a sliding window of size 7x7 under which we assign the glass fiber density to the pixel with a maximal value and the epoxy density to the pixel of minimal value. Pixels between this range are considered as invalid and we assign them a zero weight. A comparison of the global and local approach is shown in Fig. 7 Geometrical centre is computed in a similar way over the binary mask without assigning any weights. In the end, we compute a shift vector between the geometrical and real centre of gravity, which is also displayed in the original image. To compute the distance in _mm_ from the distance in _pixels_, we use a scale contained in the header of the microscopic images. ### _Momentum and cross-section points calculation_ To compute principal moments of inertia \(I_{1}\), \(I_{2}\) together with the intersections of their axis and eccentricity line with the cross-section's mask, we had to first compute the moments \(I_{z}\), \(I_{y}\), deviation moment \(D_{yz}\). To compute \(I_{z}\), \(I_{y}\) and \(D_{yz}\), we iteratively apply morphological erosion with the size of one pixel to the given mask and compute difference between the previous and eroded image. For every pixel of such obtained contour, we compute its contribution to the equations 4 and 6 until there were no remaining non-zeros pixels in the difference image. Using those values, we compute the principal moments of inertia \(I_{1}\), \(I_{2}\) described with equation 5 and rotation of their axes with the respect to the centre of the gravity axes. Using the \(I_{1}\), \(I_{2}\) axis direction, we compute the intersection points with the given mask and rotate their coordinates with the respect to the \(I_{1}\), \(I_{2}\) axis. The same is applied for the intersection of the eccentricity line in the direction of the centre of the masses shift vector. To find the intersection point coordinates, we test if the mask pixel under the given line has value of zero or one. After we find an edge, we save the pixel coordinates as an intersection point in a given direction. ## V Experimental results As a first part of our experiment, we compare the effect of the sample's shape on the resulting shift vector. Obtained results are shown in Tab. I, vector size and angle are shown with respect to the Cartesian coordinate system with the beginning in the lower left image corner. The density of the glass fibres was set as 2600 kg/m \({}^{3}\) and the density of the epoxy filament as 1300 kg/m \({}^{3}\). We could see that the curvature of the rebar significantly affects the resulting shift vector size and angle in the comparison with the non-bended cross-sections - the resulting shift of the elliptic cross-sections _3-3_ could be considered as approximately 10x higher than by the circular cross-sections _1-1_. Computations of the geometrical centre of the sample _10_3-3_ seems to be misleading, because the sample was too big to fit in the microscope completely. The resulting center of the mass shifts, axes and intersection points of the analyzed samples are shown in Fig. 8. Predicted and obtained cross-sectional characteristics compared in the second part of our experiment are shown in Tab II: where \(A_{s,p}\) is a predicted area of cross-section according to declared diameter by the manufacturer, and \(A_{s}\), \(I_{1}\), \(I_{2}\) are Fig. 8: FRP rebar cuts, their centers of the mass (red - original, blue - shifted), axis (red - original, green - principal moments) and cross-section points. From upper left to lower right: \(\varnothing\)8_1-1, \(\varnothing\)8_3-3, \(\varnothing\)10_1-1, \(\varnothing\)10_3-3 Fig. 7: Assigning the weights over the RFB rebar \(\diameter\)10_3-3 sample. 7x7 local assigning result on the left and global assigning result on the right. obtained results of the area and principal moment of inertia. As part of the control, comparisons of the computed moment of inertia of a circular cross-section (but also simpler rectangular cross-sections) of the identical cross-sectional area were also processed. The observed deviations are low and comparing the measured and computed results, our proposed method could be considered to be functional. From the obtained results shown in Tab II, we can see a clear decrease in the effective cross-sectional area, which nevertheless may be partly caused by a deviation of our method. The most significant difference can be observed by the moment of inertia, caused by the change in cross-sectional shape. In our previous experiments carried out on a larger number of specimens, we investigated the strength of straight and bent rebars. The resulting tensile strength \(\sigma_{u,\mathit{exp}}\) was proved to be related to the declared cross-sectional area. Based on our obtained values shown in Tab II, it was possible to apply the experimentally determined average failure force to the 8mm diameter members and then to calculate the stresses in points C, A, E etc. (see Fig 9). The tensile stresses \(\sigma_{u,\mathit{exp}}\) and \(\sigma_{u,C}\) were calculated by equation 1. In points A and E is also a bending moment, so the tensile stresses \(\sigma_{u,A}\) and \(\sigma_{u,E}\) could be calculated by equation 3. Those calculations were performed only on the samples \(\mathcal{BN}\) because the sample \(\mathcal{BI}\)_3-3_ was not fully displayed. The obtained results are summarized in Tab III: The results clearly show an increase of the tensile stresses at the centre of gravity of the cross-section compared to the tensile stresses when considering the manufacturer's declared reinforcement diameter. Considering the applied eccentricity, an increase in the tensile stresses at the extreme point \(E\) was 2.55% for straight rebar and 21.10% for bent rebar. If we compare the maximum tensile stress in point E at straight (\(\mathcal{BN}\)_1-1_) and bent (\(\mathcal{BN}\)_3-3_) sections of the rebar, we can observe a difference of 54.86%. This difference indicates that the bending moment is not the only factor causing the reduction of the tensile strength of rebar and as was mentioned in the introduction of the paper, the overall reduction in tensile strength is also caused by the shrinkage of the fibres on the inner side of the bend, which causes that the fibres to lose their effectiveness. However, based on the results obtained in previous experiments, it can be argued that from the total experimentally determined tensile strength reduction of the reinforcement of 60.98%, approximately 6.12% can be attributed to the effect of bending moment and change in cross-sectional characteristics. Our code is available at 1. Footnote 1: [https://github.com/bonetl/FRP-Rebar-Structure-Analysis](https://github.com/bonetl/FRP-Rebar-Structure-Analysis) ## VI Conclusion In this study, we analysed microscopic samples of the FRP rebar cuts in order to compute the centre of the gravity shift vector with the respect to the rebar's geometrical centre. To achieve this, we presented a method how to automatically segment the rebar's ROI and how to compute shifted centre of gravity based on the given weights and ratio between the glass fibres and epoxy resin. Based on the obtained results was found the following: * We proved a decrease of the cross-section area based on a comparison of the experimentally obtained value and area computed from a manufacturer-declared diameter. * We observed that the centre of gravity translations for samples \(\mathcal{BI}\)_1-1_ and \(\mathcal{BN}\)_1-1_ were insignificant and that they might be caused by a deviation of our method. Sample \(\mathcal{BI}\)_3-3_ shows a translation of the centre of gravity in the direction of the outer surface. However the sample was not fully displayed, so the geometrical centre of gravity is not determined correctly; For the sample \(\mathcal{BN}\)_3-3_ the translation of the centre of the gravity to the inner surface of the cross-section is several times higher in comparison to \(\mathcal{BN}\)_1-1_. It suggests, that the fibres of rebar are concentrated on the inner surface of the cross-section; * In the case of straight reinforcement, the additional bending moment caused an increase in tensile stresses of 2.55%, which seems to be consistent with the measurement deviation mentioned above. However, for the bent Fig. 9: Scheme of points considered for calculation of tensile stress rebars there was an increase up to 21.10% and it can be argued that the effect of the bending moment on the reduction of the tensile strength of the bent reinforcement has been demonstrated; * By our previous experiments, the reduction of tensile strength was determined to be 60.98%. By comparing the tensile stresses at the point E of the cross-section of straight and bent reinforcement, it can be argued that the effect of bending moment on the overall tensile strength reduction is approximately 6.12%; * Weight assignment shown at Fig. 7 should be further improved in such way, that we decrease the number of the pixels with the zero weights; * Our experiments were performed on a low amount of samples and for determination of influence for reduction of tensile strength, experimental dataset should be increased; * As a future experiment with a bigger dataset, we suggest to use machine learning methods as semantic segmentation to automatically segment the ROI and to assign the weights. Also, a method should be developed for the automatic calculation of the neutral axis, based on which the point with the maximum tensile stress along the cross section can be determined. ## Acknowledgment The completion of this paper was made possible by grant No. FEKT-S-23-8451 - "Research on advanced methods and technologies in cybernetics, robotics, artificial intelligence, automation and measurement" financially supported by the Internal science fund of Brno University of Technology and by the grant "FW01010520 - Development of bent composite reinforcement for environmentally exposed concrete constructions" conducted on Institute of Concrete and Masonry Structures, Faculty of Civil Engineering - Brno University of Technology.
2303.11287
Simultaneous Color Computer Generated Holography
Computer generated holography has long been touted as the future of augmented and virtual reality (AR/VR) displays, but has yet to be realized in practice. Previous high-quality, color holographic displays have made either a 3$\times$ sacrifice on frame rate by using a sequential color illumination scheme or used more than one spatial light modulator (SLM) and/or bulky, complex optical setups. The reduced frame rate of sequential color introduces distracting judder and color fringing in the presence of head motion while the form factor of current simultaneous color systems is incompatible with a head-mounted display. In this work, we propose a framework for simultaneous color holography that allows the use of the full SLM frame rate while maintaining a compact and simple optical setup. Simultaneous color holograms are optimized through the use of a perceptual loss function, a physics-based neural network wavefront propagator, and a camera-calibrated forward model. We measurably improve hologram quality compared to other simultaneous color methods and move one step closer to the realization of color holographic displays for AR/VR.
Eric Markley, Nathan Matsuda, Florian Schiffers, Oliver Cossairt, Grace Kuo
2023-03-20T17:23:02Z
http://arxiv.org/abs/2303.11287v2
# Simultaneous Color Holography ###### Abstract. Computer generated holography has long been tested as the future of augmented and virtual reality (AR/VR) displays, but has yet to be realized in practice. Previous high-quality, color holographic displays have either made a 3\(\times\) sacrifice on frame rate by using a sequential illumination scheme or have made use of multiple spatial light modulators (SLM) and/or bulky, complex optical setups. The reduced frame rate of sequential color introduces distracting judder and color fringing in the presence of head motion while the form factor of current simultaneous color systems is incompatible with a head-mounted display. In this work, we propose a framework for simultaneous color holography that allows the use of the full SLM frame rate while maintaining a compact and simple optical setup. State-of-the-art hologram quality is achieved through a perceptual loss function, a physics-based neural network wavefront propagator, and a camera-calibrated forward model. We measurably improve hologram quality compared to other simultaneous color methods and move one step closer to the realization of color holographic displays for AR/VR. Eric Markley, Nathan Matsuda, Florian Schiffers, Oliver Coissart, and Grace Kuo. 2023. Simultaneous Color Holography. 1, 1, Article 1 (March 2023), 20 pages. + Footnote †: journal: Computer generated holography has long been tested as the future of augmented and virtual reality (AR/VR) displays, but has yet to be realized in practice. Previous high-quality, color holographic displays have either made a 3\(\times\) sacrifice on frame rate by using a sequential illumination scheme or have made use of multiple spatial light modulators (SLM) and/or bulky, complex optical setups. The reduced frame rate of sequential color introduces distracting judder and color fringing in the presence of head motion while the form factor of current simultaneous color systems is incompatible with a head-mounted display. In this work, we propose a framework for simultaneous color holography that allows the use of the full SLM frame rate while maintaining a compact and simple optical setup. State-of-the-art hologram quality is achieved through a perceptual loss function, a physics-based neural network wavefront propagator, and a camera-calibrated forward model. We measurably improve hologram quality compared to other simultaneous color methods and move one step closer to the realization of color holographic displays for AR/VR. + Footnote †: journal: Computer generated holography has long been tested as the future of augmented and virtual reality (AR/VR) displays, but has yet to be realized in practice. Previous high-quality, color holographic displays have either made a 3\(\times\) sacrifice on frame rate by using a sequential illumination scheme or have made use of multiple spatial light modulators (SLM) and/or bulky, complex optical setups. The reduced frame rate of sequential color introduces distracting judder and color fringing in the presence of head motion while the form factor of current simultaneous color systems is incompatible with a head-mounted display. In this work, we propose a framework for simultaneous color holography that allows the use of the full SLM frame rate while maintaining a compact and simple optical setup. State-of-the-art hologram quality is achieved through a perceptual loss function, a physics-based neural network wavefront propagator, and a camera-calibrated forward model. We measureably improve hologram quality compared to other simultaneous color methods and move one step closer to the realization of color holographic displays for AR/VR. ## 1. Introduction Holographic displays are a promising technology for augmented and virtual reality (AR/VR). Such displays use a spatial light modulator (SLM) to shape an incoming coherent wavefront so that it appears as though the wavefront came from a real, three-dimensional (3D) object. The resulting image can have natural defocus cues, providing a path to resolve the vergence-accommodation conflict of stereoscopic displays [14]. Additionally, the fine-grain control offered by holography can also correct for optical aberrations, provide custom eyeglass prescription correction in software, and enable compact form-factors [16], while improving light efficiency compared traditional LCD or OLED displays [15, 17]. Recent publications have demonstrated significant improvement in hologram image quality [18, 19, 20] and computation time [2, 21], bringing holographic displays one step closer to practicality. However, color holography for AR/VR has remained an open problem. Traditionally, red-green-blue (RGB) holograms are created through _field sequential color_, where a separate hologram is computed for each of the three wavelengths and displayed in sequence and synchronized with the color of the illumination source. Due to persistence of vision, this appears as a single full color image if the update is sufficiently fast, enabling color holography for static displays. However, in a head-mounted AR/VR system displaying world-locked content, framerate requirements are higher to prevent noticeable judder [15]. Furthermore, field sequential color can lead to visible spatial separation of the colors, particularly when the user rotates their head while tracking a fixed object with their eyes [22]. Although these negative effects can be mitigated with high framerate displays, the most common SLM technology for holography, liquid-crystal-on-silicon (LCoS), is quite slow due to the physical response time of the liquid crystal (LC) layer [23]. Although most commercial LCoS SLMs can be driven at 60 Hz, at that speed the SLM will have residual artifacts from the prior frames [13]. Micro-electro-mechanical system (MEMS) SLMs can be much faster (in the kilobertz range) but so far have larger pixels and limited bit depth [18, 19]. In this work, we aim to display RGB holograms using only a single SLM pattern, enabling a 3\(\times\) increase in framerate compared to sequential color and removing color fringing artifacts in the presence of head motion. Our compact setup does not use a physical filter in the Fourier plane or bulky optics to combine color channels. Instead, the full SLM is simultaneously illuminated by an on-axis RGB source, and we optimize the SLM pattern to form the three color image. We design a flexible framework for end-to-end optimization of the digital SLM input from the target RGB intensity, allowing us to optimize for SLMs with extended phase range, and we develop a color-specific perceptual loss function which further improves color fidelity. Our method is validated experimentally on 2D and 3D content. Specifically, we make the following contributions: * We introduce a novel algorithm for generating simultaneous color holograms which takes advantage of the extended phase range of the SLM in an end-to-end manner and uses a new loss function based on human color perception. * We analyze the "depth replicas" artifact in simultaneous color holography and demonstrate how these replicas can be mitigated with extended phase range. * We demonstrate high quality experimental simultaneous color holograms in both 2D and 3D using a custom camera-calibrated model. ## 2. Related Works Field Sequential ColorThe vast majority of color holographic displays use field sequential color in which the SLM is sequentially illuminated by red, green, and blue sources while the SLM pattern is updated accordingly (Chakravarthula et al., 2022, 2019, 2020; Choi et al., 2021, 2021, 2018; Li et al., 2016; Maimone et al., 2017; Peng et al., 2021, 2020; Shi et al., 2021; Yang et al., 2022). Field sequential color is effective at producing full color holograms but reduces framerate by a factor of 3\(\times\). This is a challenge for LCoS SLMs where framerate is severely limited by the LC response time (Zhang et al., 2014). Although, SLMs based on MEMS technology can run at high framerates in the kilohertz range (Duerr et al., 2021), so far these modulators are maximum 4-bit displays, with most being binary (Choi et al., 2022; Kim et al., 2022; Lee et al., 2022). Even with high framerate modulators, it may be worthwhile to maintain the full temporal bandwidth, since the extra bandwidth can be used to address other holography limitations. For example, speckle can be reduced through temporal averaging (Choi et al., 2022; Kim et al., 2022; Lee et al., 2022), and limited etendue can be mitigated through pupil scanning (Jang et al., 2018; Kim et al., 2022). Spatial MultiplexAn alternate approach is spatial multiplexing, which maintains the native SLM framerate by using different regions of the SLM for each color. Most prior works in this area use three separate SLMs and an array of optics to combine the wavefronts (Nakayama et al., 2010; Shiraki et al., 2009; Yaras et al., 2009). Although this method produces high quality holograms, the resulting systems are bulky, expensive, and require precise alignment, making them poorly suited for near-eye displays. Spatial multiplexing can also be implemented with a single SLM split into sub-regions (Makowski et al., 2011, 2009); while less expensive, this approach still requires bulky combining optics and sacrifices space-bandwidth product (SBP), also known as etendue. Etendue is already a limiting factor in holographic displays (Kuo et al., 2020), and further reduction is undesirable as it limits the range of viewing angles or display field-of-view. Frequency Multiplex.Rather than split the physical extent of the SLM into regions, frequency multiplexing assigns each color a different region in the frequency domain, and the colors are separated with a physical color filter at the Fourier plane of a 4\(f\) system (Lin et al., 2019; Lin and Kim, 2017; Makowski et al., 2010). A variation on this idea uses different angles of illumination for each color so that the physical filter in Fourier space is not color-specific (Xue et al., 2014). Frequency multiplexing can also be implemented with white light illumination, which reduces speckle noise at the cost of resolution (Kozacki and Chlipala, 2016; Yang et al., 2019). However, all of these techniques involve filtering in Fourier space, which sacrifices system etendue and requires a bulky 4\(f\) system. Depth Division and Bit Division for Simultaneous ColorThe prior methods most closely related to our work also use simultaneous RGB illumination over the SLM, maintain the full SLM etendue, and don't require a bulky 4\(f\) system (Pi et al., 2022). We refer to the first method as _depth division multiplexing_ which takes advantage of the ambiguity between color and propagation distance (explained in detail in Sec. 3.1) and assigns each color a different depth (Makowski et al., 2010, 2008). After optimizing with a single color for the correct multiplane image, the authors show they can form a full color 2D hologram when illuminating in RGB. However, this approach does not account for wavelength dependence of the SLM response, and since it explicitly defines content at multiple planes, it translates poorly to 3D. Another similar approach is _bit division multiplexing_, which takes advantage of the extended phase range of LCoS SLMs (Jesacher et al., 2014). The authors calibrate an SLM lookup-table consisting of phase-value triplets (for RGB) as a function of digital SLM input, and they note that SLMs with extended phase range (up to 10\(\pi\)) can create substantial diversity in the calibrated phase triplets. After pre-optimizing a phase pattern for each color separately, the lookup-table is used on a per-pixel basis to find the digital input that best matches the desired phase for all colors. In our approach, we also use an extended SLM phase range for the same reason, but rather than using a two-step process, we directly optimize the output hologram. This flexible framework also allows us to incorporate a perceptual loss function to further improve perceived image quality. Algorithms for Hologram GenerationOur work builds on a body of literature applying iterative optimization algorithms to holographic displays. Perhaps most popular is the Gerchberg-Saxton (GS) method (Gerchberg, 1972), which is effective and easy to implement, but does not have an explicitly defined loss function, making it challenging to adapt to specific applications. Zhang et al. (2017) and Chakravarthula et al. (2019) were the first to explicitly formulate the hologram generation problem in an optimization framework. This framework has been very powerful, enabling custom loss functions (Choi et al., 2022) and flexible adaptation to new optical configurations (Choi et al., 2021; Gopakumar et al., 2021). In particular, perceptual loss functions can improve the perceived image by taking aspects of human vision into account, such as human visual acuity (Kuo et al., 2020), foveated vision (Walton et al., 2022), and sensitivity to spatial frequencies during accommodation (Kim et al., 2022). Like these prior works, we use an optimization-based framework which we adapt to account for the wavelength-dependence of the SLM; this also enables our new perceptual loss function for color, which is based on visual acuity difference between chrominance and luminance channels. Camera-Calibration of Holographic DisplaysWhen the model used for hologram generation does not match the physical system, deviations cause artifacts in the experimental holograms. Recently, several papers have addressed this issue using measurements from a camera in the system for calibration. Peng et al. (2020) proposed using feedback from the camera to update the SLM pattern for a particular image; although a single image can be improved, it does not extend to new content. A more flexible approach uses pairs of SLM patterns and camera captures to estimate the learnable parameters in a model, which is then used for offline hologram generation. Learnable parameters can be physically-based (Chakrabarti, 2016; Kavakli et al., 2022; Peng et al., 2020), black box CNNs (Choi et al., 2021), or a combination of both (Choi et al., 2022). The choice of learnable parameters effects the ability of the model to match the physical system; we introduce a new parameter for modeling SLM cross talk and tailor the CNN architecture for higher diffraction orders from the SLM. ## 3. Simultaneous Color Holography A holographic image is created by a spatially coherent illumination source incident on an SLM. The SLM imparts a phase delay on the electric field; after light propagates some distance, the intensity of the electric field forms an image. Our goal in this work is to compute a single SLM pattern that simultaneously creates a three color RGB hologram. For instance, when the SLM is illuminated with a red source, the SLM forms a hologram of the red channel of an image; with a green source the same SLM pattern forms the green channel; and with the blue source it creates the blue channel. We propose a flexible optimization-based framework (Fig. 2) for generating simultaneous color holograms. We start with a generic model for estimating the hologram from the digital SLM pattern, \(s\), as a function of illumination wavelength, \(\lambda\): \[g_{\lambda} =e^{i\phi_{\lambda}(s)} \tag{1}\] \[I_{\mathcal{L},\lambda} =\left|f_{\text{prop}}\left(g_{\lambda},z,\lambda\right)\right|^{2}. \tag{2}\] Here, \(\phi_{\lambda}\) is a wavelength-dependent function that converts the 8 bit digital SLM pattern to a phase delay, \(g_{\lambda}\) is the electric field coming off the SLM, \(f_{\text{prop}}\) represents propagation of the electric field, and \(I_{\mathcal{L},\lambda}\) is the intensity a distance \(z\) from the SLM. To calculate the SLM pattern, \(s\), we can solve the following optimization problem \[\operatorname*{argmin}_{s}\sum_{z}\mathcal{L}\left(\hat{I}_{z, \lambda_{r}},I_{z,\lambda_{r}}\right)+\mathcal{L}\left(\hat{I}_{z,\lambda_{g} },I_{z,\lambda_{g}}\right)+\mathcal{L}\left(\hat{I}_{z,\lambda_{b}},I_{z, \lambda_{b}}\right), \tag{3}\] where \(\hat{I}\) is the target image, \(\mathcal{L}\) is a pixel-wise loss function such as mean-square error, and \(\lambda_{r},\lambda_{g},\lambda_{b}\) are the wavelengths corresponding to red, green, and blue respectively. Since the model is differentiable, we solve Eq. 3 with gradient descent. ### Color-Depth Ambiguity A common model for propagating electric fields is Fresnel propagation1(Goodman, 2005), which can be written in Fourier space as Footnote 1: Fresnel propagation is the paraxial approximation to the popular angular spectrum method (ASM). Since most commercial slams have pixel pitch greater than \(3\,\mathrm{\SIUnitSymbolMicro m}\), resulting in a maximum diffraction angle under \(5^{\circ}\) (well within the small angle approximation), Fresnel and ASM are almost identical for holography. \[f_{\text{fresnel}}(g,z,\lambda) =\mathcal{F}^{-1}\left\{\mathcal{F}\{g\}\cdot H(z,\lambda)\right\} \tag{4}\] \[H(z,\lambda) =\exp\left(i\pi z\left(f_{x}^{2}+f_{g}^{2}\right)\right) \tag{5}\] Figure 2. **Hologram optimization framework.** This figure illustrates the three key components of the simultaneous color optimization framework: an SLM model, a propagation model, and a perceptual loss function. The SLM model maps voltage values to a complex field using a learned cross-talk kernel and a linear lookup table. The complex wavefront from the SLM is then propagated to the sensor plane using a modified version of the model proposed by Gopakumar et al. (2021), which separates the zeroth and first diffraction orders and combines them through a U-Net. The output is then fed into the perceptual loss function, and gradients are calculated using Pytorch’s autograd implementation. The SLM voltages are then updated using these gradients. Rubik’s cube source image by lwan Gabovitch (CC BY 2.0). where \(\mathcal{F}\) is a 2D Fourier transform, \(H\) is the Fresnel propagation kernel, and \(f_{x}\), \(f_{y}\) are the spatial frequency coordinates. In Eq. 5, note that \(\lambda\) and \(x\) appear together, creating an ambiguity between wavelength and propagation distance. To see how this ambiguity affects color holograms, consider the case where \(\phi_{\lambda}\) in Eq. 1 is independent of wavelength (\(\phi_{\lambda}=\phi\)). For example, this would be the case if the SLM had a linear phase range from 0 to \(2\pi\) at every wavelength. Although this is unrealistic for most off-the-shelf SLMs, it is a useful thought experiment. Note that if \(\phi\) is wavelength-independent, then so is the electric field off the SLM (\(g_{\lambda}=g\)). In this scenario, assuming \(f_{\text{prop}}=f_{\text{fresnel}}\), the Fresnel kernel is the only part of the model affected by wavelength. Now assume that the SLM forms an image at distance \(z_{0}\) under red illumination. From the ambiguity in the Fresnel kernel, we have the following equivalence: \[H(z_{0},\lambda_{r})=H\left(\frac{\lambda_{g}}{\lambda_{r}}z_{0},\lambda_{g} \right)=H\left(\frac{\lambda_{b}}{\lambda_{r}}z_{0},\lambda_{b}\right). \tag{6}\] This means the _same_ image formed in red at \(z_{0}\) will also appear at \(z=z_{0}\lambda_{g}/\lambda_{r}\) when the SLM is illuminated with green and at \(z=z_{0}\lambda_{b}/\lambda_{r}\) when the SLM is illuminated with blue. We refer to these additional copies as "depth replicas," and this phenomena is depicted in Fig. 3. Note that depth replicas do not appear in sequential color holography, since the SLM pattern optimized for red is never illuminated with the other wavelengths. If we only care about the hologram at the target plane \(z_{0}\), then the depth replicas are not an issue, and in fact, we can take advantage of the situation for hologram generation: The SLM pattern for an RGB hologram at \(z_{0}\) is equivalent to the pattern that generates a three-plane red hologram where the RGB channels of the target are each at a different depth (\(z0\), \(z_{0}\lambda_{r}/\lambda_{g}\), and \(z_{0}\lambda_{r}/\lambda_{b}\) for RGB respectively). This is the basis of the depth division multiplexing approach of Makowski et al. (2010, 2008), where the authors optimize for this three-plane hologram in red, then illuminate in RGB. Although this makes the assumption that \(\phi\) does not depend on \(\lambda\), this connection between simultaneous color and multi-plane holography suggests simultaneous color should be possible for a single plane, since multi-plane holography has been successfully demonstrated in prior work. However, the ultimate goal of holography is to create 3D imagery, and the depth replicas could prevent us from placing content arbitrarily over the 3D volume. In addition, in-focus images can appear at depths that should be out-of-focus, which may prevent the hologram from successfully driving accommodation (Kim et al., 2022). We propose taking advantage of SLMs with extended phase range to mitigate the effects of depth replicas. ### SLM Extended Phase Range In general, the phase \(\phi_{\lambda}\) of the light depends on its wavelength, which was not considered in Sec. 3.1. Perhaps the most popular SLM technology today is LCoS, in which rotation of birefringent LC molecules causes a change in refractive index. The phase of light traveling through the LC layer is delayed by \[\phi_{\lambda}=\frac{2\pi d}{\lambda}n(s,\lambda), \tag{7}\] where \(d\) is the thickness of the LC layer, and its refractive index, \(n\), is controlled with the digital input \(s\). \(n\) also depends on \(\lambda\) due to dispersion (Jesacher et al., 2014). The wavelength dependence of \(\phi_{\lambda}\) presents an opportunity to reduce or remove the depth replicas. Even if the propagation kernel \(H\) is the same for several \((\lambda,z)\) pairs, if the phase, and therefore the electric field off the SLM, changes with \(\lambda\), then the output image intensity at the replica plane will also be different. As the wavelength-dependence of \(\phi_{\lambda}\) increases, the replicas are diminished. We can quantify the degree of dependence on \(\lambda\) by looking at the derivative \(d\phi/d\lambda\) which informs us that larger \(n\) will give \(\lambda\) more influence on the SLM phase. However, the final image intensity depends only on relative phase, not absolute phase; therefore, for the output image to have a stronger dependence on \(\lambda\), we desire larger \(\Delta n=n_{\text{max}}-n_{\text{min}}\). In addition, \(d\phi/d\lambda\) increases with \(-dn/d\lambda\), suggesting that more dispersion is helpful for simultaneous color. Although \(d\phi/d\lambda\) also depends on the absolute value of \(\lambda\), we have minimal control over this parameter since there are limited wavelengths corresponding to RGB. In summary, this means we can reduce depth replicas in simultaneous color with larger phase range on the SLM and higher dispersion. However, there is a trade-off: As the range of phase increases, the limitations of the bit depth of the SLM become more noticeable, leading to increased quantization errors. We simulate the effect of quantization on hologram quality and find that PSNR and SSIM are almost constant for 6 bits and above. This suggests that each \(2\pi\) range should have at least 6 bits of granularity. Therefore, we think that using a phase range of around \(8\pi\) for an 8-bit SLM will Fig. 3. **Extended phase range reduces depth replicas in simulation.** (A) Using an SLM with a uniform \(2\pi\) phase range across all channels leads to strong depth replicas (top row), which reduce image quality at the target plane compared to the target (bottom row) and add in-focus content at depths that should be defocused. By using the extended phase Holecy Pluto-2.1-Vis-016 SLM (with Red: \(2.4\pi\), Green: \(5.9\pi\), Blue: \(7.4\pi\) phase ranges), depth replicas are significantly reduced (middle row), improving the quality of target plane holograms and creating defocused content at other depths. (B) Schematic illustrating the positions of the replicate planes and target plane. Note, this simulation was generated using RGB images and three color channels, but only the green and blue channels are displayed for clarity. (Rubik’s cube source image by Iwan Gabovitch (CC BY 2.0). be the best balance between replica reduction and maintaining accuracy for hologram generation. Figure 3 simulates the effect of extended phase range on depth replica removal. While holograms were calculated on full color images, only two color channels are shown for simplicity. In the first row of Fig. 3, we simulate an SLM with no wavelength dependence to \(\phi\) (i.e. 0 - 2\(\pi\) phase range for each color). Consequently, perfect copies appear at the replica planes. In the second row, we simulate using the specifications from an extended phase range SLM (Holoeye Pluto-2.1-Vis-016), which has 2.4\(\pi\) range in red, 5.9\(\pi\) range in green, and 7.4\(\pi\) range in blue demonstrating that replicas are substantially diminished with an extended phase range. By reducing the depth replicas, the amount of high frequency out of focus light at the sensor plane is reduced leading to improved hologram quality. ### Perceptual Loss Function Creating an RGB hologram with a single SLM pattern is an overdetermined problem as there are 3\(\times\) more output pixels than degrees of freedom of the SLM. As a result, it may not be possible to exactly match the full RGB image, which can result in color deviations and de-saturation. To address this, we take advantage of color perception in human vision. There's evidence that the human visual systems converts RGB images into a luminance channel (a grayscale image) and two chrominance channels, which contain information about the color [23]. The visual system is only sensitive to high resolution features in the luminance channel, so the chrominance channels can be lower resolution with minimal impact on the perceived image [24]. This observation is used in JPEG compression [25] and subpixel rendering [15], but to our knowledge, it has never been applied to holographic displays. By allowing the unperceived high frequency chrominance and extremely high frequency luminance features to be unconstrained, we can better use the the degrees of freedom on the SLM to faithfully represent the rest of the image. Our flexible optimization framework allows us to easily change the RGB loss function in Eq. 3 to a perceptual loss. For each depth, we transform the RGB intensities of both \(\hat{I}\) (the target image) and \(I\) (the simulated image from the SLM) into opponent color space as follows: \[O_{1} =0.299\cdot I_{\lambda_{r}}+0.587\cdot I_{\lambda_{g}}+0.114\cdot I _{\lambda_{b}}\] \[O_{2} =I_{\lambda_{r}}-I_{\lambda_{g}}\] \[O_{3} =I_{\lambda_{b}}-I_{\lambda_{r}}-I_{\lambda_{g}} \tag{8}\] where \(O_{1}\) is the luminance channel, and \(O_{2}\), \(O_{3}\) are the red-green and blue-yellow chrominance channels, respectively. We can then update Eq. 3 to \[\begin{split}\operatorname*{argmin}_{s}\sum_{z}\Big{[}& \mathcal{L}\left(\hat{O}_{1}*k_{1},O_{1}*k_{1}\right)+\mathcal{L}\left(\hat{O }_{2}*k_{2},O_{2}*k_{2}\right)+\\ &\mathcal{L}\left(\hat{O}_{3}*k_{3},O_{3}*k_{3}\right)\Big{]}, \end{split} \tag{9}\] where \(*\) represents a 2D convolution with a low pass filter (\(k_{1}\ldots k_{3}\)) for each channel in opponent color space. \(\hat{O}_{i}\) and \(O_{i}\) are the \(i\)-th channel in opponent color space of \(\hat{I}\) and \(I\), respectively. In order to mimic the contrast sensitivity functions of the human visual system, we implement filters in the Fourier domain by applying a low-pass filter of 45% of the width of Fourier space to the chrominance channels (\(O_{2}\), \(O_{3}\)) and a filter of 75% of the width of Fourier space to the luminance channel (\(O_{1}\)). These filter widths were heuristically determined. By de-prioritizing high frequencies in chrominance and extremely high frequencies in luminance, the optimizer is able to better match the low frequency color. This low frequency color is what is perceivable by the human visual system. Figure 4 depicts the hologram quality improvement by optimizing with our perceptual loss function. The first column of Fig. 4 shows the perceptually filtered versions of simulated holograms generated using an RGB loss function (Fig 4A) and our perceptual loss function (Fig 4B). The second column displays the original target image (Fig 4C) and the perceptually filtered target image (Fig 4D). It can be observed that the two targets are indistinguishable, indicating that our perceptual filter choices align well with the human visual system. The PSNR and SSIM values are higher for the perceptually optimized hologram, and it also appears less noisy and with better color fidelity. This suggests that the loss function has effectively shifted most of the error into imperceptible regions of the opponent color space. We see an average PSNR increase of 6.4 dB and average increase of 0.266 in SSIM across a test set of 294 images. ### Simulation Comparisons In Figure 5 we compare the performance of our method to the depth and bit division approach to simultaneous color holography. Depth Figure 4. **Perceptual loss improves color fidelity and reduces noise in simulation**. The first column of this figure depicts simulated holograms that were optimized with an RGB loss function (A) and our perceptual loss function (B). The same filters for the perceptual loss function then were applied to both of these simulated holograms as well as the target image. Image metrics were calculated between the filtered holograms and the filtered target image (D). All image metrics are better for the perceptually optimized hologram (B). One should also note that the filtered target (D) and original target (C) are indistinguishable suggesting our perceptual loss function only removes information imperceptible by the human visual system as intended. and bit division use only a single SLM, make use of the full space-time-bandwidth product of the SLM, and contain no bulky optics or filters making these methods the most similar to our method. The holograms simulated with depth and bit division are much noisier and have lower color fidelity than our proposed method. The depth division simulated hologram has the worst color fidelity due to to the replica planes discussed in Sec. 3.1 contributing defocused light at the target plane. Our method uses a perceptual loss function and the HOASM outlined by Gopakumar et al. (2021) to directly optimize the simultaneous color hologram, while comparison methods optimize indirectly. This direct approach produces less noisy holograms with better color fidelity. ## 4. Camera-calibrated model We've demonstrated that our algorithm can generate simultaneous color holograms in simulation. However, experimental holograms frequently do not match the quality of simulations due to mismatch between the physical system and the model used in optimization (Eqs. 1, 2). Therefore, to demonstrate simultaneous color experimentally, we need to calibrate the model to the experimental system. To do this, we design a model based on our understanding of the system's physics, but we include several learnable parameters representing unknown elements. To fit the parameters, we capture a dataset of SLM patterns and camera captures and use gradient descent to estimate the learnable parameters based on the dataset. Next we explain the model which is summarized in Fig. 2. ### Learnable Parameters for Offline Calibration _Lookup Table._ A key element in our optimization is \(\phi_{\lambda}\) which converts the digital SLM input into the phase coming off the SLM, and it's important that this function accurately matches the behavior of the real SLM. Many commercial SLMs ship with a lookup-table (LUT) describing \(\phi_{\lambda}\); however, the manufacturer LUT is generally only calibrated at a few discrete wavelengths. These wavelengths may not be accurate for the source used in the experiment. Therefore, we learn a LUT for each color channel as part of the model. Based on a pre-calibration of the LUT using the approach of Yang et al. (2015), we observe the LUT is close to linear; we therefore parameterize the LUT with a linear model to encourage physically realistic solutions. _SLM Crosstalk._ SLMs are usually modeled as having a constant phase over each pixel with sharp transitions at boundaries. However, in LCoS SLMs, elastic forces in the LC layer prevent sudden spatial variations, and the electric field that drives the pixels changes gradually over space. As a result, LCoS SLMs suffer from crosstalk, also called field-fringing, in which the phase is blurred (Aper et al., 2004; Moser et al., 2019; Persson et al., 2012). We model crosstalk with a convolution on the SLM phase. Combined with our linear LUT described above, we can describe the phase off the SLM as \[\phi_{\lambda}(s)=k_{\text{xt}}*(a_{1}\cdot s+a_{2}) \tag{10}\] where \(a_{1}\), \(a_{2}\) are the learn parameters of the LUT, and \(k_{\text{xt}}\) is a learned \(5\times 5\) convolution kernel representing crosstalk. Separate values of these parameters are learned for each color channel. _Propagation with Higher Diffraction Orders._ The discrete pixel structure on the SLM creates higher diffraction orders that are not modeled well with ASM or Fresnel propagation. A physical aperture at the Fourier plane of the SLM can be used to block higher orders. However, accessing the Fourier plane requires a \(4f\) system, which adds significant size to the optical system, reducing the practicality for head-mounted displays. Therefore, we chose to avoid additional lenses after the SLM and instead account for higher orders in the propagation model. We adapt the higher-order angular spectrum model (HOASM) of Gopakumar et al. (2021). The zero order diffraction, \(G(f_{\text{c}},f_{\text{g}})\), and first order diffraction, \(G_{\text{1st order}}\), patterns are propagated with ASM to the plane of interest independently. Then the propagated fields are stacked and passed into a U-net, which combines the zero and first orders and returns Figure 5. **Comparison of bit division, depth division and our method of simultaneous color holography in simulation.** Bit division (Col. 1) is noisier than our method but achieves comparable color fidelity, although more washed out. The depth division method (Col. 2) is also noisier than our method and has inferior color fidelity. Our method matches the target image well. Our method uses our perceptual loss function and a high order angular spectrum propagation model with no learned components. Further implementation details for each method are available in the supplement. the image intensity: \[f_{\text{ASM}}(G,z) =\mathcal{F}^{-1}\left\{G\cdot H_{\text{ASM}}(z)\right\} \tag{11}\] \[I_{z} =\text{Unet}\left(f_{\text{ASM}}(G,z),\ f_{\text{ASM}}(G_{\text{ 1st order}},z)\right), \tag{12}\] where \(H_{\text{ASM}}(z)\) is the ASM kernel. The U-Net architecture is detailed in the supplement; a separate U-net for each color is learned from the data. The U-Net helps to address any unmodeled aspects of the system that may affect the final hologram quality such as source polarization, SLM curvature, and beam profiles, and the U-net better models superposition of higher orders, allowing for more accurate compensation in SLM pattern optimization. Figure 8 compares ASM, HOASM, and our modified version with the U-Net. ## 5. Implementation Experimental SetupOur system starts with a fiber coupled RGB source, collimated with a 400 mm lens. The beam is aligned using two mirrors, passes through a linear polarizer and beamsplitter, reflects off the SLM (Holoeye-2.1-Vis-016), and passes through the beamsplitter a second time before directly hitting the color camera sensor with Bayer filter (FLIR GS3-U3-123S6C). As seen in Fig. 9, there's no bulky \(4f\) system between the SLM and camera sensor, which allows the setup to be compact, but requires modeling of higher diffraction orders. The camera sensor is on a linear motion stage, enabling a range of propagation distances from \(z=80\) mm to \(z=130\) mm. For our source, we use a superluminescent light emitting diode (SLED, Exalos EXC250011-00) rather than a laser due to its lower coherence, which has been demonstrated to reduce speckle in holographic displays (Deng and Chu, 2017). Although previous work showed state-of-the-art image quality by modeling the larger bandwidth of the SLED as a summation of coherent sources (Peng et al., 2021), we found the computational cost to be prohibitively high for our application due to GPU memory constraints. We achieved sufficient image quality while assuming a fully coherent model, potentially due to the U-net which is capable of simulating the additional blur we expect from a partially coherent source. Calibration ProcedureWe fit the learned parameters in our model (Eqs. 10 - 12) using a dataset captured on the experimental system. We pre-calculate 882 SLM patterns from a personally collected dataset of images using a naive ASM propagation model. Each SLM pattern is captured in 5 mm increments from \(z=90\) mm to 120 mm, resulting in a total of 6174 paired entries. The raw camera data is debayered and an affine transform is applied to align the image with the SLM (see Supplement for details). Model fitting is implemented in Pytorch using an L1 loss function between the model output and camera capture. To account for the camera color balance, we additionally learn a \(3\times 3\) color calibration matrix from the RGB simulated intensities to the captured color image. We train until convergence, which is typically reached between 50 and 100 epochs (2-3 days on Nvidia A6000 GPU). Hologram GenerationAfter training, we can generate holograms by solving Eq. 9 using the trained model for \(I_{z,\lambda}\), implemented with Pytorch's native auto-differentiation. The SLM pattern, \(s\), is constrained to the range where the LUT is valid (for example, 0 - 255); the values outside that range are wrapped after every optimization step. On the Nvidia A6000 GPU, it takes about two minutes to optimize a 2D hologram. Computation time for the optimization of a 3D hologram scales proportional to the number of depth planes. By employing a perceptual loss function, we have successfully addressed the difficult challenge of simultaneous color holography, as validated by experimental testing in 2D and 3D. Our work brings us closer to creating holographic near-eye displays. Figure 6: **Experimentally captured 2D holograms.** This figure depicts experimentally captured holograms at a depth of \(120\,\mathrm{mm}\). Row one contains the experimentally captured images. Row two is the simulation output of the optimized SLM pattern. Row three contains the target images. While most the of captured holograms have good color fidelity, our method is least effective on highly saturated images with low texture, such as the cat paws in column 4, which represents a limitation or our method (see Sec. 7). Figure 7: **Experimentally captured focal stack.** This figure depicts a focal stack captured from \(90\,\mathrm{mm}\) to \(120\,\mathrm{mm}\) in \(10\,\mathrm{mm}\) increments. Row one contains the experimentally captured images. Row two is the simulation output of the optimized SLM pattern. Row three contains the target images. [MISSING_PAGE_POST]
2307.03374
STG-MTL: Scalable Task Grouping for Multi-Task Learning Using Data Map
Multi-Task Learning (MTL) is a powerful technique that has gained popularity due to its performance improvement over traditional Single-Task Learning (STL). However, MTL is often challenging because there is an exponential number of possible task groupings, which can make it difficult to choose the best one because some groupings might produce performance degradation due to negative interference between tasks. That is why existing solutions are severely suffering from scalability issues, limiting any practical application. In our paper, we propose a new data-driven method that addresses these challenges and provides a scalable and modular solution for classification task grouping based on a re-proposed data-driven features, Data Maps, which capture the training dynamics for each classification task during the MTL training. Through a theoretical comparison with other techniques, we manage to show that our approach has the superior scalability. Our experiments show a better performance and verify the method's effectiveness, even on an unprecedented number of tasks (up to 100 tasks on CIFAR100). Being the first to work on such number of tasks, our comparisons on the resulting grouping shows similar grouping to the mentioned in the dataset, CIFAR100. Finally, we provide a modular implementation for easier integration and testing, with examples from multiple datasets and tasks.
Ammar Sherif, Abubakar Abid, Mustafa Elattar, Mohamed ElHelw
2023-07-07T03:54:26Z
http://arxiv.org/abs/2307.03374v2
# STG-MTL: Scalable Task Grouping ###### Abstract Multi-Task Learning (MTL) is a powerful technique that has gained popularity due to its performance improvement over traditional Single-Task Learning (STL). However, MTL is often challenging because there is an exponential number of possible task groupings, which can make it difficult to choose the best one, and some groupings might produce performance degradation due to negative interference between tasks. Furthermore, existing solutions are severely suffering from scalability issues, limiting any practical application. In our paper, we propose a new data-driven method that addresses these challenges and provides a scalable and modular solution for classification task grouping based on hand-crafted features, specifically Data Maps, which capture the training behavior for each classification task during the MTL training. We experiment with the method demonstrating its effectiveness, even on an unprecedented number of tasks (up to 100). Machine Learning, ICML ## 1 Introduction Multi-Task Learning (MTL) has emerged as a powerful technique in deep learning (Zhang and Yang, 2022; Crawshaw, 2020) that allows for joint training of multiple related tasks, leading to improved model performance compared to traditional Single-Task Learning (STL). By leveraging shared representations and knowledge across tasks, MTL enhances generalization and mitigates overfitting. Furthermore, MTL promotes faster learning of related tasks and alleviates the computational requirements of deep learning, making it particularly valuable in scenarios with limited task-specific data. That is why MTL has gained significant attention in various domains, including computer vision (Fan et al., 2017; Misra et al., 2016; Standley et al., 2020), natural language processing (Zhang et al., 2022; Peng et al., 2020; Jin et al., 2020; Bickel et al., 2008), speech recognition (Huang et al., 2022; Zhang et al., 2019), and healthcare (Peng et al., 2020; Bao et al., 2022; Islam et al., 2021), and has shown promising results in improving accuracy, robustness, and efficiency. However, effectively harnessing the potential of MTL poses several challenges, including the identification of optimal task groupings (Song et al., 2022; Fifty et al., 2021; Standley et al., 2020) and the management of negative interference between tasks (Sener and Koltun, 2018; Wu et al., 2020; Maninis et al., 2019). The task grouping problem in MTL is particularly challenging due to the exponential number of possible task combinations (Aribandi et al., 2021; Fifty et al., 2021; Standley et al., 2020; Song et al., 2022). What makes it worse for the exhaustive search is that each trial involves a complete training and evaluation procedure, leading to computational and optimization burden. Moreover, inappropriate task groupings may result in performance degradation due to negative transfer between tasks (Sener and Koltun, 2018; Wu et al., 2020; Maninis et al., 2019). Existing solutions have struggled to address these challenges, often suffering from scalability and modularity issues, making their practical application in real-world scenarios nearly infeasible. In this paper, we propose a novel data-driven method for task grouping in MTL for classification tasks, which overcomes the scalability and modularity limitations. Our method utilizes the concept of Data Maps (Swayamdipta et al., 2020), hand-crafted features that capture the training behavior of each classification task during MTL training. By analyzing these data maps, we can identify task groupings, both hard and soft ones, that promote positive transfer and mitigate negative interference as much as possible. We demonstrate the effectiveness of our method through extensive experimentation, including experiments on an unprecedented number of tasks, scaling up to 100 tasks to emphasize the practicality of our approach. The contributions of this paper can be summarized as follows: * We propose a novel data-driven method for task grouping in MTL, addressing the challenges of scalability and modularity. * We propose a mechanism that utilizes our soft-grouping results, enabling model specialization via loss weighting. * We conduct extensive experiments, demonstrating the effectiveness of our method, even on a large number of tasks (scaling up to 100 classification tasks). ## 2 Related Work MTL has been extensively studied to leverage the benefits of information sharing among related tasks, which can serve as an inductive bias to improve modeling performance (Caruana, 1997; Zhang and Yang, 2022). Another perspective on MTL is that it enables more efficient utilization of the model capacity by focusing on learning relevant features and reducing the impact of irrelevant signals, which contributes to overfitting, leading to better generalization. However, when tasks lack shared information, they compete for the limited model capacity, resulting in performance degradation (Sener and Koltun, 2018; Wu et al., 2020; Maninis et al., 2019). To address this challenge, task grouping has emerged as a promising solution to identify subsets of tasks that can be trained together, avoiding negative interference and promoting improved performance. Traditionally, the decision of task grouping has been approached through costly cross-validation techniques or human expert knowledge (Zhang and Yang, 2022). However, these methods have limitations when applied to different problem domains and may not scale well. Some attempts have been made to approach the problem differently enabling the models to automate the search over which parameters to share among particular tasks (Zhang et al., 2022; Misra et al., 2016). Methods such as Neural Architecture Search (Liu et al., 2018; Huang et al., 2018; Chen et al., 2023; Zhang et al., 2022; Sun et al., 2020; Vandenhende et al., 2019), Soft-Parameter Sharing (Ruder et al., 2019; Long et al., 2017; Misra et al., 2016), and asymmetric information transfer (Lee et al., 2016, 2018; Huang et al., 2022) have been developed. However, these models often exhibit poor generalization and struggle to perform well on diverse tasks and domains. Besides, they often require a large model capacity and do not thus scale well with a large number of tasks. Therefore, gradient-based approaches (Fifty et al., 2021; Strezoski et al., 2019) have also been explored to determine task grouping in advance. The Task Affinity Grouping (TAG) approach (Fifty et al., 2021), which leverages gradients to determine task similarity, is an example of such an approach. Nevertheless, it has complex training paradigm and requires \(\Theta(N^{2})\) more forward and backward passes to compute the inter task affinities, putting an issue with scalability even if we enhance the solution's modularity. Another method, called Higher-Order Approximation (HOA) (Standley et al., 2020), reduces the exponential number of MTL training, from the exhaustive search, by considering only the quadratic pairs of task combinations. However, even with such relaxation, the scalability of HOA remains limited, particularly when dealing with a large number of tasks. The task grouping problem has been addressed in recent studies through a Meta-Learning approach (Song et al., 2022), aiming to create a meta-learner that can estimate task grouping gains. Nevertheless, the computational demands of this approach pose practical challenges for real-world applications; it requires training MTL networks for every chosen task combination in the training set for multiple iterations. It furthermore outputs all the possible gains of every task combination, whose numbers grow exponentially, and runs a search algorithm over these exponentially growing gains to find the optimal grouping. As a result, the scalability of this solution is severely limited, making it less feasible for a larger number of tasks. ## 3 Task Clustering using Data Maps Now, we elaborate in the components in our method in the next sections. We start with stating the notations we will use along with our MTL architecture we are using in our experiments in Section 3.1. Then, we move on to illustrate the data maps, which is crucial component of our method in Section 3.2. In Section 3.3, we talk regarding the approaches we use to cluster the tasks. We also introduce our evaluation mechanism of our task grouping in Section 3.4. Finally, we conclude this part with a simple theoretical comparison of our method and the literature from the perspective of scalability and modularity in Section 3.5. Figure 1 provides an overview of our method. ### Preliminaries **Notations** In our paper, we use the following notations consistently. The set of all tasks is denoted as \(T=\{T_{1},\dots,T_{n}\}\), where \(n\) represents the number of tasks and \(|T|=n\). The total number of training data points is denoted as \(N\). We calculate the data maps at specific epochs, and the set of epochs is represented as \(E=\{E_{1},\dots,E_{k}\}\), where \(E_{i}\) corresponds to the \(i^{th}\) epoch. The task clusters are denoted by \(C=\{C_{1},\dots,C_{m}\}\), and each cluster \(C_{i}\) has an associated centroid \(c_{i}\). The participation of each task \(i\) in cluster \(j\) is represented by \(w_{i,j}\), with the constraint that \(\sum_{j=0}^{|C|}w_{i,j}=1\), indicating the percentage of membership; \(W_{i}\) is the weight vector of the tasks in cluster \(j\). The values of \(w_{i,j}\) range from 0 to \(1\), where \(1\) signifies full membership and 0 indicates no membership. **MTL Architecture** The MTL procedure for a given task combination, consisting of \(\tau\) tasks denoted as \(\{T_{a_{1}},\dots,T_{a_{\tau}}\}\), is defined as training with a joint objective for these tasks (Equation 1) where \(L\) is the accumulated loss value of the cluster, \(L_{k}\) is the loss of the \(k^{th}\) task, and \(w_{k}\in[0,1]\) is an optional task weight of the \(k^{th}\) task. \[L=\sum_{k=1}^{\tau}w_{k}\cdot L_{k} \tag{1}\] Following the previous approaches (Fifty et al., 2021; Standley et al., 2020; Song et al., 2022), we utilize a commonly employed hard-sharing multi-head architecture (Figure 1) for all our MTL experiments, where a single feature extractor is used to obtain shared representations, and separate task-specific heads are employed to output the result. Additionally, for all the experiments, we maintain the same data splits, via prior seeding, and keep the optimization algorithm and other hyperparameters fixed; this is to make sure any variability in the performance is only attributed to the task grouping and the corresponding weights if any. ### Data Maps as Task Features Data Maps (Swayamdipta et al., 2020), originally developed as a model-based tool for characterizing and diagnosing NLP datasets, serve as a valuable component in our approach. They leverage the model behavior concerning individual data point instances of the training set for each task. In our work, we employ Data Maps as task features due to their simplicity, scalability, and ability to extract them on the fly without prior knowledge of the model architecture, thus enhancing the modularity of our approach. The concept behind Data Maps revolves briefly around extracting two essential values for each data point: the _model confidence_ (\(\mu\)) of the true class, which is the average probability of the true class over some epochs, and the _variability_ (\(\sigma\)) of this confidence, which is the standard deviation of the true class probabilities over the same epochs. For a particular task, the data map shape is \((N,2)\) where \(N\) is the training size. Figure 2 shows an example of the resulting Data Map for an example task extracted from CIFAR10 dataset (Krizhevsky et al., 2009). Because their information is very task-dependent, we thought they can serve as task descriptors. To further enhance the expressiveness of the extracted features, we also extract data maps at various epochs, allowing us to gain insights into their evolution over time; the resulting shape in such case is \((|T|,|E|,N,2)\). Therefore, by analyzing their characteristics, over the different epochs during training, we can capture crucial information about the relatedness of each task. In the extraction of data maps, we employ two approaches. The first approach involves building a single MTL model that incorporates all tasks and extracting the data maps directly from this unified model. Alternatively, we utilize the second approach, where individual models are constructed for each task, resulting in multiple STLs, and merging the data maps obtained from each model. Our results are primarily based on the first approach, as it offers the advantage Figure 1: **Overview** of our method to cluster the tasks using Data Maps. \((1)\) we use a single Multi-head Multi-Task Learning architecture to jointly train all the tasks. Each head is task-specific layers. \((2)\) we extract the data maps of all the tasks across the epochs in \(E\). \((3)\) we use the data maps to cluster the tasks using kmeans and generate the memberships according to Equation 3. \((4)\) to evaluate our clustering results, we train \(m\) models where each model represents a cluster focusing on particular tasks using the memberships as loss weights. of a single training procedure, simplifying computational complexity, and streamlining experimentation, while having the same qualitative results as the STL. ### Task Clustering With the extracted data maps in hand, our next step is to group the tasks into clusters based on their similarity. We propose three distinct approaches for task clustering: soft clustering, hard clustering, and point-based soft clustering. In both hard and soft clustering, we represent each task as a vector by concatenating the corresponding data maps. In the case of hard clustering, we employ the k-means algorithm (Lloyd, 1982) to cluster these task vectors, aiming to identify distinct clusters of tasks. To introduce a more nuanced representation of task similarities, we incorporate a modified version of the fuzzification step (Equation 3), from (Bezdek et al., 1984), into our approach, which enables soft clustering, where \(x_{i}\) represents the \(i^{th}\) task vector of the corresponding data maps and \(F>1\) represents the fuzzification index. This fuzzification process assigns soft memberships to tasks, allowing for more flexible and comprehensive clustering results. We predominantly rely on the soft clustering approach due to its effectiveness and reliability. \[w_{i,j} =\frac{1}{\sum_{k=1}^{|T|}\left(\frac{\|x_{i}-c_{j}\|}{\|x_{i}-c_ {k}\|}\right)^{\frac{2}{p-1}}} \tag{2}\] \[=\frac{\|x_{i}-c_{j}\|^{\frac{-2}{p-1}}}{\sum_{k=1}^{|T|}\left(\| x_{i}-c_{k}\|\right)^{\frac{p-2}{p-1}}} \tag{3}\] In early experiments, though, we also explored a point-based clustering approach to determine the participation membership of tasks. This approach involves clustering each instance point per task within each data map. Each data point then serves as a vote for its corresponding task, and the participation membership of each task is calculated based on the percentage of data points within the cluster (Equation 4) where \(d^{i}_{k,e}\) represents the \(k^{th}\) data point of the data map taken at epoch \(e\) for task \(i\). \[w_{i,j}=\frac{\sum_{k=1}^{N}\sum_{e=1}^{|E|}[d^{i}_{k,e}\in C_{j}]}{|E|\cdot N} \tag{4}\] However, we do not heavily rely on this point-based approach in our method. This is because it treats the data points within a data map in isolation, failing to capture the abstract behavior specific to each task and overlooking the evolution of data maps over different epochs. ### Model Specialization through Loss Weighting In order to assess the effectiveness of our task grouping results, we use loss weighting as a method of model specialization. We construct MTL models that are tailored to specialize in specific sets of tasks based on the membership weights obtained from soft clustering results. For each cluster, we build an individual MTL model that focuses on the tasks assigned to that cluster according to their corresponding weights (Equation 1). To evaluate the performance of our solution, we apply the weighted average of the models' outputs according to the membership weights as in Equation 5, where \(O\) is our final output, \(O_{k}\) is the output according to the \(k^{th}\) cluster, and \(W_{k}\) is the weight vector of the Figure 3: The procedure to use our specialized trained models to infer the results Figure 2: An example of a generated data map for the “Living being” task after 21 epochs of co-training on 15 tasks of G2 (Section 4.1) cluster. Figure 3 provides an overview of these operations while inferring the output. By comparing the resulting values with both STL and traditional MTL schemes, we can gain insights into the benefits and improvements brought by our task grouping approach. \[O=\sum_{k=1}^{m}W_{k}\cdot O_{k} \tag{5}\] ### Theoretical Scalability Comparison In the theoretical scalability comparison, we evaluate our method against existing literature, focusing on the number of models required for clustering. Table 1 presents the comparison, where lower numbers indicate better scalability. Our approach stands out with excellent scalability, as it only necessitates training a single MTL model to extract data maps and perform clustering, or \(\mathcal{O}(N)\) if we consider extracting data maps from STL models. This offers the most promising scalability potential for a larger number of tasks. That is why we can scale our experiments to a very large number of tasks as in Section 4. Notice TAG requires one single MTL training, yet this is a customized training procedure where each epoch is effectively processed \(\Theta(N^{2})\), \(\binom{N}{2}\) in particular, times to compute the inter-task affinities pairs, which like the other methods limits its scalability. Therefore, one MTL training of TAG utilizes the same compute of \(\Theta(N^{2})\) MTL models trained within the other methods normally. Furthermore, our method's data map computation is performed on the fly, making it both model and task agnostic. This feature enhances the modularity of our approach, enabling effortless adaptation to different model architectures and tasks without manual intervention. ## 4 Experiments In this section, we present a comprehensive overview of our experiments, focusing on assessing the effectiveness of our method and presenting the corresponding results. Section 4.1 outlines the specifics of the datasets utilized in our experimentation, as well as the tasks employed. In Section 4.2, we delve into the details of the model architecture and the hyperparameters used during experimentation. The outcomes of the soft clustering of tasks are presented in Section 4.3, where we highlight the effectiveness of our approach in grouping related tasks. Finally, in Section 4.4, we evaluate the quality of the obtained clustering results comparing them to STL and MTL results. ### Datasets and Tasks Our task generation is based on the CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). We define three groups of tasks for our experiments. In Group 1 (**G1**), we include binary classification tasks that determine whether an image belongs to a specific label in CIFAR10 or not. G1 consists of 10 tasks: {airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck}. Group 2 (**G2**) expands on G1 by introducing additional tasks on CIFAR10. These tasks include {Living being, Odd-numbered, Downside, Not living being, random}. The "Living being" task aims to detect whether an image contains a living being, which includes images with the labels {bird, cat, deer, dog, frog, horse}. Similarly, the "Not living being" task focuses on identifying non-living beings; these are { airplane, automobile, ship, truck} classes in CIFAR10. Notably, "Living being" and "Not living being" are intentionally designed to be similar tasks for testing purposes. The "Odd-numbered" task identifies whether the label of a CIFAR10 image is odd or not, encompassing {automobile, cat, dog, horse, truck} classes. Additionally, we flip half of CIFAR10 images and create a task to train the model to recognize vertically flipped images, the "Downside" task. Lastly, the "random" task assigns random binary labels to the entire dataset with a predefined seed for consistency and reproducibility. It is worth mentioning that while the original tasks in G1 are imbalanced, the extra tasks in G2 are all balanced ones. Group 3 (**G3**), similar to G1, consists of 100 binary classification tasks using the CIFAR100 labels. We also utilize the \(20\) super labels of CIFAR100 as our ground truth for task clustering evaluation. It is worth mentioning that CIFAR100 super labels are not intended for task grouping, so they are not grouped based on visual similarities like our method's objective. Instead, they are mostly clustered semantically, even though there are some exceptions like mushrooms and the classes of vehicles 1 and 2. Still, we think they serve as an informative indicator of the effectiveness of our approach, especially in the visually coherent superclasses. ### Model Architectures and Hyper-Parameters For all our experiments, we adopt the RESNET18 architecture (He et al., 2016) as our base model. Our method is \begin{table} \begin{tabular}{l c} \hline \hline Method & Effective MTL models (\(\downarrow\)) \\ \hline Exhaustive Search & \(\Theta\left(2^{N}\right)\) \\ HOA & \(\Theta\left(\binom{N}{2}\right)=\Theta\left(N^{2}\right)\) \\ TAG & \(\Theta\left(N^{2}\right)\) \\ MTG-Net & \(\Theta\left(N\cdot K\right)\) \\ \hline STG-MTL & \(\Theta\left(1\right)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of asymptotic growth of the equivalently trained number of MTL models to get task grouping of various methods and ours model-agnostic, so we have also experimented with models with much less complexity, yet we use RESNET18 considering it has moderate model capacity. Furthermore, we utilize it without any pre-training, ensuring that the model starts from scratch for each task grouping scenario. The last fully connected layer of RESNET18 serves as the task heads, with the number of output neurons corresponding to the number of tasks. Each neuron in the task heads represents a specific classification task. Throughout our experiments, the rest of the network, excluding the task heads, is shared among all tasks. Also, we train the model for \(50\) epochs in all our experiments: to extract data maps and to evaluate the models. Additionally, in our clustering process, we primarily set the fuzzification index (\(F\)) to \(2\), unless explicitly mentioned otherwise. The fuzzification index controls the level of fuzziness in the soft clustering algorithm, so increasing it produce softer decisions. In terms of the loss function, we utilize Binary Cross Entropy as the binary classification loss for our tasks. However, to address the issue of task imbalance, we incorporate a penalty on positive instances for each task. By applying this penalty, we ensure that the model pays more attention to the minority label during training, thereby mitigating the impact of the imbalance and promoting better overall performance. Finally, it worth mentioning that we do not perform any kind of tuning to any model. We use the same basic settings in all of our experiments. ### Task Clustering Results Results of our task clustering experiments are presented for all groups. We initially experimented on G2, generating their data maps as described in Section 3.2 and Clustering them as in Section 3.3, as depicted in Figure 4. Notably, our method successfully clustered the "random" task separately, indicating its dissimilarity to the other tasks. Furthermore, throughout all our experiments, the tasks "Living being" and "Not living being" consistently exhibited the same membership distribution, which is reasonable considering their equivalence. Moreover, when focusing solely on the first 10 tasks from G2 without any additional tasks, our clustering algorithms demonstrated some semantic clustering capabilities, as shown in Figure 3(b). The algorithm successfully grouped images of living beings, including {bird, cat, deer, dog, frog, horse}, while another group consisted of images of non-living beings such as {airplane, automobile, ship, truck}. Nevertheless, this might be due to the impact of the "Living being" and "Not living being" tasks; we therefore conducted a similar experiment on G1, generating their data maps and clustering the tasks, without any extra tasks. As illustrated in Figure 5, even without additional tasks, our method performed the same reasonable clustering for G1, grouping living beings together and non-living things together (a)a. Additionally, Figure 4(b) demonstrates the clustering using three clusters, revealing that the living being cluster was divided into two groups: cluster \(1\) and cluster \(2\). Cluster \(1\) predominantly contained quadruped animals {cat, dog, horse}, while cluster \(2\) included {bird, frog, deer} that represented the other living creatures except for the deer. These results showcase the effectiveness of our clustering algorithm in capturing semantic similarities among tasks based on the visual data leading to meaningful task groupings. In addition to our experiments on G1 and G2, we conducted a comprehensive evaluation of an unprecedented number of tasks, specifically \(100\) tasks from CIFAR100, in G3. As part of this evaluation, we compared our task clustering results against the predefined superclasses provided by CIFAR100. It is important to note that the superclasses in CIFAR100 primarily rely on semantic relationships as illustrated in Figure 4: Task grouping of G2 with \(F=2\) Figure 5: Task grouping of G1 with \(F=2\) Section 4.1. That is why we focus on coherent superclasses like people and flowers, as examples. In Figure 6, we showcase an example of the clustering results for a group of super tasks. It is noteworthy that our method successfully clusters certain groups of tasks in alignment with the predefined CIFAR100 superclasses, as illustrated in Figure 5(a). However, it is important to acknowledge that there are cases where the clustering may not be perfect, as depicted in Figure 5(b); we think this is primarily because our method focus one visual similarities, which is exploited during training rather than semantics. Nevertheless, even in such instances, our clustering algorithm manages to allocate significant weights of all tasks into distinctive clusters, such as clusters \(0\) and \(8\) in Figure 5(b). Notably, in cluster \(8\), the participation percentages of the tasks {orchid, poppy, rose, tulip} are the \(2^{nd}\) highest across all clusters, indicating a close relationship with the missclassified task sunflower, yet our method suggests that the other four tasks are more visually related. We further discuss all the clustering result details of the 100 Tasks in Appendix A. ### Evaluation Analysis To further validate the effectiveness of our method, we conducted a comprehensive evaluation as described in Section 3.4 on all task groups. Figure 7 presents the average F1 score for both the training and test sets of all the three sets of tasks. Our method is denoted by STG-MTL xxC (F=2) where xx represents the number of clusters. The MTL curve represents the results obtained from training an MTL model on all tasks without any grouping, while the STL curve represents the results obtained by training separate STL models for each task and merging their outputs. We compare the performance of our method against the MTL and STL approaches in both G1 & G2 and against the MTL approach only in G3 because the STL performance is poorer than the MTL, as it overfits. Overall, our method consistently outperforms both the MTL and STL approaches, indicating that the task grouping provides valuable information for improving task performance. Notably, although our method tends to overfit and achieves excellent training performance, it also achieves the best performance on the test set. This suggests that if the models were further fine-tuned, even greater gains could be achieved, yet we refrain from tuning any of the models in this study to guarantee fairness in comparison. ## 5 Conclusion and Future Work In conclusion, we have presented STG-MTL, which is a novel scalable approach for task grouping in multi-task learning (MTL) settings. Our method utilizes data maps (Swayamdipta et al., 2020) to identify task similarities and group them accordingly. We showed is superior scalability theoretically in comparison to TAG (Fifty et al., 2021), HOA (Standley et al., 2020), and MTG-Net (Song et al., 2022). We have also demonstrated the effectiveness of our method through our experiments on CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009), where we pushed the boundaries by experimenting with **100 tasks**, which has never been done before in the literature proving its scalability. We have also compared our clustering results Figure 6: Task grouping of G3 (100 Tasks of CIFAR100) into \(20\) clusters with \(F=2\) Figure 7: Average F1 Scores of all the Groups on both the training and test sets against the predefined superclasses in CIFAR100, further validating the effectiveness of our approach. Furthermore, our method outperformed traditional MTL and single-task learning (STL) approaches, showcasing the quality of task grouping and its ability to improve multi-task learning performance. For future work, we plan to expand the scope of our experiments by including a wider range of datasets and task types, enabling a more comprehensive evaluation of our approach's effectiveness and applicability. Furthermore, as our Data Maps are currently limited to classification tasks, we aim to explore their generalization to other task types, such as regression. Additionally, we hope our research could open a new research direction in the MTL community to explore the development of new features that can capture the training dynamics efficiently, other than data maps. By advancing this research direction, we can unlock new possibilities for enhancing performance and driving further advancements in the field of MTL. **Acknowledgments** We gratefully thank the Fatima Fellowship1 for supporting this research especially during its early stages. Footnote 1: [https://www.fatimafellowship.com/](https://www.fatimafellowship.com/)
2301.01835
Deep Statistical Solver for Distribution System State Estimation
Implementing accurate Distribution System State Estimation (DSSE) faces several challenges, among which the lack of observability and the high density of the distribution system. While data-driven alternatives based on Machine Learning models could be a choice, they suffer in DSSE because of the lack of labeled data. In fact, measurements in the distribution system are often noisy, corrupted, and unavailable. To address these issues, we propose the Deep Statistical Solver for Distribution System State Estimation (DSS$^2$), a deep learning model based on graph neural networks (GNNs) that accounts for the network structure of the distribution system and for the physical governing power flow equations. DSS$^2$ leverages hypergraphs to represent the heterogeneous components of the distribution systems and updates their latent representations via a node-centric message-passing scheme. A weakly supervised learning approach is put forth to train the DSS$^2$ in a learning-to-optimize fashion w.r.t. the Weighted Least Squares loss with noisy measurements and pseudomeasurements. By enforcing the GNN output into the power flow equations and the latter into the loss function, we force the DSS$^2$ to respect the physics of the distribution system. This strategy enables learning from noisy measurements, acting as an implicit denoiser, and alleviating the need for ideal labeled data. Extensive experiments with case studies on the IEEE 14-bus, 70-bus, and 179-bus networks showed the DSS$^2$ outperforms by a margin the conventional Weighted Least Squares algorithm in accuracy, convergence, and computational time, while being more robust to noisy, erroneous, and missing measurements. The DSS$^2$ achieves a competing, yet lower, performance compared with the supervised models that rely on the unrealistic assumption of having all the true labels.
Benjamin Habib, Elvin Isufi, Ward van Breda, Arjen Jongepier, Jochen L. Cremer
2023-01-04T22:04:28Z
http://arxiv.org/abs/2301.01835v3
# Deep Statistical Solver for Distribution System State Estimation ###### Abstract Implementing accurate Distribution System State Estimation (DSSE) faces several challenges, among which the lack of observability and the high density of the distribution system. While data-driven alternatives based on Machine Learning models could be a choice, they suffer in DSSE because of the lack of labeled data. In fact, measurements in the distribution system are often noisy, corrupted, and unavailable. To address these issues, we propose the Deep Statistical Solver for Distribution System State Estimation (DSS2), a deep learning model based on graph neural networks (GNNs) that accounts for the network structure of the distribution system and for the physical governing power flow equations of the problem. Footnote 2: footnotemark: DSS2 leverages hypergraphs to represent the heterogeneous components of the distribution systems and updates their latent representations via a node-centric message-passing scheme. A weakly supervised learning approach is put forth to train the DSS2 in a learning-to-optimize fashion w.r.t. the Weighted Least Squares loss with noisy measurements and pseudomesuerments. By enforcing the GNN output into the power flow equations and the latter into the loss function, we force the DSS2 to respect the physics of the distribution system. This strategy enables learning from noisy measurements, acting as an implicit denoiser, and alleviating the need for ideal labeled data. Footnote 2: footnotemark: Extensive experiments with case studies on the IEEE 14-bus, 70-bus, and 179-bus networks showed the DSS2 outperforms by a margin the conventional Weighted Least Squares algorithm in accuracy, convergence, and computational time, while being more robust to noisy, erroneous, and missing measurements. The DSS2 achieves a competing, yet lower, performance compared with the supervised models that rely on the unrealistic assumption of having all the true labels. We believe these findings are of high scientific and practical importance as they show that the physicalities of the distribution systems, both from a network representation and from the governing power flow equations are crucial in data-driven solutions. State Estimation, Distribution System, Deep Learning, Graph Neural Network, Physic-Informed Neural Network, weakly supervised learning ## I Introduction Distribution systems are taking a more active role in the energy transition. These active distribution systems require more extensive monitoring and control, which is possible by developing Distribution System State Estimation (DSSE) [1]. Currently, state estimation (SE) is mostly only possible in the transmission systems, and several challenges exist to extending SE to distribution systems successfully. First, conventional SE algorithms for transmission systems are challenging to adopt to distribution systems as the assumptions differ. Additionally, the distribution grid lacks real-time measurements. Conventional algorithms assume full observability of the grid with redundant measurements [2]. Pseudomesuerments forecasted values based on historical data can compensate for the lack of measurements, but they are often inaccurate and can impact the SE accuracy [3]. Also, the Weighted Least Squares (WLS) method used for SE is time-consuming and sensitive to data noise for large distribution systems [4]. Multi-area SE has been widely investigated to speed up the estimation process [5, 6], and has been extended to distribution systems [7]. However, the convergence and sensitivity issues remain, and division into multiple areas brings in communication and time-synchronization challenges. Different algorithms have been proposed to improve the robustness and convergence of SE, notably the branch-current WLS, the Least Absolute Value, and the Generalized Maximum Likelihood [4, 8]. Branch-current algorithms are more robust to parameter selection and uncertainty and are more suited for the weakly meshed and radial topologies in the distribution system [9]. Although, these algorithms suffer from the lack of qualitative measurements in wide distribution systems and the uncertainty of distributed loads and generators. Kalman Filters aim to improve speed and estimation performance under low observability. They are linked to the Forecast-Aided SE concept, where model-based approaches use the previous states as extra information to enhance accuracy and speed [4, 10, 11, 12, 13]. However, Kalman Filter SE is limited by the assumptions of system linearization and the Gaussian distribution of the measurements, which reduce its accuracy and robustness. Indeed, power systems are highly nonlinear, and measurements can show a non-Gaussian distribution [14]. Data-driven techniques showed promising results in performing fast DSSE without the above-mentioned assumptions. Deep learning models showed remarkable results to fit data for the SE task [15, 16, 17, 18]. This _supervised learning_ approach trains neural networks to fit _labels_, which are the grid's state variables. These labels are usually provided from simulations, as getting them from the grid is often impossible. As such, these approaches suffer from the scarcity of real labelled data and supervised learning is only possible using simulation data. Therefore, the models can only fit simulators exclusively and not real systems. Even though some approaches try to improve this technique by introducing some inductive bias [19] and physic-awareness [20], they all require extensive supervised learning using large labelled databases [4]. Combining model-based and data-driven approaches is a promising research direction to overcome the limitations of the model-based techniques with data-driven tools [11]. This approach is considered in [21] to combine the efficiency of Kalman Filters with the robustness of supervised Deep Learning architectures. It showed interesting results in low-dimensional problems; however, it suffers from high-dimensional problems due to the need for labelled data and unstable training. In the field of 'hybrid' approaches, [22] combines data with physics and develops a model-specific deep neural network (DNN) by unrolling a SE solver to enhance estimation performance and alleviate computation expenses. However, the model is trained with fully labeled data, and the physic-awareness of the approach is limited and does not include the structure of the system. Graph Neural Networks (GNNs) are a particular family of deep learning models that use the underlying network structure as an inductive bias [23, 24] to tackle the curse of dimensionality and reduce the data demand. GNNs have also shown robustness to perturbations in the network topology [25, 26, 27], which makes them appealing data-driven alternatives for the DSSE task. GNNs are investigated for power system applications, where the electrical lines correspond to the graph's edges and the buses correspond to the graph's nodes [28], and the data varies for the specific application. GNNs are also investigated for SE, trained in a supervised way requiring labels [29]. However, the heterogeneous components of power systems can not be modelled accurately through simple graphs, and GNNs for power system applications are intrinsically limited by the lack of expressivity in the graph model. Interestingly, however, [29] demonstrated that GNNs are more robust to noise than other neural network models. In this paper, we propose the _Deep Statistical Solver for Distribution System State Estimation (DSS\({}^{2}\))_, a GNN model based on the Deep Statistical Solver architecture [30] specialized for optimization tasks on power systems. The model is trained in _weakly supervision_ manner to tackle the issues of data scarcity and inaccurate labeling [31]. The success of such weak supervision is conveyed by considering the physical laws of the power flow equations in the training loss function, rendering labels obsolete. Specific contributions include: 1. DSS\({}^{2}\), the Deep Statistical Solver model for accurate data-driven DSSE using a weakly supervised approach. 2. adding physical constraints as penalization to the loss function, enhancing the model's performance. 3. the innovative use of _weakly supervision_ in the context of data-driven DSSE, which leverages the power flow equations to restrict the model's search for the mapping function, hence reducing the data demand and improving robustness to inaccurate measurements. We validate the model using various case studies on the IEEE 14-bus, 70-bus, and 179-bus systems and compare it to the WLS algorithm baseline and other Deep Learning architectures. The proposed DSS\({}^{2}\) is up to \(15\) times faster, \(4\) times more accurate, and more robust than the standard WLS algorithm. Our model also outperforms supervised learning approaches, being \(10\) times more accurate in line-loading estimation while alleviating the need for labelled data. Interestingly, our approach is better for larger networks as the GNN learns in the neighborhood of buses, and the larger the power network, the more data to learn from. This paper presents the methodology in Sections II and III, introducing the Deep Statistical Solver model and extending its usage to DSS, respectively. Sec. IV are the case studies and compares the performances to the baseline algorithm and other data-driven models. Sec. V concludes this work. ## II Proposed approach for State Estimation ### _Conventional problem formulation_ The state estimation problem aims at finding the state vector \(\mathbf{x}\) based on a noisy measurement vector \(\mathbf{z}\). Conventionally, we consider the voltage amplitude and angle at every grid bus as state variables, and \(\mathbf{z}\) can include any measurement type: \[\mathbf{x} =[V_{0},V_{1},\cdots,V_{n-1},\varphi_{0}=0.,\varphi_{1},\cdots, \varphi_{n-1}]\in\mathbb{R}^{2n\times 1} \tag{1a}\] \[\mathbf{z} =[z_{0},z_{1},\cdots,z_{m-1}]\in\mathbb{R}^{m\times 1}, \tag{1b}\] where we consider \(n\) buses and \(m\) measurements. \(V_{i}\) is the voltage amplitude at bus \(i\), and \(\varphi_{i}\) the voltage phase angle. We have \(2n-1\) state variables, as \(\varphi_{0}\) is set to \(0\) by the slack bus convention. Linking the measurement vector \(\mathbf{z}\) to the state vector \(\mathbf{x}\), we define a measurement function \(\mathbf{h}(\mathbf{x})\): \[\mathbf{z}=\mathbf{h}(\mathbf{x})+\mathbf{\epsilon}, \tag{2}\] where \(\mathbf{\epsilon}\in\mathbb{R}^{m\times 1}\) is the measurement noise vector, and \(\mathbf{h}(\mathbf{x})\) are the power flow equations shown in Eq. (3) [32]. \[\mathbf{h}(\mathbf{x})=\begin{cases}\begin{aligned} V_{i}&=V_{i} \\ \varphi_{i}&=\varphi_{i}\\ P_{ij_{-}}&=-V_{i}V_{j}\left[\mathbb{R}(Y_{ij})\cos(\Delta\varphi_{ ij})+\mathbb{I}(Y_{ij})\sin(\Delta\varphi_{ij})\right]\\ &\quad+V_{i}^{2}\left[\mathbb{R}(Y_{ij})+\frac{\mathbb{R}(Y_{ij})}{2 }\right]\\ P_{ij_{+-}}&=V_{i}V_{j}\left[-\mathbb{R}(Y_{ij})\cos(\Delta \varphi_{ij})+\mathbb{I}(Y_{ij})\sin(\Delta\varphi_{ij})\right]\\ &\quad+V_{j}^{2}\left[\mathbb{R}(Y_{ij})+\frac{\mathbb{R}(Y_{ij})}{ 2}\right]\\ Q_{ij_{-}}&=V_{i}V_{j}\left[-\mathbb{R}(Y_{ij})\sin(\Delta \varphi_{ij})+\mathbb{I}(Y_{ij})\cos(\Delta\varphi_{ij})\right]\\ &\quad-V_{i}^{2}\left[\mathbb{I}(Y_{ij})+\frac{\mathbb{I}(Y_{ij})}{ 2}\right]\\ &\quad-V_{j}^{2}\left[\mathbb{I}(Y_{ij})+\frac{\mathbb{I}(Y_{ij})}{ 2}\right]\\ I_{ij_{+-}}&=\frac{|P_{ij_{-}}-jQ_{ij_{+-}}|}{\sqrt{3}V_{i}e^{-2 \varphi_{i}}|}=\frac{|P_{ij_{+-}}-jQ_{ij_{-}}|}{\sqrt{3}V_{i}}\\ I_{ij_{+-}}&=\frac{|P_{ij_{-}}-jQ_{ij_{+-}}|}{\sqrt{3}V_{j}e^{-2 \varphi_{j}}|}=\frac{|P_{ij_{+-}}-jQ_{ij_{-}}|}{\sqrt{3}V_{j}}\\ P_{i}&=-\sum\limits_{j\in\mathcal{N}(i)}P_{ij_{+-}}+P_{ij_{-}}\\ Q_{i}&=-\sum\limits_{j\in\mathcal{N}(i)}Q_{ij_{+-}}+Q_{ij_{-}}\\ \end{aligned}\end{cases} \tag{3}\] In this measurement function, Eq. (3), \(\Delta\varphi_{ij}=\varphi_{i}-\varphi_{j}+\phi_{ij}\) is the voltage angle difference across the line that connects bus \(i\) to bus \(j\), \(\phi_{ij}\) is the shift angle of the transformer if any, and \(Y_{ij}\) and \(Y_{s_{ij}}\) are respectively the line and shunt admittance of the line between bus \(i\) and bus \(j\). Measuring flows at bus \(i\), we have \(P_{ij_{-}}\) and \(Q_{ij_{-}}\) as the active and reactive power flow from bus \(i\) to bus \(j\), and \(P_{ij_{-}}\) and \(Q_{ij_{+-}}\) as the power flow from bus \(j\) to bus \(i\). Current flow \(I_{ij}\) follows the same convention. Finally, we derive the active and reactive power injections at bus \(i\), \(P_{i}\), and \(Q_{i}\) from the power flows. All these outputs are possible elements of \(\mathbf{z}\), depending on the measurement infrastructure. The measurement function \(\mathbf{h}(\mathbf{x})\) is nonlinear, and Eq. (2) includes the probabilistic noise vector \(\mathbf{\varepsilon}\). In SE, we are interested in finding the inverse relation \(\mathbf{h}^{-1}(\mathbf{z})\) to estimate the state vector \(\mathbf{x}\) while compensating the error \(\mathbf{\varepsilon}\). The conventional SE approach, shown in Fig. (a)a, uses the iterative Newton-Raphson algorithm to minimize a WLS objective function [32]. This technique uses the redundancy of measurements to provide an accurate estimation. However, the approach requires at least the same number of measurements as state variables, meaning \(m\geq 2n-1\), and the system needs to be fully observable. Moreover, matching this requirement but failing to provide enough redundancy highly impacts the estimation accuracy. In practice, \(m\approx 4n\) achieves satisfying results, which is impractical for the distribution system [33]. The iterative process may even diverge in case of poor observability or high noise level in the measurements [4]. Another approach to approximate \(\mathbf{h}^{-1}(\mathbf{z})\) is to train an Artificial Neural Network (ANN) to map this function. ANNs approximate functions using a series of nonlinear operations parameterized by their _trainable weights_\(\mathbf{\theta}\)[34]. These weights are estimated during the training phase to approximate the given relation. For the SE task, the model is trained to approximate the inverse relation \(\mathbf{h}^{-1}(\mathbf{z})\), considering the measurement vector \(\mathbf{z}\) as the input of the ANN and the state vector \(\mathbf{x}\) as its output. We use the state estimation's convention of \(\mathbf{x}\) as output and \(\mathbf{z}\) as input of the model.: \[f_{\mathbf{\theta}}\left(\mathbf{z}\right):=\mathbf{h}^{-1}(\mathbf{z})\rightarrow\mathbf{x} \tag{4}\] In a common supervised learning approach, the approach assigns a label vector \(\mathbf{y}\) as the _true value_ of \(\mathbf{x}\) for each measurement sample (one measurement vector \(\mathbf{z}\)). In the training process, shown in Fig. (b)b, the model fits the data using available labels as reference. This approach, although quite efficient, is impractical for DSSE due to the lack of labelled data. Instead, our contribution combines Deep Learning and WLS optimization to propose a weakly supervised learning approach, alleviating the need for labels. ### _Weakly supervised learning_ To develop a weakly supervised learning approach for the DSSE task, we select the conventional WLS loss as a target optimization problem to learn, as shown in Fig. (c)c. The WLS approach is widely used in the industry and has good accuracy when given enough redundancy in the measurements. We use it as a target problem for the H2MGNN model described in Sec. III-B, aiming to reach a similar performance while improving its numerical stability, computation time, robustness, and observability requirements. The training loss function to minimize becomes then: \[L(\mathbf{z},\mathbf{x})=\sum_{k\in\mathcal{M}}\frac{\left|z_{k}-h_{k}(\mathbf{x})\right| ^{2}}{\sigma_{k}^{2}} \tag{5}\] with \(\sigma_{k}\) the standard deviation of the measurement \(k\)'s uncertainty assumed as Gaussian distribution, and \(\mathcal{M}\) the measurements set. While SE aims at considering the actual noise \(\varphi\) of the measurement vector, a 'first guess' of this value is assumed. \(\sigma_{k}\). \(|\mathcal{M}|=m\) is the number of measurements where \(|\cdot|\) is the cardinality of a set. We assume uncorrelated measurements. This loss function includes the power flow equations through the measurement function \(\mathbf{h}(\mathbf{x})\) defined in (3). With such a loss function, we implement a weakly supervised learning approach where we use the input measurements \(\mathbf{z}\) as noisy, imperfect labels that the H2MGNN needs to fit through the power flow equations. Function (3) is differentiable w.r.t the output state variables, and the gradient can be expressed using the measurement Jacobian matrix \(\mathbf{H}(\mathbf{x})=\nabla\mathbf{h}(\mathbf{x})\). ### _Physical penalization terms in the loss function_ We propose aiding the WLS learning loss (5) with different penalization terms to reduce the number of local minima and 'guide' the outputs towards physically-feasible solutions. Considering stable networks, we add three terms to the loss: * Voltage level stability criteria: power systems ensure a voltage level between \(V^{LB}=95\%\) and \(V^{UB}=105\%\) per unit to remain stable. Therefore, a two-sided penalization \([V-V^{UB}]_{+}+[V^{LB}-V]_{+}\) is added to the loss function to enforce this criterion.1 Footnote 1: \([x]_{+}=max(0,x)\) * Phase angle stability criteria: large variations in phase angles are improbable in stable systems. For example, a phase angle difference of more than \(\Delta\varphi^{UB}=0.25\) rads between two neighbouring buses would characterize an unstable network. Therefore, we add a second two-sided penalization \([\Delta\varphi-\Delta\varphi^{UB}]_{+}+[-\Delta\varphi^{UB}-\Delta\varphi]_{+}\) to the loss function to constrain this phase angle difference to \(\Delta\varphi^{UB}=0.25\). Fig. 1: State estimation with (a) WLS with Newton-Raphson solver uses an initial guess \(\mathbf{x}_{0}\) of the state vector \(\mathbf{x}\) that iteratively updates \(\mathbf{x}^{6}\) until the tolerance \(\Delta\mathbf{x}>\epsilon\) or a maximum of iterations is reached, (b) supervised learning uses a label vector \(\mathbf{y}\) to train an ANN to fit the output \(\mathbf{x}\) to the input \(\mathbf{z}\) and (c) the proposed weakly supervised approach considers the power flow equations \(\mathbf{h}(\mathbf{x})\) (Eq. 3) to get the _estimated_ measurements and fits the output \(\mathbf{x}\) of the GNN model to the input \(\mathbf{z}\) without labels. The target optimization is similar to WLS. * Line loading stability criteria: power systems regulators ensure the network's security by applying safety margins to line loading. To keep the model output within a physical range, we apply a third penalization \([l-l^{UB}]_{+}\) on the line loading when the prediction gives a loading higher than \(l^{UB}=100\%\). Adding these terms to the loss function, the equation used in the training process becomes: \[\begin{split} L(\mathbf{z},\mathbf{x})&=\sum_{k\in\mathcal{ M}}\frac{|z_{k}-h_{k}(\mathbf{x})|^{2}}{\sigma_{k}^{2}}+\lambda_{0}[\lambda_{1}[V-V^{ UB}]_{+}\\ &+\lambda_{1}[V^{LB}-V]_{+}+\lambda_{2}[\Delta\varphi-\Delta \varphi^{UB}]_{+}\\ &+\lambda_{2}[-\Delta\varphi^{UB}-\Delta\varphi]_{+}+\lambda_{3}[l -l^{UB}]_{+}],\end{split} \tag{6}\] where \(\lambda_{0},\lambda_{1},\lambda_{2},\lambda_{3}\) are hyperparameters set to balance the effect of each mathematical term during training. These terms penalize the model output towards physically plausible boundaries and avoid diverging toward local minima that are well beyond the physical margins of the system. ## III The Deep Statistical Solver model This section proposes the H2MG structure, the modelling of the heterogeneous components of the distribution grid, the Hyper-Heterogeneous Multi Graph Neural Network (H2MGNN) and how to learn the H2MGNN in weak supervision for DSS by applying Sec. II. ### _Hyper-Heterogeneous Multi Graph (H2MG)_ The H2MG uses hypergraphs to model power grids. Power grids are complex networks where different heterogeneous components are connected as shown in Fig. 1(a). Modelling power networks solely with vertices and edges, as done with standard graph models, Fig. 1(b), leads to information losses when merging grid components together into graph objects. More versatile modelling of such networks is possible using hypergraphs, Fig. 1(c), where each component can be modelled as a specific hyperedge which can mitigate the loss of information. The H2MG formalism is defined by: * Objects as hyperedges: every object in the network is modelled as a hyperedge that can connect to any number of vertices. This is shown in Fig. 2 where each component is modelled separately as a hyperedge: we represent lines and transformers as hyperedges connected to two vertices, whereas buses are modelled as hyperedges connected to one vertex. * Vertices as ports: vertices represent the interfaces between objects. In a hypergraph, vertices are _connection points_ between the components (the hyperedges). These connection points between components in a power system are the network's buses. Therefore, we model buses as both hyperedges as network components and vertices as network interfaces. * Hyper-Heterogeneous Multi Graph: the collection of hyperedges connected through vertices forms a hypergraph, and we call this hypergraph _heterogeneous_ if it contains multiple classes of objects. Hyperedges carry features and outputs, while vertices, as connection ports, do not carry input-output information. ### _H2MG Neural Network (H2MGNN)_ The H2MGNN is a GNN architecture that works with H2MG models. It uses a recursive process to learn information from the hypergraph and related features. It is a recurrent and residual GNN architecture, with trainable mappings implemented as standard ANNs and trained through standard backpropagation. As presented in Algorithm 1, we consider four types of variables: * Vertex latent variables, considering a vertex set \(\mathcal{V}\) corresponding to the _interface_ role of buses: \(\mathbf{h}_{i}^{v},\ \forall\,i\in\mathcal{V}\); * hyperedge latent variables, considering \(c\in\mathcal{C}\) as the objects' class, and \(e\in\mathcal{E}^{c}\) as the objects' hyperedge: \(\mathbf{h}_{e}^{c}\); * hyperedge inputs \(\mathbf{z}_{e}^{c}\); * hyperedge outputs \(\hat{\mathbf{x}}_{e}^{c}\). In our setting, the hyperedge index \(e\) refers to the object's connections: considering a vertex \(i\) and its neighbouring vertex \(j\), \(e=i\) for a bus, and \(e=ij\) for a line. In the initialization of Algorithm 1, the hyperparameter \(d\) sets the dimension of the latent variables. We initialize these latent variables with a flat start (zero values) and set predicted output variables to the initial values \(\hat{\mathbf{x}}_{e,0}^{c}\) dependent on the task. For DSSE, a common initialization is \(V_{i}=1\) p.u. and \(\varphi_{i}=0\)rad. Then, the H2MGNN algorithm recursively updates these variables in the system with trainable mappings \(\phi_{\theta}\). An iteration variable \(t\) is defined to weigh each iteration in the update process and \(T\) is the maximum number of iterations. At each iteration, latent variables are updated by an increment defined through the message-passing step similar to conventional GNN algorithms: \[\Delta\mathbf{h}_{i}^{v} =\sum_{(c,e)\in\mathcal{N}(\mathcal{V})}\phi_{\theta}^{c,\theta} \left(\frac{t}{T},\mathbf{h}_{i}^{v},\mathbf{h}_{e}^{c},\hat{\mathbf{x}}_{e}^{c},\mathbf{z}_{ e}^{c}\right),\quad\forall i\in\mathcal{V} \tag{7a}\] \[\Delta\mathbf{h}_{e}^{c} =\phi_{\theta}^{c,h}\left(\frac{t}{T},\mathbf{h}_{i}^{v},\mathbf{h}_{e}^ {c},\hat{\mathbf{x}}_{e}^{c},\mathbf{z}_{e}^{c}\right)\] (7b) \[\Delta\hat{\mathbf{x}}_{e}^{c} =\phi_{\theta}^{c,\psi}\left(\frac{t}{T},\mathbf{h}_{i}^{v},\mathbf{h}_{e} ^{c},\hat{\mathbf{x}}_{e}^{c},\mathbf{z}_{e}^{c}\right) \tag{7c}\] Fig. 2: Modelling the network (a) with two generators, three loads, two lines, and two transformers as (b) a standard graph and (c) an H2MG. The standard graph (b) has vertices (\(\bullet\)) and edges (\(\blacktriangle\)) with their features represented as boxes. The H2MG (c) models the components, bus (\(\dashdot\)), line (\(\blacktriangle\)), and transformer (\(\dashdot\)) as hyperedges connected to any number of connection ports with their features. with \(\mathcal{N}(i)\) the set of hyperedges connected to vertex \(i\), and \(o\) the connection port of a hyperedge (if connected to multiple ports). The final output of the model is stored in the hyperedge outputs \(\hat{\mathbf{x}}_{e}^{c}\). ``` 1:procedure\(\mathbf{f}_{\theta}(\mathbf{z}_{c}^{c})\) 2:Initialization 3:for\(i\in\mathcal{V}\)do 4:\(\mathbf{h}_{i}^{v}\leftarrow\mathbf{0}^{d}\) 5:for\(c\in\mathcal{C}\)do 6:for\(e\in\mathcal{E}^{c}\)do 7:\(\hat{\mathbf{x}}_{e}^{c}\leftarrow\hat{\mathbf{x}}_{e,0}^{c}\) 8: 9:\(\triangleright\) Latent interaction 10:\(t\gets 0\) 11:while\(t<T\)do 12:for\(i\in\mathcal{V}\)do 13:\(\mathbf{h}_{i}^{v}\leftarrow\mathbf{h}_{i}^{v}+\frac{1}{T}\times\sum\phi_{\theta}^{c, o}\left(\frac{t}{T},\mathbf{h}_{i}^{v},\mathbf{h}_{e}^{c},\hat{\mathbf{x}}_{e}^{c},z_{c}^{c}\right)\) 14:for\(c\in\mathcal{C}\)do 15:for\(e\in\mathcal{E}^{c}\)do 16:\(\mathbf{h}_{e}^{c}\leftarrow\mathbf{h}_{e}^{c}+\frac{1}{T_{1}}\times\phi_{\theta}^{c, h}\left(\frac{t}{T},\mathbf{h}_{i}^{v},\mathbf{h}_{e}^{c},\hat{\mathbf{x}}_{e}^{c},z_{e}^{c}\right)\) 17:\(\hat{\mathbf{x}}_{e}^{c}\leftarrow\hat{\mathbf{x}}_{e}^{c}+\frac{1}{T}\times\phi_{ \theta}^{c,y}\left(\frac{t}{T},\mathbf{h}_{i}^{v},\mathbf{h}_{e}^{c},\hat{\mathbf{x}}_{e}^ {c},z_{e}^{c}\right)\) 18:\(t\gets t+1\) 19:return\(\hat{\mathbf{x}}_{e}^{c}\) ``` **Algorithm 1** H2MGNN algorithm ### _Proposed DSS\({}^{2}\) implementation_ As presented in Fig. 1c, we use the H2MGNN model to estimate the state variables \(\mathbf{x}\) from the measurements \(\mathbf{z}\) through Alg. 1 and train it through the weakly supervised approach with the WLS as target optimization. For the DSSE described in Sec. II, the DSS\({}^{2}\) model approximates the inverse of the measurement function Eq. (3) as Eq. (4). The input features follow the WLS algorithm where, for each measurement, we consider the two, the measured value and its uncertainty. We also add all other parameters as features needed to compute the measurement function Eq. (3) as _topology parameters_. The features and parameters assigned to each class of components are listed in Table I. The model's output is every bus's voltage amplitude and angle, as typical in SE. Finally, we add booleans to detail components: \(\mathbb{1}_{z}\) defines buses with zero-injection (no consumption or generation), \(\mathbb{1}_{s}\) defines slack buses, and \(\mathbb{1}_{cl}\) defines closed lines. These booleans simplify the model and provide more information about the network to the DSS\({}^{2}\) model. In other words, this simplification considers 'virtual measurements' to enforce zero power flow at buses without injection (\(\mathbb{1}_{z}\)), no power flow at the connected buses to an open line (\(1-\mathbb{1}_{cl}\)) and \(V_{s}=1\) p.u. and \(\phi_{s}=0\) rad at the slack bus \(s\) where \(s\) is the index of the vector \(\mathbb{1}_{s}\) that equals \(1\). ## IV Case studies Case studies have been undertaken to provide insights into the proposed approach and evidence of its efficacy. After stating the case studies setup and showing the efficiency of the proposed weakly-supervised learning approach, we analyse the performance of the DSS\({}^{2}\) exploring the trade-off of providing labels and accuracy, subsequently, investigating the accuracy, convergence and computational speed for larger networks. Finally, we investigate the performance of the proposed approaches for different measurement noise, when the measurements are disturbed, and when we have higher and lower load levels and renewable powers. ### _Test systems and setup_ We considered the 14-bus CIGRE MV distribution grid with PV and Wind distributed energy resources (DER) activated [35], the 70-bus Oberrhein MV/LV sub-grid, and the whole 179-bus Oberrhein grid from [33]. The networks are presented in Figure 3. The measurement locations for each network are shown in Fig. 3. These measurements \(\mathcal{M}\) either measure the power flow over lines or the voltages at buses and were assumed with different Gaussian noise, as further discussed. For each network, \(8640\) load samples were collected, equivalent to one year of hourly data. Each load scenario considers load levels of \(24\) consecutive samples discretized hourly for all loads in the network. These load scenarios resulted from a Monte Carlo sampling on standard load profiles taken from [36], considering a \(15\)% uncertainty. For each sample, in each scenario, the AC power flow computed the full _true_ state using PandaPower 2.9 [33] and Python 3.8. Subsequently, one sample's full _true_ state considered all loads and generators' active and reactive power levels, the bus voltage levels, phases, and line loadings. System operators do not have access to this full _true_ state; however, some key variables are provided by the measurements specified earlier. These observed variables were assumed corrupted with zero-mean Gaussian white noise at the measurement locations. Between \(0.5\)% and \(2\)% standard deviations were assumed for the voltage and current measurement noises, and between \(1\)% and \(5\)% for the active and reactive power measurement errors. Pseudomeasurements of power levels were considered at every (unobserved) bus using generic load and generation profiles taken from [36]. The dataset was split into train, validation, and test sets, following an \(80\)/\(10\)/\(10\) split. In supervised learning, the measurement vector \(\mathbf{z}\) at the measurement locations mentioned \begin{table} \begin{tabular}{c|c c} \hline \hline & Buses & Lines \\ \hline \multirow{3}{*}{\begin{tabular}{c} Topology \\ param. \\ \end{tabular} } & Bus port \(i\) & Line ports \((i,j)\) \\ & Zero-inj, bool. \(\mathbb{1}_{z}\) & Closed line bool. \(\mathbb{1}_{cl}\) \\ & Slack bool. \(\mathbb{1}_{s}\) & Phase-shift \(\phi_{ij}\) \\ \hline \multirow{4}{*}{\begin{tabular}{c} Input \\ features \\ \end{tabular} } & Voltage mag.: \(V_{i}\) - \(\sigma_{V_{i}}\) & Active PF: \(P_{ij}\) - \(\sigma_{P_{ij}}\) \\ & Voltage mag.: \(V_{i}\) - \(\sigma_{V_{i}}\) & Reactive PF: \(Q_{ij}\) - \(\sigma_{Q_{ij}}\) \\ & Voltage angle: \(\varphi_{i}\) - \(\sigma_{\varphi_{i}}\) & Current magn.: \(l_{ij}\) - \(\sigma_{Iij}\) \\ & Active power inj.: \(P_{1}\) - \(\sigma_{P_{i}}\) & Line admittance: \(Y_{ij}\) \\ & Reactive power inj.: \(Q_{i}\) - \(\sigma_{Q_{i}}\) & Shunt admittance: \(Y_{visj}\) \\ \hline \multirow{2}{*}{ \begin{tabular}{c} Output \\ features \\ \end{tabular} } & Voltage mag.: \(V_{i}\) & \\ & Voltage angle: \(\varphi_{i}\) & \\ \hline \hline \end{tabular} \end{table} TABLE I: Features and topology parameters of the H2MGNN (modelled as an H2MG). above represents the input to the model, and the full state represents the label \(\mathbf{y}\). Several baseline models were assumed as follows. The standard SE WLS algorithm [37], a standard ANN model trained with supervised learning, and the DSS\({}^{2}\) model but trained with supervised learning (referenced with sup. DSS\({}^{2}\)). The WLS algorithm from PandaPower 2.9 was used, and the deep learning models were implemented in Tensorflow 2.8 [38]. The ANN was designed with \(5\) layers of \(32\) hidden values, \(tanh\) activation functions and a Glorot normal initializer. ### _Efficiency of the weakly-supervised learning_ This section investigates the efficiency of the weakly supervised learning DSS\({}^{2}\) approach and hyperparameters that can impact the state estimation accuracy. The hyperparameters penalization factor \(\lambda=\lambda_{0}=\lambda_{1}=\lambda_{2}\), batch size, dropout rate \(r\), \(\ell_{2}\)-regularizer, and the number of iteration \(T\) were fixed. A grid search tuned the hyperparameters learning rate within the ranges \(\alpha\in\{0.001,0.002,\cdots,0.009,0.01\}\), layer dimension \(d\in\{8,16,24,32,40,48\}\), and layers number \(\in\{2,3,4,5\}\). The selected hyperparameter values are in Table II for each network. The efficiency of the learning approach is shown in Figures 3(a) and 3(b) when training on the \(14\)-bus network. The voltage and line loading estimation RMSE slowly decreased at each epoch, showing a learning curve through the power flow equations and using only the noisy measurements and pseudomeasurements. When training in weakly supervision, DSS\({}^{2}\) learned to minimize the different objectives using _noisy_ measurements as'reference values'. However, the computational time to train DSS\({}^{2}\) in weakly supervision was lower than to train in supervision. ### _Trade-off between accuracy and available labels_ This case study investigates the performance of the proposed weakly supervised DSS\({}^{2}\) model on the \(14\)-bus system compared to three baselines. The second column in Table III summarizes the results. The RMSE for voltages of the proposed weakly supervised DSS\({}^{2}\) was three times lower than the WLS, \(2.5\)% versus \(9.9\)%. In more detail, Figure 4(a) shows the voltage estimation RMSE per bus. The RMSE was lower than the \(0.5\%\) threshold for all buses, showing successful learning from voltage measurement data while handling measurements' noise. The difference in RMSE between the observed (buses \(1,8\), and \(12\)) and the unobserved buses are small, showing the capability of our DSS\({}^{2}\) model to extrapolate to all buses. The supervised models (ANN, sup. DSS\({}^{2}\)) estimated the voltage more accurately, as expected, as they learned from the ideal _true_ voltage data having an unfair, impractical advantage. The RMSE of line loading of the weakly supervised DSS\({}^{2}\) reaches performances equivalent to the WLS, outperforming the supervised models by a wide margin according to Table III. This observation offered insights. Supervised models poorly estimated _indirect_ values such as the line loading that were calculated using the power flow equations. The models only outputted the state variables and supervised models poorly considered the _coupling_ of the state variables in the estimations \begin{table} \begin{tabular}{c|c c c} \hline \hline Parameter & 14-bus & 70-bus & 179-bus \\ \hline Epochs & \(630\) & \(540\) & \(1020\) \\ \(\lambda\) & \(0.8\) & \(0.8\) & \(0.8\) \\ \(\alpha\) & \(0.006\) & \(0.006\) & \(0.006\) \\ batch size & \(64\) & \(64\) & \(64\) \\ \(d\) & \(40\) & \(40\) & \(40\) \\ layers & \(3\) & \(3\) & \(3\) \\ \(T\) & \(7\) & \(20\) & \(25\) \\ \(r\) & \(0.4\) & \(0.4\) & \(0.4\) \\ \(\ell_{2}\) & \(0.002\) & \(0.002\) & \(0.002\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Hyperparameter values of DSS\({}^{2}\) trained for three power networks. Fig. 4: Validation RMSEs of voltages (a) and line loading (b) during training. Fig. 3: Three test networks consisting out of trafos (=), lines (\(-\)), MV/LV buses ( * ) and HV buses ( * ). The state can be estimated using the set of power flow measurements (**) and voltage measurements ( * ). Relevant indices are indicated, and lines’ indices are underlined. Case studies on the 70-bus grid focus on buses indicated with \(\circ\). of line loading. However, the weakly supervised model learned directly through the power flow equations about the coupling with the effect of estimating line loading more accurately. In more detail, Figure (b)b shows the loading estimation error per line. The weakly supervised DSS\({}^{2}\) model had a very high accuracy on measured lines (lines \(0\) and \(10\)) and their extension (lines \(1\) and \(11\)). However, there was a clear drop in performance for the estimation of transformers' loading, shown at indexes \(12\) and \(13\). The simple modelling of transformers or slack may have led to this reduced accuracy as the transformers and lines were considered in the same class of models. As a result of this simple modelling, the H2MGNN considered the same mapping for these components, which may have reduced the accuracy of transformer estimations. ### _Convergence, accuracy and computation speed in large networks_ This case study investigates the performance of the proposed DSS\({}^{2}\) compared to the WLS in larger networks, the \(70\)-bus and \(179\)-bus networks, along three performance criteria: the convergence rate, the accuracy, and the computational time. The 2nd and 3rd columns in Table III summarize the results. When analysing the convergence rate, the DSS\({}^{2}\) always converges, and the WLS never converges in the \(179\)-bus network. The WLS was unstable in this large and noisy network, leading to these poor convergence rates. WLS' convergence issues with noisy measurements in large systems is already well-known [4, 10]. Many noisy measurements constrain the Newton-Raphson solver and can lead to divergence. More specifically, the WLS had issues in handling flow measurements. In response to these issues and to compare the accuracy and computational times of DSS\({}^{2}\) with WLS, only voltage measurements and pseudomeasurements were used in WLS to increase the convergence rate (WLS* in the table). This increased the convergence rate in the \(70\)-bus system but did not increase the convergence of the WLS in the \(179\)-bus. Therefore, in the \(179\)-bus system, the tolerance of the Newton-Raphson iterative process and the number of iterations were increased (WLS** in the table). Increasing these parameters increased the convergence rate at the cost of lower accuracy and slower processing. When analysing the accuracy, a key advantage of the DSS\({}^{2}\) becomes visible. DSS\({}^{2}\) outperformed the WLS in every metric in the two larger networks. The models based on GNN, such as DSS\({}^{2}\), learn from local operations (in the neighbourhood of buses) and extrapolate to other locations (to other neighbourhoods of buses). Therefore, the more buses and lines in the network, the more local operations to learn from that can further enhance the model's accuracy. Also, these networks have more static loads and less DER than the \(14\)-bus network, so the variation of voltage and line loading was smaller, and the estimated values from the DSS\({}^{2}\) become more accurate. Figures (a)a and (b)b compare the estimated voltage levels through a sampling period in the \(70\)-bus system for the measured bus \(34\) and the unmeasured remote bus \(223\), respectively. The accuracy of the DSS\({}^{2}\) model estimating the voltage in measured nodes through noisy measurements was high. However, the model lacked generalizability when estimating voltage in remote, unmeasured nodes. When analysing the computation times, in the last row of Table III, the DSS\({}^{2}\) increasingly outperformed WLS for larger networks. The computational time of the WLS and the DSS\({}^{2}\) increased from the \(70\)-bus network to the \(179\)-bus network by factors of \(10\) and \(2\), respectively. The DSS\({}^{2}\) scaled to a larger network \(5\)-fold better than the WLS algorithm. The WLS needed more iterations for this larger system until the Newton-Raphson converged, although the tolerance was Fig. 5: Comparing RMSE of estimating (a) voltage levels and (b) line loadings in the 14-bus network between WLS (\(\blacksquare\)), DSS\({}^{2}\) (\(\blacksquare\)), FFNN (\(\blacksquare\)) and sup. DSS\({}^{2}\) (\(\blacksquare\)). In (a), the dashed lines show acceptable state estimators based on the standard deviation. Below the dashed green is acceptable, and above red is unacceptable. In (b), indexes \(12\) and \(13\) are transformers. \begin{table} \begin{tabular}{c|c c c c|c c|c c} \hline & \multicolumn{4}{c|}{14-bus CIGRE} & \multicolumn{4}{c|}{70-bus Obernhein} & \multicolumn{2}{c}{179-bus Obernhein} \\ \hline Performance metric \(\backslash\) Model & WLS & ANN & sup. DSS\({}^{2}\) & DSS\({}^{2}\) & WLS & WLS* & DSS\({}^{2}\) & WLS* & DSS\({}^{2}\) \\ \hline Voltage RMSE \([10^{-3}]\) & \(9.9\,(0.45)\) & \(2.7\,(0.17)\) & \(\mathbf{2.5\,(0.11)}\) & \(3.4\,(0.18)\) & \(31\,(0.96)\) & \(5.9\,(0.18)\) & \(\mathbf{1.5\,(0.01)}\) & \(9.9\,(0.3)\) & \(\mathbf{2.3\,(0.01)}\) \\ Line loading RMSE \([\%]\) & \(\mathbf{3.4\,(0.05)}\) & \(42\,(0.83)\) & \(12.7\,(0.17)\) & \(3.8\,(0.05)\) & \(17\,(0.9)\) & \(15\,(0.8)\) & \(\mathbf{2.3\,(0.01)}\) & \(5.9\,(0.4)\) & \(\mathbf{3.4\,(0.01)}\) \\ Line \& rafos loading RMSE \([\%]\) & \(\mathbf{4.6\,(0.06)}\) & \(39\,(0.5)\) & \(14\,(0.15)\) & \(8\,(0.06)\) & \(39\,(2.7)\) & \(24\,(1.67)\) & \(\mathbf{2.5\,(0.01)}\) & \(4.2\,(0.2)\) & \(\mathbf{3.5\,(0.01)}\) \\ Convergence \([\%]\) & \(100\) & \(100\) & \(100\) & \(100\) & \(25\) & \(100\) & \(100\) & \(53\) & \(\mathbf{100}\) \\ Computational time \([\text{ms}]\) & \(86\,(41)\) & \(\mathbf{3.5\,(0.73)}\) & \(4.7\,(1.4)\) & \(5.5\,(4.5)\) & \(123\,(31)\) & \(161\,(40)\) & \(\mathbf{26\,(10.4)}\) & \(1212\,(424)\) & \(\mathbf{58\,(26.7)}\) \\ \hline \end{tabular} \end{table} TABLE III: Mean (standard deviation in parentheses) values of performance metrics in _default_ conditions. WLS* is the WLS algorithm without flow measurements, and WLS** with increased tolerance. The bold font shows the best model or algorithm for each metric. increased, which typically decreased the computational times. The DSS\({}^{2}\) also showed a lower variance in the computational times as it is not based on an iterative algorithm. ### _Measurement noise_ This case study compares the robustness to measurement noise of the DSS\({}^{2}\) to the WLS in the 70-bus network. The level of measurement noise refers to the standard deviation \(\sigma_{i}\) of the Gaussian noise added to the measurements. Three different levels of noise were considered. The _default_ level had 1% noise on ideal measurements of voltage and current, and 2% noise on the ideal measurements of active and reactive power; the _low_ level had \(0.5\)% and \(1\)% noise, and the _high_ level 3% and \(5\)%, respectively. At _high_ noise, Fig. 7 shows the RMSE of the DSS\({}^{2}\) was more than \(10\) times better than that of the WLS showing significantly higher robustness of DSS\({}^{2}\). DSS\({}^{2}\) had a similarly high accuracy at _low_ and _high_ noise as in the _default_ noise level. DSS\({}^{2}\) learned to process many noisy signals with different standard deviations within the high noise level ranges and GNN structures. The dropout step during training improved the capability of the DSS\({}^{2}\) model to handle stochasticity, including noise. Fig. 8 compares the voltage level estimation at _high_ measurement noise for the bus \(34\). The DSS\({}^{2}\) successfully cancelled the increased noise, whereas the WLS algorithm struggled to stay accurate. ### _Missing and erroneous measurements_ This case study investigates the impact of missing and erroneous measurements on the DSS\({}^{2}\) and the WLS algorithm at the \(70\)-bus network. Case (i) assumed a missing voltage measurement on bus \(39\) that was naively replaced with their historical mean value. Case (ii) assumed erroneous voltage measurements on buses \(39\), \(58\) and \(80\), and erroneous active power flow measurements in lines \(162\) and \(165\) with a higher deviation from the _true_ state values than the expected (standard) deviation. Case (iii) assumed missing voltage measurements on buses \(34\), \(39\) and \(80\) and erroneous voltage measurements on bus \(58\). Fig. 9 shows the results. The DSS\({}^{2}\) had high robustness to missing and erroneous measurements in all three cases, with a similar RMSE as the _default_ case (no missing or erroneous measurements). However, the erroneous measurement case (ii) impacted the WLS, showing an increase of around \(20\)% on relative voltage RMSE. Fig. 10 focuses on one bus with erroneous measurements, the bus \(34\) in case (iii). The measurement in bus \(34\) was missing for the whole sequence and was naively replaced by the empirical mean value (light blue). Fig. 8: Estimation of the voltage level at (a) bus \(34\) of the 70-bus network under _high_ noise level and across the sampling period, using WLS (\(\blacksquare\)), and DSS\({}^{2}\) (\(\blacksquare\)). True voltage (\(\blacksquare\)) and measurement (\(\blacksquare\)) as reference. Fig. 6: Estimation of the voltage level at (a) bus \(34\) and (b) bus \(223\) of the 70-bus network under normal conditions and across the sampling period, using WLS (\(\blacksquare\)), and DSS\({}^{2}\) (\(\blacksquare\)). True voltage (\(\blacksquare\)) and measurement (\(\blacksquare\)) as references. Fig. 7: Comparison of RMSE for the estimation of (a) voltage level and (b) line loading between the WLS (\(\blacksquare\)) and DSS\({}^{2}\) (\(\blacksquare\)) in the 70-bus network; considering different levels of measurement noise. Fig. 9: Comparison of RMSE for the estimation of (a) voltage level and (b) line loading between WLS (\(\blacksquare\)) and DSS\({}^{2}\) (\(\blacksquare\)) in the \(70\)-bus network considering (i) missing voltage measurement, (ii) erroneous voltage measurement and erroneous active power flow measurements, and (iii) missing voltage measurement and erroneous voltage measurements. A key insight of this analysis is that the DSS\({}^{2}\) was not impacted by this missed value and successfully provided an accurate estimation. Interestingly, the DSS\({}^{2}\) model was not trained to handle such events. However, using known patterns from neighbouring information, the DSS\({}^{2}\) remained accurate. Indeed, the GNN architecture increased the interpolation capabilities by incorporating the data symmetries w.r.t. the underlying graph. ### _Changes in power levels of load and renewables_ This case study investigates the generalization capability of the DSS\({}^{2}\) (and the WLS) to changes in levels of power in the loads and distributed generation compared to the training dataset. Three cases altered the power levels for the testing dataset on the \(70\)-bus network by: \begin{tabular}{l l} (\(-30\%,+30\%\)) & \(30\%\) decrease in generation, \(30\)\% increase in load, \\ (\(+25\%,+100\%\)) & \(25\)\% increase in generation, \(100\)\% increase in load to simulate a system near overload. \\ (\(-75\%,+60\%\)) & \(75\)\% decrease in generation, \(60\)\% increase in load to simulate more voltage deviations \\ \end{tabular} Note that the DSS\({}^{2}\) model was never trained on such cases; only _default_ power levels were used for training. Fig. 11 shows the results. In the case of a'small' load change (\(-30\%,+30\%\)), the DSS\({}^{2}\) showed good estimation performances with only a small increase in RMSE. However, in the cases (\(+25\%,+100\%\)) and (\(-75\%,+60\%\)) the RMSE significantly increased. The lines were highly loaded in the case (\(+25\%,+100\%\)). Hence, the loading estimation was highly impacted. In the case (\(-75\%,+60\%\)), the deviation in voltage was more harmful to the voltage estimation. These results explored the limitations of the changes in loading levels that the DSS\({}^{2}\) model could handle. Good results were perceived for changes in loads of around \(30\)% showing good generalization capability of the DSS\({}^{2}\) model to handle state estimation tasks under limited uncertain changes. However, the model became sensitive when the network was extremely loaded or under strong voltage deviations, and then, the model does not generalize well anymore to extreme conditions. ## V Discussion and conclusions This paper introduces the Deep Statistical Solver for Distribution System State Estimation. This Deep Learning architecture incorporates the power flow equations in the loss function for physics awareness. Our proposed DSS\({}^{2}\) approach uses the same objective function as the WLS, allowing to train of the model with a noisy and poorly labelled dataset. This approach is called weak supervision learning, and we combine it with a GNN architecture to enhance the learning from local patterns and the robustness of the model. A remarkable advantage of the DSS\({}^{2}\) is that the larger the power network, the better the performance. The DSS\({}^{2}\) is based on a GNN architecture that learns from local patterns (in the neighbourhood of buses). Hence, the larger the network, the more local patterns the GNN-based architecture can learn from. We consider this remarkable as conventional power system analysis, for example, for estimating the state, often scales poorly with network size, whereas DSS\({}^{2}\) showed the reverse effect. Another outstanding advantage is that through learning in the neighbourhood of buses, the DSS\({}^{2}\) model becomes robust and invariant to changes in individual values, such as missing, erroneous measurements. This is an important practical advantage over other conventional methods (and the studied supervised models) that depend on the accuracy of individual measurements. Our different case studies show that the DSS\({}^{2}\) is faster, more robust, and more scalable than WLS. Compared to supervised models, the weakly supervised DSS\({}^{2}\) shows equivalent speed and voltage accuracy while outperforming the supervised models in estimating indirect values such as line loading. We conclude that learning from the power flow equations and the neighbourhood are the strengths of DSS\({}^{2}\); these incorporate a coupling between voltage magnitudes and voltage angles to fit the measurements. Finally, the DSS\({}^{2}\) model does not require labels as the approach is weakly supervised learning from the power flow equations. This type of learning makes the DSS\({}^{2}\) model more practical than other ML-based approaches as labels are scarce. Our implementation of the DSS\({}^{2}\) has limitations. First of all, the penalization method used in training impacts the quality of estimation but does not ensure any guarantee of convergence during testing. Secondly, in our implementation of the H2MG architecture, the assumption to modelling transformers as lines may have particularly limited the accuracy of transformers' Fig. 11: Comparing RMSE for (a) voltage level and (b) line loading between the WLS (\(\blacksquare\)) and DSS\({}^{2}\) (\(\blacksquare\)) in the \(70\)-bus network considering from the _default_ three changes in generation and load: \(30\%\) decrease in generation and \(30\%\) increase in load, \(25\%\) increase in generation and \(100\%\) increase in load and \(75\%\) decrease in generation and \(60\%\) increase in load. Fig. 10: Estimation of voltage level at bus \(34\) of the \(70\)-bus network under missing measurement conditions and across the sampling period, using WLS (\(\blacksquare\)), and DSS\({}^{2}\) (\(\blacksquare\)). True voltage (\(\blacksquare\)) and measurement (\(\blacksquare\)) as reference. loading estimation. There, the model was 'forced' to learn a similar input-output mapping for lines and transformers that may reduce the expressivity of the model. Then, the DSS\({}^{2}\)'s estimation is impacted when the load power level in the network varies significantly. The generalization ability of the DSS\({}^{2}\) showed a limit of around \(30\)% load changes. The changes in measurements are encouraging but should be improved. Future work could investigate the types of measurements and meter placement decisions that would maximize the DSS\({}^{2}\) performances. Adding an algorithm that detects changes in the data could benefit quantifying the confidence of state estimations by DSS\({}^{2}\). Combining the DSS\({}^{2}\) for state estimation to a state-of-the-art anomaly detector could improve generalization. Finally, the network's model in the deep learning architecture could be improved. The proposed model is simple; however, the H2MGNN architecture allows for advanced modelling of components that can further increase the expressivity and performance of the DSS\({}^{2}\).
2307.14608
Smooth modules over the N=1 Bondi-Metzner-Sachs superalgebra
In this paper, we present a determinant formula for the contravariant form on Verma modules over the N=1 Bondi-Metzner-Sachs (BMS) superalgebra. This formula establishes a necessary and sufficient condition for the irreducibility of the Verma modules. We then introduce and characterize a class of simple smooth modules that generalize both Verma and Whittaker modules over the N=1 BMS superalgebra. We also utilize the Heisenberg-Clifford vertex superalgebra to construct a free field realization for the N=1 BMS superalgebra. This free field realization allows us to obtain a family of natural smooth modules over the N=1 BMS superalgebra, which includes Fock modules and certain Whittaker modules.
Dong Liu, Yufeng Pei, Limeng Xia, Kaiming Zhao
2023-07-27T03:47:58Z
http://arxiv.org/abs/2307.14608v1
# Smooth modules over the N=1 Bondi-Metzner-Sachs superalgebra ###### Abstract. In this paper, we present a determinant formula for the contravariant form on Verma modules over the N=1 Bondi-Metzner-Sachs (BMS) superalgebra. This formula establishes a necessary and sufficient condition for the irreducibility of the Verma modules. We then introduce and characterize a class of simple smooth modules that generalize both Verma and Whittaker modules over the N=1 BMS superalgebra. We also utilize the Heisenberg-Clifford vertex superalgebra to construct a free field realization for the N=1 BMS superalgebra. This free field realization allows us to obtain a family of natural smooth modules over the N=1 BMS superalgebra, which includes Fock modules and certain Whittaker modules. Key words and phrases:N=1 BMS superalgebra, Verma module, Whittaker module, smooth module, free field realization Mathematics Subject Classification: 17B65, 17B68, 17B69,17B70, 81R10. ## 1. Introduction The study of conformal field theory relies heavily on the Virasoro algebra, which is the central extension of the Witt algebra. The representation theory of this algebra is an active research area in mathematical and theoretical physics. In particular, simple Virasoro modules have been extensively explored in both mathematics and physics literature (see [20], [34], [35], [36] and references therein). Additionally, superconformal extensions of the Virasoro algebra known as super Virasoro algebras play an important role in superstring theory and superconformal field theory. Significant research has also been conducted on simple super Virasoro modules [18], [19], [29], [30], [37]. The Bondi-Metzner-Sachs (BMS) algebra was proposed to characterize the infinite-dimensional symmetries of asymptotically flat spacetimes at null infinity and was initially discovered in [8], [40], [41]. The BMS algebra can be also obtained by taking a particular contraction of two copies of the Virasoro algebra. Notably, the BMS algebra appeared in the Galilean conformal field [2], the classification of simple vertex operator algebras of moonshine type [44] and two-dimensional statistical systems [17]. Representations of the BMS algebra have been extensively discussed in references such as [1], [3], [6], [7], [21], [23], [22], [38], [39], [44]. The N=1 BMS superalgebra \(\mathfrak{g}\), a minimal supersymmetric extension of the BMS algebra, plays a key role in describing asymptotic supergravity in three-dimensional flat spacetime [5]. Simple \(\mathfrak{g}\)-modules with finite-dimensional weight spaces were classified in [13], and Whittaker \(\mathfrak{g}\)-modules were introduced and studied in [14]. A class of simple non-weight \(\mathfrak{g}\)-modules was constructed and classified in [11]. Recent progress includes a free field realization of \(\mathfrak{g}\) using the \(\beta\)-\(\gamma\) and \(b\)-\(c\) ghost systems [4]. The study of smooth modules is central to representation theory for \(\mathbb{Z}\) or \(\frac{1}{2}\mathbb{Z}\)-graded Lie superalgebras, due to connections with vertex operator superalgebras [27], [28]. However, completely classifying smooth modules remains an open challenge. Simple smooth modules have been classified for the Virasoro algebra and the N=1 Neveu-Schwarz algebra [35], [29]. Partial classification results also exist for other Lie (super)algebras, including the N=1 Ramond algebra [9], the twisted Heisenberg-Virasoro algebra [10], [15], [42], the mirror Heisenberg-Virasoro algebra [32], [42], the Ovsienko-Roger algebra [31], the planar Galilean conformal algebra [16], [12], the Fermion-Virasoro algebra [43], and the superconformal current algebra [33]. Motivated by these advances, we systematically study smooth modules over the N=1 BMS superalgebra, particularly including Verma, Whittaker, and Fock modules. First, using an anti-involution of \(\mathfrak{g}\), we define a contravariant symmetric bilinear form on Verma modules over \(\mathfrak{g}\). We compute the Gram matrix to obtain a determinant formula and determine necessary and sufficient conditions for simplicity of Verma modules (Theorem 3.2). Surprisingly, these computations are much simpler than the Kac determinant formula for the Virasoro algebra [25], despite \(\mathfrak{g}\) being more complicated. Clearly, the Verma modules and the Whittaker modules [14] are very special cases of smooth modules. It is natural to consider studying general smooth modules over the \(N=1\) BMS superalgebra \(\mathfrak{g}\). In contrast to Verma and Whittaker modules, the structure of general smooth \(\mathfrak{g}\)-modules is more complex. In this paper, we construct (Theorem 4.1) and classify simple smooth \(\mathfrak{g}\)-modules under certain conditions (Theorem 4.5). We establish a one-to-one correspondence between simple smooth \(\mathfrak{g}\)-modules and simple modules of a family of finite-dimensional solvable Lie superalgebras associated with \(\mathfrak{g}\). Unlike the smooth module classification for the Virasoro algebra [35] and the \(N=1\) Neveu-Schwarz algebra [30], our classification result requires additional injectivity conditions. Removing these conditions to obtain a complete smooth \(\mathfrak{g}\)-module classification remains an open problem. Finally, we show that there is an isomorphism between the categories of smooth \(\mathfrak{g}\)-modules and N=1 BMS vertex superalgebra modules. This allows employing powerful vertex algebra tools to study smooth \(\mathfrak{g}\)-modules. We demonstrate a nontrivial homomorphism from the N=1 BMS vertex algebra to the Heisenberg-Clifford vertex algebra, providing a free field realization of \(\mathfrak{g}\) (Theorem 5.7). Using this, we construct natural smooth \(\mathfrak{g}\)-modules including Fock modules and certain Whittaker modules. Our results provide new perspectives on representation theory of the N=1 BMS superalgebra with potential applications in asymptotic gravity symmetries. The structure of this paper is as follows: Section 2 reviews some notations and preliminaries. Section 3 presents a formula for the determinant of the contravariant form on Verma modules (Lemma 3.1), which can be used to determine the necessary and sufficient conditions for a Verma module to be simple (Theorem 3.2). Section 4 constructs simple smooth modules for \(\mathfrak{g}\) (Theorem 4.1)and classifies simple smooth modules under certain conditions (Theorem 4.5). Section 5 provides a free field realization of \(\mathfrak{g}\) (Theorem 5.7). Using this free field realization, some smooth modules over \(\mathfrak{g}\) are obtained from smooth modules over the Heisenberg-Clifford superalgebra. Throughout this paper, we will use the following notations: \(\mathbb{C}\), \(\mathbb{N}\), \(\mathbb{Z}_{+}\), and \(\mathbb{Z}\) refer to the sets of complex numbers, non-negative integers, positive integers, and integers, respectively. We consider a \(\mathbb{Z}_{2}\)-graded vector space \(V=V_{\bar{0}}\oplus V_{\bar{1}}\), where an element \(u\in V_{\bar{0}}\) (respectively, \(u\in V_{\bar{1}}\)) is called even (respectively, odd). We define \(|u|=0\) if \(u\) is even and \(|u|=1\) if \(u\) is odd. The elements in \(V_{\bar{0}}\) or \(V_{\bar{1}}\) are referred to as homogeneous, and whenever \(|u|\) is used, it means that \(u\) is homogeneous. ## 2. Notations and preliminaries In this section, we will review the notations and results associated with the N=1 BMS superalgebra. **Definition 2.1** ([5]).: _The_ **N=1 BMS superalgebra**__ \[\mathfrak{g}=\bigoplus_{n\in\mathbb{Z}}\mathbb{C}L_{n}\oplus\bigoplus_{n\in \mathbb{Z}}\mathbb{C}M_{n}\oplus\bigoplus_{n\in\mathbb{Z}+\frac{1}{2}} \mathbb{C}Q_{n}\oplus\mathbb{C}\mathbf{c}_{1}\oplus\mathbb{C}\mathbf{c}_{2}\] _is a Lie superalgebra, where_ \[\mathfrak{g}_{\bar{0}}=\bigoplus_{n\in\mathbb{Z}}\mathbb{C}L_{n}\oplus\bigoplus_{n \in\mathbb{Z}}\mathbb{C}M_{n}\oplus\mathbb{C}\mathbf{c}_{1}\oplus\mathbb{C} \mathbf{c}_{2},\quad\mathfrak{g}_{\bar{1}}=\bigoplus_{r\in\mathbb{Z}+\frac{1}{ 2}}\mathbb{C}Q_{r},\] _with the following commutation relations:_ \[[L_{m},L_{n}]=(m-n)L_{m+n}+\frac{1}{12}\delta_{m+n,0}(m^{3}-m) \mathbf{c}_{1},\] \[[L_{m},M_{n}]=(m-n)M_{m+n}+\frac{1}{12}\delta_{m+n,0}(m^{3}-m) \mathbf{c}_{2},\] \[[Q_{r},Q_{s}]=2M_{r+s}+\frac{1}{3}\delta_{r+s,0}\left(r^{2}-\frac {1}{4}\right)\mathbf{c}_{2},\] \[[L_{m},Q_{r}]=\left(\frac{m}{2}-r\right)Q_{m+r},\] \[[M_{m},M_{n}]=[M_{n},Q_{r}]=0,\] \[[\mathbf{c}_{1},\mathfrak{g}]=[\mathbf{c}_{2},\mathfrak{g}]=0,\] _for any \(m,n\in\mathbb{Z},r,s\in\mathbb{Z}+\frac{1}{2}\)._ Note that the even part \(\mathfrak{g}_{\bar{0}}\) corresponds to the BMS algebra, which is also called the Lie algebra \(W(2,2)\) ([44]). Additionally, the subalgebra \(\mathrm{Vir}=\bigoplus_{n\in\mathbb{Z}}\mathbb{C}L_{n}\oplus\mathbb{C}\mathbf{ c}_{1}\) represents the Virasoro algebra. The N=1 BMS superalgebra \(\mathfrak{g}\) has a close relation with the N=1 Neveu-Schwarz algebra \[\mathfrak{ns}=\bigoplus_{n\in\mathbb{Z}}\mathbb{C}L_{n}\oplus\bigoplus_{r\in \mathbb{Z}+\frac{1}{2}}\mathbb{C}G_{r}\oplus\mathbb{C}\mathbf{c}\] which satisfies the following commutation relations: \[[L_{m},L_{n}] =(m-n)L_{m+n}+\delta_{m,-n}\frac{m^{3}-m}{12}\mathbf{c},\] \[[L_{m},G_{r}] =\left(\frac{m}{2}-r\right)G_{m+r},\] \[[G_{r},G_{s}] =2L_{r+s}+\frac{1}{3}\delta_{r+s,0}\left(r^{2}-\frac{1}{4} \right)\mathbf{c},\] \[[\mathfrak{ns},\mathbf{c}] =0,\] for all \(m,n\in\mathbb{Z},r,s\in\mathbb{Z}+\frac{1}{2}\), where \[|L_{n}|=\overline{0},\quad|G_{r}|=\overline{1},\quad|\mathbf{c}|=\overline{0}.\] Let \(A\) denote the truncated polynomial algebra \(\mathbb{C}[t]/(t^{3})\). Let \(\mathfrak{ns}\otimes A\) denote the map Lie superalgebra with \[[x\otimes a,y\otimes b]=[x,y]\otimes ab,\quad x,y\in\mathfrak{ns},a,b\in A.\] Then there is a natural embedding \(\iota:\mathfrak{g}\to\mathfrak{ns}\otimes A\): \[L_{m} \mapsto L_{m}\otimes 1,\] \[M_{m} \mapsto L_{m}\otimes t^{2},\] \[Q_{r} \mapsto G_{r}\otimes t,\] \[\mathbf{c}_{1} \mapsto\mathbf{c}\otimes 1,\] \[\mathbf{c}_{2} \mapsto\mathbf{c}\otimes t.\] The N=1 BMS superalgebra \(\mathfrak{g}\) has an anti-involution \(\bar{\ }\): \(\mathfrak{g}\to\mathfrak{g}\) given by \[\overline{L_{n}}=L_{-n},\quad\overline{M_{n}}=M_{-n},\quad\overline{Q_{r}}=Q_ {-r},\quad\overline{\mathbf{c}_{1}}=\mathbf{c}_{1},\quad\overline{\mathbf{c}_ {2}}=\mathbf{c}_{2}\] for \(n\in\mathbb{Z},r\in\mathbb{Z}+\frac{1}{2}\). This anti-involution induces an anti-involution on the enveloping algebra \(U(\mathfrak{g})\) naturally. Moreover, \(\mathfrak{g}\) is equipped with a \(\frac{1}{2}\mathbb{Z}\)-grading by the eigenvalues of the adjoint action of \(L_{0}\): \[\mathfrak{g}=\bigoplus_{m\in\frac{1}{2}\mathbb{Z}}\mathfrak{g}_{m},\] where \[\mathfrak{g}_{m}=\begin{cases}\mathbb{C}L_{m}\oplus\mathbb{C}M_{m}\oplus \mathbb{C}\delta_{m,0}\mathbf{c}_{1}\oplus\mathbb{C}\delta_{m,0}\mathbf{c}_{2 },&\quad\text{if}\quad m\in\mathbb{Z};\\ \mathbb{C}Q_{m},&\quad\text{if}\quad m\in\mathbb{Z}+\frac{1}{2}.\end{cases}\] It follows that \(\mathfrak{g}\) possesses a triangular decomposition: \(\mathfrak{g}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{-}\), where \[\mathfrak{g}_{\pm} =\bigoplus_{n\in\mathbb{Z}_{+}}\mathbb{C}L_{\pm n}\oplus \bigoplus_{n\in\mathbb{Z}_{+}}\mathbb{C}M_{\pm n}\oplus\bigoplus_{r\in \mathbb{N}+\frac{1}{2}}\mathbb{C}Q_{\pm r},\] \[\mathfrak{g}_{0} =\mathbb{C}L_{0}\oplus\mathbb{C}M_{0}\oplus\mathbb{C}\mathbf{c}_ {1}\oplus\mathbb{C}\mathbf{c}_{2}.\] Let \(\varphi:\mathfrak{g}_{0}\to\mathbb{C}\) be a linear function. Recall that a highest weight vector in a \(U(\mathfrak{g})\)-module \(M\) is a vector \(v\in M\) such that \[xv=\varphi(x)v,\quad\mathfrak{g}_{+}v=0\] for all \(x\in U(\mathfrak{g})\). A **highest weight module** is a \(U(\mathfrak{g})\)-module which is cyclically generated by a highest weight vector. For any \(k\in\mathbb{Z}_{+}\), let \[\mathfrak{g}^{(k)}=\bigoplus_{i\geq k}\mathfrak{g}_{i}\oplus\mathbb{C} \mathbf{c}_{1}\oplus\mathbb{C}\mathbf{c}_{2}.\] Then \(\mathfrak{g}^{(k)}\) is a subalgebra of \(\mathfrak{g}\). Let \(\phi_{k}:\mathfrak{g}^{(k)}\to\mathbb{C}\) be a nontrivial Lie superalgebra homomorphism. It follows that \[\phi_{k}(L_{i})=\phi_{k}(M_{i})=0,\quad\forall i\geq 2k+1,\quad\text{and} \quad\phi_{k}(Q_{j+\frac{1}{2}})=0,\quad\forall j\geq k.\] A \(\mathfrak{g}\)-module \(W\) is called a **Whittaker module** if \(W\) is generated by a vector \(w\in W\), where \(\mathbb{C}w\) is a one-dimensional \(\mathfrak{g}^{(k)}\)-module induced by \(\phi_{k}\), i.e. \(xw=\phi_{k}(x)w\) for any \(x\in\mathfrak{g}^{(k)}\). In this case, \(w\) is called a Whittaker vector. A g-module \(W\) is called **smooth** in the sense that for every \(w\in W\), \[L_{i}w=M_{i}w=Q_{i-\frac{1}{2}}w=0\] for \(i\) sufficiently large. It is clear that both highest weight modules and Whittaker modules over g are smooth. If \(W\) is a g-module on which \(\mathbf{c}_{1},\mathbf{c}_{2}\) act as complex scalars \(c_{1},c_{2}\), respectively, we say that \(W\) is of **central charge** \((c_{1},c_{2})\). For \(c_{1},c_{2}\in\mathbb{C}\), let \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\) denote the category whose objects are smooth g-modules of central charge \((c_{1},c_{2})\). **Definition 2.2**.: _Let \(L\) be a Lie superalgebra, \(V\) an \(L\)-module and \(x\in L\)._ 1. _If for any_ \(v\in V\) _there exists_ \(n\in\mathbb{Z}_{+}\) _such that_ \(x^{n}v=0\)_, then the action of_ \(x\) _on_ \(V\) _is said to be_ **locally nilpotent**_._ 2. _If for any_ \(v\in V\)_,_ \(\dim\left(\sum_{n\in\mathbb{N}}\mathbb{C}x^{n}v\right)<+\infty\)_, then the action of_ \(x\) _on_ \(V\) _is said to be_ **locally finite**_._ 3. _The action of_ \(L\) _on_ \(V\) _is said to be locally nilpotent if for any_ \(v\in V\) _there exists an_ \(n\in\mathbb{Z}_{+}\) _(depending on_ \(v\)_) such that_ \(x_{1}x_{2}\cdots x_{n}v=0\) _for any_ \(x_{1},x_{2},\cdots,x_{n}\in L\)_._ 4. _The action of_ \(L\) _on_ \(V\) _is said to be locally finite if for any_ \(v\in V\)_,_ \(\dim U(L)v<\infty\)_._ Denote by \(\mathbb{M}\) the set of all infinite vectors of the form \(\mathbf{i}:=(\ldots,i_{2},i_{1})\) with entries in \(\mathbb{N}\), satisfying the condition that the number of nonzero entries is finite, and \(\mathbb{M}_{1}:=\{\mathbf{i}\in\mathbb{M}\mid i_{k}=0,1,\ \forall k\in\mathbb{Z}_{+}\}\). For \(\mathbf{i}\in\mathbb{M}\), denote by \(\operatorname{supp}(\mathbf{i}):=\{i_{s}\mid i_{s}\neq 0,s\in\mathbb{Z}_{+}\}\). Let \(\mathbf{0}\) denote the element \((\ldots,0,0)\in\mathbb{M}\) and for \(i\in\mathbb{Z}_{+}\) let \(\epsilon_{i}\) denote the element \[(\ldots,0,1,0,\ldots,0)\in\mathbb{M},\] where \(1\) is in the \(i\)-th position from right. For \(\mathbf{0}\neq\mathbf{i}\in\mathbb{M}\), let \(p\) be the smallest integer such that \(i_{p}\neq 0\) and define \(\mathbf{i}^{\prime}=\mathbf{i}-\epsilon_{p}\). let \(q\) be the largest integer such that \(i_{q}\neq 0\) and define \(\mathbf{i}^{\prime\prime}=\mathbf{i}-\epsilon_{q}\). For \(\mathbf{i},\mathbf{j}\in\mathbb{M},\mathbf{k}\in\mathbb{M}_{1}\), we denote \[\ell(\mathbf{i})=\sum i_{k},\] \[Q^{\mathbf{k}}M^{\mathbf{i}}L^{\mathbf{i}}=\ldots Q_{-2+\frac{1}{2}}^{k_{2}}Q_ {-1+\frac{1}{2}}^{k_{1}}\ldots M_{-2}^{j_{2}}M_{-1}^{j_{1}}\ldots L_{-2}^{i_{2}} L_{-1}^{i_{1}}\in U(\mathfrak{g}_{-}),\] and \[\operatorname{w}(\mathbf{i},\mathbf{j},\mathbf{k})=\sum_{n\in\mathbb{Z}_{+}}i _{n}\cdot n+\sum_{n\in\mathbb{Z}_{+}}j_{n}\cdot n+\sum_{n\in\mathbb{Z}_{+}}k_{ n}\cdot(n-\frac{1}{2}), \tag{2.1}\] which is called the length of \((\mathbf{i},\mathbf{j},\mathbf{k})\) (or the length of \(Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\)). For convenience, denote by \(\mathbf{i}|=\operatorname{w}(\mathbf{i},\mathbf{0},\mathbf{0})\), \(\mathbf{j}|=\operatorname{w}(\mathbf{0},\mathbf{j},\mathbf{0})\) and \(|\mathbf{k}|=\operatorname{w}(\mathbf{0},\mathbf{0},\mathbf{k})\) for given \(Q^{\mathbf{k}}M^{\mathbf{i}}L^{\mathbf{i}}\in U(\mathfrak{g}_{-})\). ## 3. Verma modules over the N=1 BMS superalgebra In this section, we present a determinant formula for the contravariant form on the Verma modules over the N=1 BMS superalgebra g and establish a necessary and sufficient condition for the simplicity of the Verma modules. ### Contravariant forms on Verma modules For \(h_{1},h_{2},c_{1},c_{2}\in\mathbb{C}\), let \(\mathbb{C}\) be a one-dimensional \(\mathfrak{g}_{0}\)-module defined by \[L_{0}1=h_{1}1,\quad M_{0}1=h_{2}1,\quad\mathbf{c}_{1}1=c_{1}1,\quad\mathbf{c}_{ 2}1=c_{2}1.\] Let \(\mathfrak{g}_{+}\) act trivially on \(1\), making \(\mathbb{C}\) be a \((\mathfrak{g}_{0}\oplus\mathfrak{g}_{+})\)-module. The Verma module \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) over \(\mathfrak{g}\) is defined by \[M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})=U(\mathfrak{g})\otimes_{U(\mathfrak{ g}_{0}\oplus\mathfrak{g}_{+})}\mathbb{C}\cong U(\mathfrak{g}_{-})\mathbf{1}, \quad\text{(as vector spaces)},\] where \(\mathbf{1}=1\otimes 1.\) Moreover, \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})=\bigoplus_{n\in\frac{1}{2}\mathbb{ N}}M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n}\), where \[M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n}=\{v\in M_{\mathfrak{g}}(h_{1},h _{2},c_{1},c_{2})\,|\,L_{0}v=(h_{1}+n)v\}.\] According to the Poincare-Birkhoff-Witt (PBW) theorem, every element \(v\) of \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) can be uniquely written in the following form \[v=\sum_{\mathbf{i},\mathbf{j}\in\mathbb{M},\mathbf{k}\in M_{1}}a_{\mathbf{i}, \mathbf{j},\mathbf{k}}Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{1}}\mathbf{1}, \tag{3.1}\] where \(a_{\mathbf{i},\mathbf{j},\mathbf{k}}\in\mathbb{C}\) and only finitely many of them are nonzero. For any \(v\in M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) as in (3.1), we denote by \(\mathrm{supp}(v)\) the set of all \((\mathbf{i},\mathbf{j},\mathbf{k})\in\mathbb{M}_{1}\times\mathbb{M}^{2}\) such that \(a_{\mathbf{i},\mathbf{j},\mathbf{k}}\neq 0.\) The Verma module \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) has a unique maximal submodule \(J(h_{1},h_{2},c_{1},c_{2})\) and the factor module \[L_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})=M_{\mathfrak{g}}(h_{1},h_{2},c_{1}, c_{2})/J(h_{1},h_{2},c_{1},c_{2})\] is simple. Let \(\langle\cdot,\cdot\rangle\) be a \(\mathbb{C}\)-value bilinear form on \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) defined by \[\langle a\mathbf{1},b\mathbf{1}\rangle=\langle\mathbf{1},\mathrm{P}(\bar{a}b )\mathbf{1}\rangle,\quad\langle\mathbf{1},\mathbf{1}\rangle=1,\quad\forall\ a,b\in U( \mathfrak{g}_{-}),\] where \(\mathrm{P}:U(\mathfrak{g})\to U(\mathfrak{g}_{0})\) be the Harish-Chandra projection, i.e., a projection along the decomposition \[U(\mathfrak{g})=U(\mathfrak{g}_{0})\oplus(\mathfrak{g}_{-}U(\mathfrak{g})+U( \mathfrak{g})\mathfrak{g}_{+}).\] Since \(\mathrm{P}(\overline{a}b)=\mathrm{P}(\overline{b}a)\), we know that the bilinear form \(\langle\cdot,\cdot\rangle\) is symmetric, i.e. \[\langle a\mathbf{1},b\mathbf{1}\rangle=\langle b\mathbf{1},a\mathbf{1} \rangle,\quad\forall\ a,b\in U(\mathfrak{g}_{-}). \tag{3.2}\] Also, the form \(\langle\cdot,\cdot\rangle\) is contravariant, i.e., for \(m\in\mathbb{Z},r\in\mathbb{Z}+\frac{1}{2}\), \(u,v\in M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\), we have \[\langle L_{m}u,v\rangle=\langle u,L_{-m}v\rangle,\quad\langle M_{m}u,v\rangle= \langle u,M_{-m}v\rangle,\quad\langle Q_{r}u,v\rangle=\langle u,Q_{-r}v\rangle.\] The simplicity of the Verma module \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) is encoded in the determinant of the contravariant form. Since distinct weight spaces of \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) are orthogonal with respect to \(\langle\,\ \rangle\), the Verma module \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) is simple if and only if the restricted forms \[\langle\cdot,\cdot\rangle_{n}:M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n} \times M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n}\to\mathbb{C}\] are nondegenerate for all \(n\in\frac{1}{2}\mathbb{N}\). For \(n\in\frac{1}{2}\mathbb{N}\), let \[S_{n}=\{Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1}:\text{w}(\mathbf{i}, \mathbf{j},\mathbf{k})=n,\,\forall\,\,\mathbf{i},\mathbf{j}\in\mathbb{M}, \mathbf{k}\in\mathbb{M}_{1}\}.\] From the PBW theorem, \(S_{n}\) is a basis of \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n}\). Hence \[|S_{n}|=p(n):=\#\left\{(\mathbf{i},\mathbf{j},\mathbf{k})\in\mathbb{M}^{2} \times\mathbb{M}_{1}:\text{w}(\mathbf{i},\mathbf{j},\mathbf{k})=n\right\},\] where \(p(n)\) can be counted by the generating series \[\sum_{n\in\frac{1}{2}\mathbb{N}}p(n)q^{n}=\frac{\prod_{k\in\mathbb{N}+\frac{1} {2}}(1+q^{k})}{\prod_{k\geq 1}(1-q^{k})^{2}}.\] Denote by \(>\) the _lexicographical total order_ on \(\mathbb{M}\), defined as follows: for any \(\mathbf{i},\mathbf{j}\in\mathbb{M}\), set \[\mathbf{i}>\mathbf{j}\ \Leftrightarrow\text{ there exists }r\in\mathbb{Z}_{+} \text{ such that }i_{r}>j_{r}\text{ and }i_{s}=j_{s},\ \forall\ s>r.\] Now we can induce a principal total order \(>\) on \(S_{n}\): \[Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1}>Q^{\mathbf{k}_{1}}M^{ \mathbf{j}_{1}}L^{\mathbf{i}_{1}}\mathbf{1}\] if and only if one of the following conditions is satisfied: 1. \(\mathbf{j}|>\mathbf{j}_{1}|\); 2. \(\mathbf{j}|=\mathbf{j}_{1}|\) and \(\mathbf{j}_{1}>\mathbf{j}\); 3. \(\mathbf{j}=\mathbf{j}_{1},\ |\mathbf{k}|>|\mathbf{k}_{1}|\); 4. \(\mathbf{j}=\mathbf{j}_{1},\ |\mathbf{k}|=|\mathbf{k}_{1}|\) and \(\mathbf{k}>\mathbf{k}_{1}\); 5. \(\mathbf{j}=\mathbf{j}_{1},\ \mathbf{k}=\mathbf{k}_{1},\text{ and }\mathbf{i}>\mathbf{i}_{1}\). Sort the elements in \(S_{n}\) by \(>\), for example, \[S_{0} =\{\mathbf{1}\},\] \[S_{\frac{1}{2}} =\{Q_{-\frac{1}{2}}\mathbf{1}\},\] \[S_{1} =\{M_{-1}\mathbf{1},L_{-1}\mathbf{1}\},\] \[S_{\frac{1}{2}} =\left\{Q_{-\frac{1}{2}}M_{-1}\mathbf{1},Q_{-\frac{1}{2}}\mathbf{ 1},Q_{-\frac{1}{2}}L_{-1}\mathbf{1}\right\},\] \[S_{2} =\left\{M_{-1}^{2}\mathbf{1},M_{-2}\mathbf{1},M_{-1}L_{-1} \mathbf{1},Q_{-\frac{1}{2}}Q_{-\frac{1}{2}}\mathbf{1},L_{-2}\mathbf{1},L_{-1} ^{2}\mathbf{1}\right\}.\] ### Simplicity of Verma modules Let \(G_{n}\) be the Gram matrix of the form \(\langle\cdot,\cdot\rangle_{n}\) defined by \[\langle Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1},Q^{\mathbf{k}_{1 }}M^{\mathbf{j}_{1}}L^{\mathbf{i}_{1}}\mathbf{1}\rangle,\] where \(\text{w}(\mathbf{i},\mathbf{j},\mathbf{k})=\text{w}(\mathbf{i}_{1},\mathbf{j }_{1},\mathbf{k}_{1})=n\). For \(u=Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1}\in S_{n}\), we define \[u^{*}=Q^{\mathbf{k}}M^{\mathbf{i}}L^{\mathbf{j}}\mathbf{1}.\] Then \(S_{n}^{*}=\{u^{*}\ |\ u\in S_{n}\}\) is also a basis of \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n}\). It is clear that \[S_{0}^{*} =\{\mathbf{1}\},\] \[S_{\frac{1}{2}}^{*} =\{Q_{-\frac{1}{2}}\mathbf{1}\},\] \[S_{1}^{\,*} =\left\{L_{-1}\mathbf{1},M_{-1}\mathbf{1}\right\},\] \[S_{\frac{1}{2}}^{\,*} =\left\{Q_{-\frac{1}{2}}L_{-1}\mathbf{1},Q_{-\frac{2}{2}}\mathbf{1 },Q_{-\frac{1}{2}}M_{-1}\mathbf{1}\right\},\] \[S_{2}^{\,*} =\left\{L_{-1}^{2}\mathbf{1},L_{-2}\mathbf{1},L_{-1}M_{-1} \mathbf{1},Q_{-\frac{3}{2}}Q_{-\frac{1}{2}}\mathbf{1},M_{-2}\mathbf{1},M_{-1}^ {2}\mathbf{1}\right\}.\] Let \(D_{n}=(d_{ij})\), where \(d_{ij}=\langle b_{i},b_{j}^{*}\rangle\) for \(i,j=1,\ldots p(n)\). Then \[D_{\frac{1}{2}}=(2h_{2}),\quad D_{1}=\begin{pmatrix}2h_{2}&0\\ 2h_{1}&2h_{2}\end{pmatrix},\quad D_{\frac{3}{2}}=\begin{pmatrix}4h_{2}^{2}&0&0 \\ *&2h_{2}+\frac{2}{3}c_{2}&0\\ *&*&4h_{2}^{2}\end{pmatrix},\] \[D_{2}=\begin{pmatrix}8h_{2}^{2}&0&0&0&0&0\\ *&4h_{2}+\frac{1}{2}c_{2}&0&0&0&0\\ *&*&4h_{2}^{2}&0&0&0\\ *&*&*&2h_{2}(2h_{2}+\frac{2}{3}c_{2})&0&0\\ *&*&*&*&4h_{2}+\frac{1}{2}c_{2}&0\\ *&*&*&*&*&8h_{2}^{2}\end{pmatrix}.\] If \(\mathbf{j}\in\mathbb{M}\), \(m\in\mathbb{Z}_{+}\) with \(m\geq k\) for all \(k\in\text{supp}(\mathbf{j})\), then \[L_{m}M^{\mathbf{j}}\mathbf{1}=\left(2mh_{2}+\frac{m^{3}-m}{12}c_{2}\right) \frac{\partial}{\partial M_{-m}}M^{\mathbf{j}}\mathbf{1},\] where \(\frac{\partial}{\partial M_{-m}}\) is the operation of formal partial derivative defined by the Leibniz rule and \[\frac{\partial}{\partial M_{-m}}(M_{-n})=\delta_{m,n},\quad\frac{\partial}{ \partial M_{-m}}(\mathbf{1})=0.\] It follows immediately that if \(\mathbf{i}>\mathbf{j}\), \[\left\langle L^{\mathbf{i}}\mathbf{1},M^{\mathbf{j}}\mathbf{1}\right\rangle=L _{1}^{i_{1}}L_{2}^{i_{2}}\cdots M_{-2}^{j_{2}}M_{-1}^{j_{1}}\mathbf{1}=0. \tag{3.3}\] If \(\mathbf{k}\in\mathbb{M}_{1}\), \(t\in\mathbb{Z}_{+}\) and \(k_{t}=1,k_{t+i}=0\), then \[Q_{t-\frac{1}{2}}Q^{\mathbf{k}}\mathbf{1}=\left(2h_{2}+\frac{4t^{2}-1}{12}c_{2 }\right)Q^{\mathbf{k}^{\prime\prime}}\mathbf{1}.\] It follows immediately that if \(\mathbf{i}>\mathbf{i}_{1}\), \[\left\langle Q^{\mathbf{i}}\mathbf{1},Q^{i_{1}}\mathbf{1}\right\rangle=0. \tag{3.4}\] **Lemma 3.1**.: _The matrix \(D_{n}\) is lower triangular for any \(n\in\frac{1}{2}\mathbb{Z}_{+}\). Moreover, the diagonal entries of \(D_{n}\) are_ \[d_{ii}=\prod_{r\in\text{supp}(\mathbf{i})}r\left(2h_{2}+\frac{r^{2}-1}{12}c_{2 }\right)^{i_{r}}\prod_{t\in\text{supp}(\mathbf{k})}\left(2h_{2}+\frac{4t^{2}- 1}{12}c_{2}\right)^{k_{t}}\prod_{s\in\text{supp}(\mathbf{j})}s\left(2h_{2}+ \frac{s^{2}-1}{12}c_{2}\right)^{j_{s}},\] _where \(b_{i}=Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1}\in S_{n}\)._ Proof.: Suppose that \(b_{i}=Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1}>Q^{\mathbf{k}_{1}}M^{ \mathbf{j}_{1}}L^{\mathbf{i}_{1}}\mathbf{1}=b_{j}\) with \(Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{1},Q^{\mathbf{k}_{1}}M^{ \mathbf{j}_{1}}L^{\mathbf{i}_{1}}\mathbf{1}\in S_{n}.\) It suffices to show \[\left\langle b_{i},b_{j}^{*}\right\rangle=\left\langle Q^{\mathbf{k}}M^{ \mathbf{j}}L^{\mathbf{i}}\mathbf{1},\left(Q^{\mathbf{k}_{1}}M^{\mathbf{j}_{1}} L^{\mathbf{i}_{1}}\mathbf{1}\right)^{*}\right\rangle=0.\] * If \(|\mathbf{j}|>|\mathbf{j}_{1}|\), then \[0=\left\langle M^{\mathbf{j}}\mathbf{1},L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle =\left\langle\mathbf{1},\overline{M^{\mathbf{j}}}L^{\mathbf{j}_{1}}\mathbf{1} \right\rangle\Rightarrow\overline{M^{\mathbf{j}}}L^{\mathbf{j}_{1}}\mathbf{1}=0.\] It follows that \[d_{ij} =\left\langle Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{ 1},Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}\cdot\overline{Q ^{\mathbf{k}}}\cdot Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}\overline{M^{\mathbf{ j}}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=0.\] * If \(|\mathbf{j}|=|\mathbf{j}_{1}|\) and \(\mathbf{j}<\mathbf{j}_{1}\), it follows from (3.2) and (3.3) that \(\overline{M^{\mathbf{j}}}L^{\mathbf{j}_{1}}\mathbf{1}=\overline{L^{\mathbf{j}_ {1}}}M^{\mathbf{j}}\mathbf{1}=0\). Then \[d_{ij} =\left\langle Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{ 1},Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}\cdot\overline{Q ^{\mathbf{k}}}\cdot Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}L^{\mathbf{j}_{1}} \mathbf{1}\right\rangle\] \[=\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}\cdot\overline{Q ^{\mathbf{k}}}\cdot Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}\overline{M^{\mathbf{ j}}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=0.\] * If \(\mathbf{j}=\mathbf{j}_{1},\ |\mathbf{k}|>|\mathbf{k}_{1}|\) or \(|\mathbf{k}|=|\mathbf{k}_{1}|,\mathbf{k}>\mathbf{k}_{1}\), it is clear that \(\overline{Q^{\mathbf{k}}}Q^{\mathbf{k}_{1}}\mathbf{1}=0\). It follows that \[d_{ij} =\left\langle Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{ 1},Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}M^{\mathbf{i}_{1} }\overline{Q^{\mathbf{k}}}Q^{\mathbf{k}_{1}}\mathbf{1}\right\rangle\left\langle \mathbf{1},\overline{M^{\mathbf{j}}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=0.\] * If \(\mathbf{j}=\mathbf{j}_{1},\ \mathbf{k}=\mathbf{k}_{1},\text{ and }\mathbf{i}> \mathbf{i}_{1}\), it follows from (3.3) that \(\overline{L^{\mathbf{i}}}M^{\mathbf{i}_{1}}\mathbf{1}=0\). Then \[d_{ij} =\left\langle Q^{\mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}\mathbf{ 1},Q^{\mathbf{k}_{1}}M^{\mathbf{i}_{1}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}\cdot\overline{M^ {\mathbf{j}}}\cdot\overline{Q^{\mathbf{k}}}\cdot Q^{\mathbf{k}_{1}}M^{\mathbf{ i}_{1}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}M^{\mathbf{i}_{1} }\mathbf{1}\right\rangle\left\langle\mathbf{1},\overline{Q^{\mathbf{k}}}Q^{ \mathbf{k}_{1}}\mathbf{1}\right\rangle\left\langle\mathbf{1},\overline{M^{ \mathbf{j}}}L^{\mathbf{j}_{1}}\mathbf{1}\right\rangle\] \[=0.\] Moreover, \[d_{ii} =\left\langle\mathbf{1},\overline{L^{\mathbf{i}}}M^{\mathbf{i}} \right\rangle\left\langle\mathbf{1},\overline{Q^{\mathbf{k}}}Q^{\mathbf{k}} \mathbf{1}\right\rangle\left\langle\mathbf{1},\overline{M^{\mathbf{j}}}L^{ \mathbf{j}}\mathbf{1}\right\rangle\] \[=\prod_{r\in\mathrm{supp}(\mathbf{i})}r\left(2h_{2}+\frac{r^{2}-1 }{12}c_{2}\right)^{i_{r}}\prod_{t\in\mathrm{supp}(\mathbf{k})}\left(2h_{2}+ \frac{4t^{2}-1}{12}c_{2}\right)^{k_{t}}\prod_{s\in\mathrm{supp}(\mathbf{j})}s \left(2h_{2}+\frac{s^{2}-1}{12}c_{2}\right)^{i_{s}}\] for \(i=1,2,\ldots p(n)\). We are now in the position to state the main result in this section: **Theorem 3.2**.: _The Verma module \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) is simple if and only if_ \[h_{2}+\frac{i^{2}-1}{24}c_{2}\neq 0,\quad\forall\ i\in\mathbb{Z}_{+}.\] Proof.: For all \(n\in\frac{1}{2}\mathbb{N}\), the restricted form \[\langle\cdot,\cdot\rangle_{n}:M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n} \times M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})_{n}\to\mathbb{C}\] is nondegenerate if and only if the determinant of the Gram matrix \(G_{n}\) is nonzero, if and only if \(\det D_{n}\neq 0\). By applying Lemma 3.1, we can directly conclude that \(\det D_{n}=\prod_{i=1}^{p(n)}d_{ii}\neq 0\) if and only if \[h_{2}+\frac{i^{2}-1}{24}c_{2}\neq 0,\quad\forall\ i\in\mathbb{Z}_{+}.\] As a corollary of Theorem 3.2, we know that \(M_{\mathfrak{g}}(0,0,c_{1},c_{2})\) is not simple. In fact, \(Q_{-\frac{1}{2}}\mathbf{1}\) is a singular vector of \(M(0,0,c_{1},c_{2})\), and then \(L_{-1}\mathbf{1}\) is a subsingular vector. The _vaccum module_ for \(\mathfrak{g}\) is the quotient module \[V_{\mathfrak{g}}(c_{1},c_{2})=M_{\mathfrak{g}}(0,0,c_{1},c_{2})/J,\] where \(J\) is the submodule of \(V_{\mathfrak{g}}(0,0,c_{1},c_{2})\) generated by \(L_{-1}\mathbf{1}\). Denote by \(\mathds{1}:=\mathbf{1}+J\) the vacuum vector of \(V_{\mathfrak{g}}(c_{1},c_{2})\). Using a similar argument as Theorem 3.2, one obtains the following criterion of simplicity of the vacuum module: **Theorem 3.3**.: _The vaccum module \(V_{\mathfrak{g}}(c_{1},c_{2})\) is simple if and only if \(c_{2}\neq 0\)._ ## 4. Classifications of simple smooth \(\mathfrak{g}\)-modules In this section, we construct and classify all simple smooth \(\mathfrak{g}\)-modules of central charge \((c_{1},c_{2})\) under certain conditions. ### Constructing simple smooth \(\mathfrak{g}\)-modules For any \(m\in\mathbb{N},n\in\mathbb{Z}\) with \(n\leq 1\), let \[\mathfrak{g}^{(m,n)} = \bigoplus_{i\geq m}\mathbb{C}L_{i}\oplus\bigoplus_{j\geq n} \mathbb{C}M_{j}\oplus\bigoplus_{k\geq 1}\mathbb{C}Q_{k-\frac{1}{2}} \oplus\mathbb{C}\mathbf{c}_{1}\oplus\mathbb{C}\mathbf{c}_{2}.\] Note that \(\mathfrak{g}^{(m,n)}\) is a subalgebra of \(\mathfrak{g}\) and \(\mathfrak{g}^{(0,0)}=\mathfrak{g}_{+}\oplus\mathfrak{g}_{0}\). Given \(q\in\mathbb{N}\) and a \(\mathfrak{g}^{(0,-q)}\)-module \(V\) of central charge \((c_{1},c_{2})\). We consider the following induced \(\mathfrak{g}\)-module \[\operatorname{Ind}_{q}(V)=\operatorname{Ind}_{\mathfrak{g}^{(0,-q)}}^{ \mathfrak{g}}V.\] According to the PBW theorem, every element \(v\) of \(\operatorname{Ind}_{q}(V)\) can be uniquely written in the following form \[v=\sum_{\mathbf{i}\mathbf{j}\in\mathbb{M},\mathbf{k}\in\mathbb{M}_{1}}Q^{ \mathbf{k}}M^{\mathbf{j}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}}, \tag{4.1}\] where all \(v_{\mathbf{i},\mathbf{j},\mathbf{k}}\in V\) and only finitely many of them are nonzero. Denote by \(\prec\) the _reverse lexicographical total order_ on \(\mathbb{M}\), defined as follows: for any \(\mathbf{i},\mathbf{k}\in\mathbb{M}\), set \[\mathbf{i}\prec\mathbf{j}\ \Leftrightarrow\ \text{there exists}\ r\in\mathbb{Z}_{+} \text{ such that }i_{r}<j_{r}\text{ and }i_{s}=j_{s},\ \forall\ 1\leq s<r.\] Next we introduce a principal total order on \(\mathbb{M}\times\mathbb{M}_{1}\) as follows. Denoted by \(\prec\): \((\mathbf{j},\mathbf{k})\prec(\mathbf{j}_{1},\mathbf{k}_{1})\) if and only if one of the following conditions is satisfied: (1) \(\mathbf{k}<\mathbf{k}_{1}\); (2) \(\mathbf{k}=\mathbf{k}_{1}\) and \(\mathbf{j}<\mathbf{j}_{1}\). Now we can induce a principal total order on \(\mathbb{M}^{2}\times\mathbb{M}_{1}\), still denoted by \(\prec\): \((\mathbf{i},\mathbf{j},\mathbf{k})\prec(\mathbf{i}_{1},\mathbf{j}_{1},\mathbf{ k}_{1})\) if and only if one of the following conditions is satisfied: (1) \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k})<\mathrm{w}(\mathbf{i}_{1}, \mathbf{j}_{1},\mathbf{k}_{1})\); (2) \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k})=\mathrm{w}(\mathbf{i}_{1}, \mathbf{j}_{1},\mathbf{k}_{1})\) and \(\mathbf{i}_{1}\prec\mathbf{i}\); (3) \(\mathrm{w}(\mathbf{i},\mathbf{j},\mathbf{k})=\mathrm{w}(\mathbf{i}_{1}, \mathbf{j}_{1},\mathbf{k}_{1})\) and \(\mathbf{i}=\mathbf{i}_{1}\) and \((\mathbf{j},\mathbf{k})\prec(\mathbf{j}_{1},\mathbf{k}_{1})\). For any \(v\in\mathrm{Ind}_{q}(V)\) as in (3.1), we denote by \(\mathrm{supp}(v)\) the set of all \((\mathbf{i},\mathbf{j},\mathbf{k})\in\mathbb{M}^{2}\times\mathbb{M}_{1}\) such that \(v_{\mathbf{i},\mathbf{j},\mathbf{k}}\neq 0\). For a nonzero \(v\in\mathrm{Ind}_{q}(V)\), we write \(\deg(v)\) the maximal (with respect to the principal total order on \(\mathbb{M}^{2}\times\mathbb{M}_{1}\)) element in \(\mathrm{supp}(v)\), called the degree of \(v\). Note that here and later we make the convention that \(\deg(v)\) is only for \(v\neq 0\). **Theorem 4.1**.: _For \(q\in\mathbb{N}\), \(c_{1},c_{2}\in\mathbb{C}\), let \(V\) be a simple \(\mathfrak{g}^{(0,-q)}\)-module of central charge \((c_{1},c_{2})\). Assume that there exists \(t\in\mathbb{N}\) satisfying the following two conditions:_ * _the action of_ \(M_{t}\) _on_ \(V\) _is injective if_ \(t>0\) _or the action of_ \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) _on_ \(V\) _is injective for all_ \(n\in\mathbb{Z}^{*}\) _if_ \(t=0\)_;_ * \(M_{i}V=0\) _for all_ \(i>t\) _and_ \(L_{j}V=0\) _for all_ \(j>t+q\)_._ _Then_ * \(Q_{i-\frac{1}{2}}V=0\) _for all_ \(i>t\)_;_ * _The induced module_ \(\mathrm{Ind}_{q}(V)\) _is a simple module in_ \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\)_._ Proof.: (i) For \(i>t\geq 0\), we have \(Q_{i-\frac{1}{2}}^{2}V=M_{2i-1}V=0\) by (b). If \(Q_{j-\frac{1}{2}}V=0\) we are done. Otherwise, \(W=Q_{j-\frac{1}{2}}V\) is a proper subspace of \(V\). For \(j\in\mathbb{Z}_{+}\), we have \[Q_{j-\frac{1}{2}}W=Q_{j-\frac{1}{2}}Q_{i-\frac{1}{2}}V=2M_{i+j-1}V-Q_{i-\frac{ 1}{2}}Q_{j-\frac{1}{2}}V\subset Q_{i-\frac{1}{2}}V=W.\] Clearly \(L_{n}W\subset W\) and \(M_{m}W\subset W\) for any \(n\in\mathbb{N}\) and \(m\in\mathbb{Z}\) with \(m\geq-q\). It follows that \(W\) is a proper \(\mathfrak{g}^{(0,-q)}\)-submodule of \(V\). Then \(W=Q_{i-\frac{1}{2}}V=0\) for \(i>t\) since \(V\) is simple. (ii) Since the actions of \(L_{i}\), \(M_{i},Q_{i-\frac{1}{2}}\) on \(\mathrm{Ind}_{q}(V)\) for all \(i>t+q\) are locally nilpotent, \(\mathrm{Ind}_{q}(V)\) is an object in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\). Suppose that \(W\) is a nonzero proper \(\mathfrak{g}\)-submodule of \(\mathrm{Ind}_{q}(V)\). Now we are going to deduce some contradictions. Take an element \(v\in W\setminus V\) such that \(\deg(v)=(\mathbf{i},\mathbf{j},\mathbf{k})\) is minimal. Write \[v=\sum_{\mathbf{i},\mathbf{j}\in\mathbb{M},\,\mathbf{k}\in\mathbb{M}_{1}}Q^{ \mathbf{k}}M^{\mathbf{i}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}}, \tag{4.2}\] where all \(v_{\mathbf{i},\mathbf{j},\mathbf{k}}\in V\) and only finitely many of them are nonzero. Note that there exists \(t\in\mathbb{N}\) satisfying the following condition: the action of \(M_{t}\) on \(V\) is injective if \(t>0\) or the action of \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) on \(V\) is injective for all \(n\in\mathbb{Z}^{*}\) if \(t=0\). **Claim 1.**\(\underline{\mathbf{i}}=\mathbf{0}\). If \(\underline{\mathbf{i}}\neq\mathbf{0}\), then \(\underline{\hat{i}}=\min\{s\mid i_{s}\neq 0\}>0\), we shall prove that \(\deg(M_{\underline{\mathbf{i}}+t}v)\prec(\underline{\mathbf{i}},\underline{ \mathbf{j}},\underline{\mathbf{k}})\). It contradicts to the choice of \(v\). Note that \(M_{\underline{\mathbf{i}}+t}v_{\mathbf{i},\mathbf{j},\mathbf{k}}=0\) for any \((\mathbf{i},\mathbf{j},\mathbf{k})\in\operatorname{supp}(v)\). **Case 1.**\(t>0\). One can easily check that \[\deg(M_{\underline{\mathbf{i}}+t}Q^{\underline{\mathbf{k}}}M^{\underline{ \mathbf{i}}}L^{\underline{\mathbf{i}}}v_{\underline{\mathbf{i}},\underline{ \mathbf{j}},\underline{\mathbf{k}}})=\deg(Q^{\underline{\mathbf{k}}}M^{ \underline{\mathbf{i}}}[M_{\underline{\mathbf{i}}+t},L^{\underline{\mathbf{i }}}]v_{\underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}}})=( \underline{\mathbf{i}}^{\prime},\underline{\mathbf{j}},\underline{\mathbf{k}}).\] Here \(M_{t}v_{\mathbf{i},\underline{\mathbf{j}},\underline{\mathbf{k}}}\neq 0\) by (a). For \((\mathbf{i},\mathbf{j},\mathbf{k})\in\operatorname{supp}(v)\) with \((\mathbf{i},\mathbf{j},\mathbf{k})\prec(\underline{\mathbf{i}},\underline{ \mathbf{j}},\underline{\mathbf{k}})\), if \(\operatorname{w}(\mathbf{i},\mathbf{j},\mathbf{k})<\operatorname{w}(\underline {\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}})\), then \[\deg M_{\underline{\mathbf{i}}+t}Q^{\underline{\mathbf{k}}}M^{\underline{ \mathbf{i}}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}}\prec( \underline{\mathbf{i}}^{\prime},\underline{\mathbf{j}},\underline{\mathbf{k}}).\] Now we suppose that \(\operatorname{w}(\mathbf{i},\mathbf{j},\mathbf{k})=\operatorname{w}(\underline {\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}})\) and \(\mathbf{i}\prec\underline{\mathbf{i}}\). If \(\hat{i}>\underline{\hat{i}}\), then \(\operatorname{w}(\mathbf{i}^{\prime},\mathbf{j},\mathbf{k})<\operatorname{w} (\underline{\mathbf{i}}^{\prime},\underline{\mathbf{j}},\underline{\mathbf{k}})\). Hence \[\deg(M_{\underline{\mathbf{i}}+t}Q^{\underline{\mathbf{k}}}M^{\underline{ \mathbf{i}}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}})\prec( \mathbf{i}^{\prime},\mathbf{j},\mathbf{k})\prec(\underline{\mathbf{i}}^{ \prime},\underline{\mathbf{j}},\underline{\mathbf{k}}).\] If \(\hat{i}=\underline{\hat{i}}\), then \[\deg(M_{\underline{\mathbf{i}}+t}Q^{\underline{\mathbf{k}}}M^{\underline{ \mathbf{i}}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}})=(\mathbf{i}^{ \prime},\mathbf{j},\mathbf{k})\prec(\underline{\mathbf{i}}^{\prime}, \underline{\mathbf{j}},\underline{\mathbf{k}}).\] If \(\operatorname{w}(\mathbf{i},\mathbf{j},\mathbf{k})=\operatorname{w}( \underline{\mathbf{i}},\underline{\mathbf{j}},\underline{\mathbf{k}})\) and \(\mathbf{i}=\underline{\mathbf{i}}\), it is easy to see that \[\deg(M_{\hat{i}+t}Q^{\underline{\mathbf{k}}}M^{\underline{ \mathbf{i}}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}})=(\mathbf{i}^{ \prime},\mathbf{j},\mathbf{k})\preceq(\underline{\mathbf{i}}^{\prime}, \underline{\mathbf{j}},\underline{\mathbf{k}}).\] So Claim 1 holds in this case. **Case 2**: \(t=0\). Set \(|v|_{\operatorname{Vir}}=\max\{|\mathbf{i}|:(\mathbf{i},\mathbf{j},\mathbf{k })\in\operatorname{supp}(v)\}\) and \(\ell=\max\{\ell(\mathbf{i}):(\mathbf{i},\mathbf{j},\mathbf{k})\in \operatorname{supp}(v),|\mathbf{i}|=|v|_{\operatorname{Vir}}\}\). Now we rewrite \(v\) as \[v=\sum_{|\mathbf{i}|=|v|_{\operatorname{Vir}}}L^{\mathbf{i}}u_{ \mathbf{i}}+u, \tag{4.3}\] where \(|u|_{\operatorname{Vir}}<|v|_{\operatorname{Vir}}\) and \(u_{\mathbf{i}}\in U(\operatorname{b})V\), \(\mathfrak{b}\) is the subalgebra of \(\mathfrak{g}\) generated by \(\{M_{i},Q_{i-\frac{1}{2}},\mathbf{c}_{2}:i\in\mathbb{Z}\}\). Note that \(M_{i}u_{\mathbf{i}}=0\) for any \(i\in\mathbb{Z}_{+}\). For \(\mathbf{i}=(\cdots,i_{2},i_{1})\in\mathbb{M}\), denote by \[(L^{\mathbf{i}})^{\#}:=M_{1}^{i_{1}}M_{2}^{i_{2}}\cdots.\] Set \[\mathbb{S}:=\{\mathbf{i}:(\mathbf{i},\mathbf{j},\mathbf{k})\in \operatorname{supp}(v),|\mathbf{i}|=|v|_{\operatorname{Vir}},\ell(\mathbf{i}) =\ell\}.\] Choose the maximal \(\mathbf{i}_{\max}\in\mathbb{S}\) with respect to "\(\prec\)". By action of \((L^{\mathbf{i}_{\max}})^{\#}\) on (4.3), we get \[(L^{\mathbf{i}_{\max}})^{\#}v=(L^{\mathbf{i}_{\max}})^{\#}\cdot L^{ \mathbf{i}_{\max}}u_{\mathbf{i}_{\max}}\neq 0\] since \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) on \(V\) is injective for all \(n\in\mathbb{Z}^{*}\), and \((L^{\mathbf{i}_{\max}})^{\#}\cdot L^{\mathbf{i}}u_{\mathbf{i}}=0\) if \(\mathbf{i}<\mathbf{i}_{\max}\) and \((L^{\mathbf{i}_{\max}})^{\#}u=0\) for \(u\) in (4.3) (see [33, Theorem 5.2]). This is a contradiction to the choice of \(v\). So Claim 1 holds in all cases. Since \([M_{i},Q^{\mathbf{k}}M^{\underline{i}}]=0\) for any \(i\in\mathbb{Z},\mathbf{j}\in\mathbb{M},\mathbf{k}\in\mathbb{M}_{1}\), repeating the above steps, we can suppose that \[v=\sum_{(\underline{\mathbf{j}},\mathbf{k})\in\mathbb{M}\times\mathbb{M}_{1}}Q ^{\mathbf{k}}M^{\underline{i}}v_{\mathbf{j},\mathbf{k}}, \tag{4.4}\] where all \(v_{\mathbf{j},\mathbf{k}}\in V\) and only finitely many of them are nonzero. Note that \(Q_{\underline{k}+t-\frac{1}{2}}V=0\). **Claim 2.**\(\underline{\mathbf{k}}=\mathbf{0}\). If \(\underline{\mathbf{k}}\neq 0\), one can easily check that \[\deg(Q_{\underline{k}+t-\frac{1}{2}}v)=\deg\left(\sum_{(\underline{\mathbf{j} },\mathbf{k})\in\mathbb{M}\times\mathbb{M}_{1}}Q_{\underline{k}+t-\frac{1}{2} }Q^{\mathbf{k}}M^{\underline{i}}v_{\mathbf{j},\mathbf{k}}\right)=\deg\left( \sum_{(\underline{\mathbf{j}},\mathbf{k})\in\mathbb{M}\times\mathbb{M}_{1}}Q ^{\mathbf{k}^{\prime\prime}}M^{\underline{i}}v_{\mathbf{j},\mathbf{k}}\right) \prec(\mathbf{0},\underline{\mathbf{j}},\underline{\mathbf{k}}). \tag{4.5}\] It also contradicts the choice of \(v\). Next suppose that \[v=\sum_{\mathbf{j}\in\mathbb{M}}M^{\underline{i}}v_{\mathbf{j}}, \tag{4.6}\] where all \(v_{\mathbf{j}}\in V\) and only finitely many of them are nonzero. If \(\underline{\mathbf{j}}\neq 0\), one can easily check that \[\deg(L_{j+t}v)\prec(\mathbf{0},\underline{\mathbf{j}},\mathbf{0}).\] It also contradicts the choice of \(v\). Then the theorem holds. **Remark 4.2**.: _If \(t=q=0\) in Theorem 4.1, then \(\mathfrak{g}_{+}V=0\). Since \(\mathfrak{g}^{(0,0)}/\mathfrak{g}_{+}\) is abelian, \(V\) has to be one-dimensional and then \(\operatorname{Ind}_{0}(V)\) is a highest weight module over \(\mathfrak{g}\)._ ### Characterization of simple smooth \(\mathfrak{g}\)-modules In this subsection, we give a characterization of simple \(\mathfrak{g}\)-modules in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\). **Proposition 4.3**.: _Let \(W\) be a simple \(\mathfrak{g}\)-module in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\) such that the action of \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) is injective on \(W\) for any \(n\in\mathbb{Z}^{*}\). Then the following statements are equivalent:_ 1. _There exists_ \(l\in\mathbb{Z}_{+}\) _such that the actions of_ \(L_{i},M_{i},Q_{i-\frac{1}{2}}\) _for all_ \(i\geq l\) _on_ \(W\) _are locally finite._ 2. _There exists_ \(l\in\mathbb{Z}_{+}\) _such that the actions of_ \(L_{i},M_{i},Q_{i-\frac{1}{2}}\) _for all_ \(i\geq l\) _on_ \(W\) _are locally nilpotent._ 3. \(W\cong\operatorname{Ind}_{q}(V)\) _for some_ \(q\in\mathbb{N}\) _and simple_ \(\mathfrak{g}^{(0,-q)}\)_-module_ \(V\) _such that both conditions_ \((a)\) _and_ \((b)\) _in Theorem_ 4.1 _are satisfied._ Proof.: First we prove \((1)\Rightarrow(3)\). Suppose that \(W\) is a simple \(\mathfrak{g}\)-module and there exists \(l\in\mathbb{Z}_{+}\) such that the actions of \(L_{i},M_{i},Q_{i-\frac{1}{2}},i\geq l\) are locally finite. Thus we can find a nonzero element \(w\in W\) such that \(L_{l}w=\lambda w\) for some \(\lambda\in\mathbb{C}\). For any \(j\geq l\), we denote \[V(j)=\sum_{m\in\mathbb{N}}\mathbb{C}L_{l}^{m}M_{j}w=U(\mathbb{C}L_{l})M_{j}w,\] which are all finite-dimensional. By Definition 2.2, it is clear that \(M_{j+(m+1)t}w\in V(j)\) if \(M_{j+ml}w\in V(j)\). By induction on \(m\), we obtain \(M_{j+ml}w\in V(j)\) for all \(m\in\mathbb{N}\). Hence \(\sum_{m\in\mathbb{N}}\mathbb{C}M_{j+ml}w\) are finite-dimensional for any \(j>l\), and then \[\sum_{i\in\mathbb{N}}\mathbb{C}M_{l+i}w=\mathbb{C}M_{l}w+\sum_{j=l+1}^{2l} \Big{(}\sum_{m\in\mathbb{N}}\mathbb{C}M_{j+ml}w\Big{)}\] is finite-dimensional. Now we can take \(p\in\mathbb{Z}_{+}\) such that \[\sum_{i\in\mathbb{N}}\mathbb{C}M_{l+i}w=\sum_{i=0}^{p}\mathbb{C}M_{l+i}w. \tag{4.7}\] Set \[V^{\prime}:=\sum_{r_{0},\ldots,r_{p}\in\mathbb{N}}\mathbb{C}M_{l}^{r_{0}} \cdots M_{l+p}^{r_{p}}w.\] It is clear that \(V^{\prime}\) is finite-dimensional by the condition (1). It follows that we can choose a minimal \(n\in\mathbb{N}\) such that \[(a_{0}M_{m}+a_{1}M_{m+1}+\cdots+a_{n}M_{m+n})V^{\prime}=0 \tag{4.8}\] for some \(m>l\) and \(a_{i}\in\mathbb{C}\). Applying \(L_{m}\) to (4.8), one has \[(a_{0}[L_{m},M_{m}]+\cdots+a_{n}[L_{m},M_{m+n}])V^{\prime}=0\] since \(L_{m}V^{\prime}\subset V^{\prime}\). Then \[(a_{1}M_{2m+1}+\cdots+a_{n}nM_{2m+n})V^{\prime}=0. \tag{4.9}\] Applying suitable \(L_{k}\)'s to (4.9) again, one has \(n=0\), that is, \[M_{m}V^{\prime}=0 \tag{4.10}\] for some \(m>l\). By action of \(L_{i}\) on (4.10), we have \[M_{m+i}V^{\prime}=0,\ \forall\ i>0. \tag{4.11}\] Similarly, we have \[L_{n+i}V^{\prime}=Q_{n+i-\frac{1}{2}}V^{\prime}=0 \tag{4.12}\] for some \(n>l\) and any \(i>0\). For any \(r,s,t\in\mathbb{Z}\), we consider the following vector space \[N_{r,s,t}=\{v\in W\mid L_{i}v=M_{j}v=Q_{k-\frac{1}{2}}v=0,\quad\forall i>r,j>s, k>t\}.\] Due to that the action of \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) is injective on \(W\) for any \(n\in\mathbb{Z}^{*}\), there exists a smallest nonnegative integer, also saying \(s\), with \(V:=N_{r,s,t}\neq 0\) for some \(r\geq s\). Moreover, we can choose \(t=s\) as Theorem 4.1. Denote \(q=r-s\geq 0\) and \(V=N_{s+q,s,s}\). For any \(i>s+q\), \(j>s,k>s\), it follows that \[L_{i}Q_{n-\frac{1}{2}}v=(\frac{i+1}{2}-n)Q_{n+i-\frac{1}{2}}v=0,\quad M_{j}Q_{n -\frac{1}{2}}v=0,\quad Q_{k-\frac{1}{2}}(Q_{n-\frac{1}{2}}v)=M_{k+n-1}^{\prime }v=0,\] for any \(v\in V,n\geq 1\). Clearly, \(Q_{n-\frac{1}{2}}v\in V\) for all \(n\geq 1\). Similarly, we can also obtain \(L_{k}v,M_{k-q}v\in V\) for all \(k\in\mathbb{N}\). Therefore, \(V\) is a \(\mathfrak{g}^{(0,-q)}\)-module. If \(s\geq 1\), by the definition of \(V\), the action of \(M_{s}\) on \(V\) is injective. Since \(W\) is simple and generated by \(V\), there exists a canonical surjective map \[\pi:\operatorname{Ind}_{q}(V)\to W,\quad\pi(1\otimes v)=v,\quad\forall v\in V.\] Next, we only need to show that \(\pi\) is also injective, that is to say, \(\pi\) as the canonical map is bijective. Let \(K=\ker(\pi)\). Obviously, \(K\cap V=0\). If \(K\neq 0\), we can choose a nonzero vector \(v\in K\) such that \(\deg(v)=(\mathbf{i},\mathbf{j},\mathbf{k})\) is minimal possible. Note that \(K\) is a \(\mathfrak{g}\)-submodule of \(\operatorname{Ind}_{q}(V)\). By the proof of Theorem 4.1, we can obtain another vector \(u\in K\) with \(\deg(u)\prec(\mathbf{i},\mathbf{j},\mathbf{k})\), which is a contradiction. This forces \(K=0\), that is, \(W\cong\operatorname{Ind}_{q}(V)\). Then \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module. If \(s=0\) and \(r\geq 0\,(q=r)\). Similar to the argument above for \(s\geq 1\) and using the proof of Theorem 4.1 and the assumption that the action of \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) on \(V\) is injective for any \(n\in\mathbb{Z}^{*}\), we can deduce that \(W\cong\operatorname{Ind}_{q}(V)\) and \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module. Moreover, \((3)\Rightarrow(2)\) and \((2)\Rightarrow(1)\) are clear. This completes the proof. **Lemma 4.4**.: _Let \(W\) be a simple smooth \(\mathfrak{g}\)-module in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\). Then there exists \(N\in\mathbb{Z}_{+}\) such that the actions of \(L_{i},M_{i},Q_{i-\frac{1}{2}}\) for all \(i\geq N\) on \(W\) are locally nilpotent._ Proof.: Let \(0\neq v\in W\), there exists \(s\in\mathbb{Z}_{+}\) such that \(L_{i}v=M_{i}v=Q_{i-\frac{1}{2}}v=0\) for all \(i\geq s\). For \(W=U(\mathfrak{g})v\), every element \(w\) of \(W\) can be uniquely written in the following form \[w=\sum_{\mathbf{k}\in\mathbb{M}_{1},\mathbf{i},\mathbf{j}\in\mathbb{M}}Q^{ \mathbf{k}}M^{\mathbf{i}}L^{\mathbf{i}}v_{\mathbf{i},\mathbf{j},\mathbf{k}},\] where \(v_{\mathbf{i},\mathbf{j},\mathbf{k}}\in U(\mathfrak{g}_{+})v\). Then, for \(i\geq s\), there exists \(N\) sufficiently large such that \[L_{i}^{N}w=M_{i}^{N}w=Q_{i-\frac{1}{2}}^{N}w=0.\] From Proposition 4.3 and Lemma 4.4, we have **Theorem 4.5**.: _For \(c_{1},c_{2}\in\mathbb{C}\), let \(W\) be a simple \(\mathfrak{g}\)-module in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\) with \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) being injective on \(V\) for any \(n\in\mathbb{Z}^{*}\). Then \(W\) is isomorphic to a simple module of the form \(\operatorname{Ind}_{q}(V)\) in Theorem 4.1, where \(q\in\mathbb{N}\) and \(V\) is a simple \(\mathfrak{g}^{(0,-q)}\)-module._ **Remark 4.6**.: _Let \(M\) be a simple smooth \(\operatorname{Vir}\)-module with central charge \(c_{1}\). We assume that \(M\) has trivial actions of \(M_{i}\), \(Q_{i-\frac{1}{2}}\), and \(\mathbf{c}_{2}\) for any \(i\in\mathbb{Z}\). As a result, \(M\) is also a simple smooth \(\mathfrak{g}\)-module with central charge \((c,0)\). However, it should be noted that the action of \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) is not injective._ ### Examples of smooth \(\mathfrak{g}\)-modules For \(q,t\in\mathbb{N}\), set \[\mathfrak{g}_{q,t}:=\bigoplus_{i>q+t}\mathbb{C}L_{i}\oplus\bigoplus_{j>t}( \mathbb{C}M_{j}\oplus\mathbb{C}Q_{j-\frac{1}{2}}),\] we see that \(\mathfrak{g}_{q,t}\) is an ideal of \(\mathfrak{g}^{(0,-q)}\). Consider the quotient \(\mathfrak{a}^{(q,t)}=\mathfrak{g}^{(0,-q)}/\mathfrak{g}_{q,t}\), which is a finite-dimensional solvable Lie superalgebra. Theorem 4.5 gives a classification of simple modules in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\) with \(M_{0}+\frac{1}{24}(n^{2}-1)\mathbf{c}_{2}\) injective action for any \(n\in\mathbb{Z}^{*}\). In order to obtain these simple \(\mathfrak{g}\)-modules, we have to use simple modules over \(\mathfrak{a}^{(q,t)}\) for all \(q,t\in\mathbb{N}\). The classification problem for simple \(\mathfrak{a}^{(q,t)}\)-modules remains unsolved, as far as we know, except when \((q,t)=(0,0),(1,0),(2,0)\). #### 4.3.1. Verma module For \((q,t)=(0,0)\), the algebra \(\mathfrak{a}^{(0,0)}\) is commutative and its simple modules are one-dimensional, which leads exactly to Verma modules. In details, for \(h_{i}\), \(c_{i}\in\mathbb{C}\), \(i=1,2\), let \(\mathbb{C}\) be the one-dimensional \(\mathfrak{g}^{(0,0)}\)-module defined by \[L_{0}1=h_{1}1,\quad M_{0}1=h_{2}1,\quad\mathbf{c}_{1}1=c_{1}1,\quad\mathbf{c}_ {2}1=c_{2}1,\quad\mathfrak{g}_{+}1=0.\] It is clear that \(\mathbb{C}\) is a simple \(\mathfrak{g}^{(0,0)}\)-module. It follows from Theorem 4.5 that \(\operatorname{Ind}_{0}(\mathbb{C})\) is a simple \(\mathfrak{g}\)-module in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\) if \(h_{2}+\frac{1}{24}(n^{2}-1)c_{2}\neq 0\) for \(n\in\mathbb{Z}^{*}\), which is exactly the Verma module \(M_{\mathfrak{g}}(h_{1},h_{2},c_{1},c_{2})\) constructed in Theorem 3.2. #### 4.3.2. Whittaker module For any \(k\in\mathbb{Z}_{+}\), let \(\phi_{k}:\mathfrak{g}^{(k)}\to\mathbb{C}\) be a nontrivial Lie superalgebra homomorphism with \(\phi_{k}(\mathbf{c}_{1})=c_{1},\phi_{k}(\mathbf{c}_{2})=c_{2}\). It follows that \[\phi_{k}(L_{i})=\phi_{k}(M_{i})=0,\quad\forall i\geq 2k+1,\quad\text{and} \quad\phi_{k}(Q_{j+\frac{1}{2}})=0,\quad\forall j\geq k.\] Let \(\mathbb{C}w\) be the one-dimensional \(\mathfrak{g}^{(k)}\)-module with \(xw=\phi_{k}(x)w\) for all \(x\in\mathfrak{g}^{(k)}\). The universal Whittaker module \(W(\phi_{k})\) can be defined as \[W(\phi_{k})=U(\mathfrak{g})\otimes_{U(\mathfrak{g}^{(k)})}\mathbb{C}w.\] Let \(V=\operatorname{Ind}_{\mathfrak{g}^{(k)}}^{\mathfrak{g}^{(0,0)}}(\mathbb{C}w)\). Applying similar arguments as in the proof of Theorem 4.1 we can prove that \(V\) is a simple \(\mathfrak{g}^{(0,0)}\)-module if and only if \(\phi_{k}(M_{2k-1})\neq 0\) or \(\phi_{k}(M_{2k})\neq 0\). It follows that \[W(\phi_{k})=\operatorname{Ind}_{\mathfrak{g}^{(0,0)}}^{\mathfrak{g}}(V)= \operatorname{Ind}_{0}(V).\] As an application of Theorem 4.1 (here \((q,t)=(0,2k-1)\) or \((0,2k)\)), we can obtain the following result. **Corollary 4.7** ([14]).: _For \(k\in\mathbb{Z}_{+}\), the universal Whittaker module \(W(\phi_{k})\) is a simple \(\mathfrak{g}\)-module in \(\mathcal{R}_{\mathfrak{g}}(c_{1},c_{2})\) if and only if \(\phi_{k}(M_{2k})\neq 0\) or \(\phi_{k}(M_{2k-1})\neq 0\)._ ## 5. Free field realizations of smooth \(\mathfrak{g}\)-modules In this section, we construct a free field realization of the N=1 BMS superalgebra via the Heisenberg-Clifford algebra \(\mathfrak{h}\). Using this free field realization, we construct some smooth modules over the N=1 BMS superalgebra from smooth modules over the Heisenberg-Clifford superalgebra. ### N=1 BMS vertex superalgebra Recall that a vertex superalgebra (cf. [26, 27]) is a quadruple \((V,\mathds{1},D,Y)\), where \(V\) is a \(\frac{1}{2}\mathbb{Z}\)-graded super vector space \[V=\bigoplus_{n\in\mathbb{Z}\mathbb{Z}}V_{n}=V_{\bar{0}}\oplus V_{\bar{1}}\] with \(V_{\bar{0}}=\sum_{n\in\mathbb{Z}}V_{n}\) and \(V_{\bar{1}}=\sum_{n\in\mathbb{Z}+\frac{1}{2}}V_{n}\), \(\mathds{1}\) is a specified vector called the vacuum of \(V\), \(D\) is an endomorphism of \(V\), and \(Y\) is a linear map \[V \to(\operatorname{End}V)[[z,z^{-1}]],\] \[v \mapsto Y(v,z)=\sum_{n\in\mathbb{Z}}v_{n}z^{-n-1}\quad(v_{n}\in( \operatorname{End}V)_{\bar{v}}),\] satisfying the following conditions for \(u,v\in V,\) and \(m,n\in\mathbb{Z}\) : 1. \(u_{n}v=0\quad\) for \(n\) sufficiently large; 2. \(Y(\mathds{1},z)=\operatorname{Id}_{V}\); 3. \(Y(v,z)\mathds{1}\in V[[z]]\) and \(\lim_{z\to 0}Y(v,z)\mathds{1}=v\); 4. \(\frac{d}{dz}Y(v,z)=Y(Dv,z)=[D,Y(v,z)]\); 5. For \(\mathbb{Z}_{2}\)-homogeneous \(u,v\in V\), the following Jacobi identity holds: \[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y(u,z_{1}) Y(v,z_{2})-(-1)^{|u||v|}z_{0}^{-1}\delta\left(\frac{z_{2}-z_{1}}{-z_{0}} \right)Y(v,z_{2})Y(u,z_{1})\] \[=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y(Y(u,z_{0} )v,z_{2}),\] where \(\delta(z)=\sum_{n\in\mathbb{Z}}z^{n}\) and \((z_{i}-z_{j})^{n}\) is expanded as a formal power series in \(z_{j}\). A homomorphism from a vertex superalgebra \(V_{1}\) to another vertex superalgebra \(V_{2}\) is a linear map \(f:V_{1}\to V_{2}\) such that \[f(\mathds{1}_{1})=\mathds{1}_{2},\quad f(Y_{1}(u,z)v)=Y_{2}(f(u),z)f(v),\quad \forall u,v\in V_{1}.\] For \(u,v\in V\), we define the normal order \(\stackrel{{\circ}}{{\circ}}\)\(\stackrel{{\circ}}{{\circ}}\) of vertex operators as follows: \[\stackrel{{\circ}}{{\circ}}Y(u,z_{1})Y(v,z_{2})\stackrel{{ \circ}}{{\circ}}=Y^{+}(u,z_{1})Y(v,z_{2})+(-1)^{|u||v|}Y(v,z_{2})Y^{-}(u,z_{1 }),\] where \[Y(u,z)=Y^{-}(u,z)+Y^{+}(u,z)=\sum_{n\geq 0}u_{n}z^{-n-1}+\sum_{n<0}u_{n}z^{-n- 1}.\] Let \(V\) be a vertex superalgebra. A \(V\)-module is a triple \((M,d,Y_{M})\), where \(M\) is a \(\mathbb{Z}_{2}\)-graded vector space, \(d\) is an endomorphism of \(M\), and \(Y_{M}\) is a linear map \[V \to(\operatorname{End}M)[[z,z^{-1}]],\] \[v \mapsto Y_{M}(v,z)=\sum_{n\in\mathbb{Z}}v_{n}z^{-n-1}\quad(v_{n}\in \operatorname{End}M),\] which satisfies that for all \(u,v\in V,\)\(w\in M,\) 1. \(u_{n}w=0\) for \(n\) sufficiently large; 2. \(Y_{M}(\mathds{1},z)=\operatorname{Id}_{M}\); * \(\frac{d}{dz}Y_{M}(v,z)=Y_{M}(Dv,z)=[d,Y_{M}(v,z)]\); * For \(\mathbb{Z}_{2}\)-homogeneous \(u,v\), the following Jacobi identity holds: \[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y_{M}(u,z_{1})Y_{M}(v,z _{2})-(-1)^{|u|\|}z_{0}^{-1}\delta\left(\frac{z_{2}-z_{1}}{-z_{0}}\right)Y_{M}( v,z_{2})Y_{M}(u,z_{1})\\ =z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{M}(Y(u, z_{0})v,z_{2}).\] We shall see that there is a canonical vertex superalgebra structure of on the vacuum module \(V_{\mathfrak{g}}(c_{1},c_{2})\). **Lemma 5.1**.: _Set_ \[L(z)=\sum_{n\in\mathbb{Z}}L_{n}z^{-n-2},\ M(z)=\sum_{n\in\mathbb{Z}}M_{n}z^{-n -2},\ Q(z)=\sum_{r\in\mathbb{Z}+\frac{1}{2}}Q_{r}z^{-r-\frac{3}{2}}\in\mathrm{ End}(V_{\mathfrak{g}}(c_{1},c_{2}))[[z,z^{-1}]].\] _Then the defining relations for the N=1 BMS superalgebra \(\mathfrak{g}\) are equivalent to the following operator product expansions (OPEs)._ \[L(z_{1})L(z_{2}) \sim \frac{\frac{c_{1}}{2}}{(z_{1}-z_{2})^{4}}+\frac{2L(z_{2})}{(z_{1 }-z_{2})^{2}}+\frac{L^{\prime}(z_{2})}{z_{1}-z_{2}},\] \[L(z_{1})M(z_{2}) \sim \frac{\frac{c_{2}}{2}}{(z_{1}-z_{2})^{4}}+\frac{2M(z_{2})}{(z_{1 }-z_{2})^{2}}+\frac{M^{\prime}(z_{2})}{z_{1}-z_{2}},\] \[L(z_{1})Q(z_{2}) \sim \frac{\frac{3}{2}Q(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{Q^{\prime}(z_ {2})}{z_{1}-z_{2}},\] \[Q(z_{1})Q(z_{2}) \sim \frac{\frac{2c_{2}}{3}}{(z_{1}-z_{2})^{3}}+\frac{2M(z_{2})}{z_{1 }-z_{2}}.\] _Furthermore, \(\{L(z),M(z),Q(z)\}\) is a set of mutually local homogeneous vertex operators on every smooth module over the N=1 BMS superalgebra \(\mathfrak{g}\)._ Proof.: The following operator product expansions (OPEs) can be verified directly. It follows that \[(z_{1}-z_{2})^{4}[L(z_{1}),L(z_{2})]=(z_{1}-z_{2})^{4}[L(z_{1}),M( z_{2})]=0,\] \[(z_{1}-z_{2})^{2}[L(z_{1}),Q(z_{2})]=(z_{1}-z_{2})^{3}[Q(z_{1}),Q (z_{2})]=0.\] By the local system theory for vertex superalgebras developed in [27], we get the following result. **Proposition 5.2**.: _For \(c_{1},c_{2}\in\mathbb{C}\), \(V_{\mathfrak{g}}(c_{1},c_{2})\) has a vertex superalgebra structure, which is uniquely determined by the condition that \(\mathds{1}\) is the vacuum vector, \(Dv=L_{-1}v\), and_ \[Y(L_{-2}\mathds{1},z)=L(z),\quad Y(M_{-2}\mathds{1},z)=M(z),\quad Y(Q_{-\frac {3}{2}}\mathds{1},z)=Q(z).\] _Moreover, there is a one-to-one correspondence between smooth \(\mathfrak{g}\)-modules of central charge \((c_{1},c_{2})\) and \(V_{\mathfrak{g}}(c_{1},c_{2})\)-modules._ ### Free field realizations **Definition 5.3**.: _The Heisenberg-Clifford algebra \(\mathfrak{h}\mathfrak{c}=\mathfrak{h}\mathfrak{c}_{0}\oplus\mathfrak{h} \mathfrak{c}_{1}\) is the Lie superalgebra generated by \(a_{n},b_{n},c_{r},\mathbf{k},n\in\mathbb{Z},r\in\mathbb{Z}+\frac{1}{2}\) with_ \[[a_{m},b_{n}] =m\delta_{m+n,0}\mathbf{k},\quad\{c_{r},c_{s}\}=\delta_{r+s,0} \mathbf{k},\] \[=[b_{m},b_{n}]=[a_{m},c_{r}]=[b_{m},c_{r}]=0,\quad[\mathbf{k}, \mathfrak{h}\mathfrak{c}]=0,\] _where \(\mathfrak{h}\mathfrak{c}_{0}=\bigoplus_{n\in\mathbb{Z}}\mathbb{C}a_{n}\oplus \bigoplus_{n\in\mathbb{Z}}\mathbb{C}b_{n}\oplus\mathbb{C}\mathbf{k}\) and \(\mathfrak{h}\mathfrak{c}_{1}=\bigoplus_{r\in\mathbb{Z}+\frac{1}{2}}\mathbb{ C}c_{r}.\)_ It is clear that \(\mathfrak{h}\mathfrak{c}\) is generated by free bosons \(\{a_{n},b_{n},\mathbf{k}\mid n\in\mathbb{Z}\}\) and neutral fermions \(\{c_{n},\mathbf{k}\mid n\in\mathbb{Z}+\frac{1}{2}\}.\) Define \[\deg a_{n}=\deg b_{n}=n,\quad\deg c_{r}=r,\quad\forall n\in\mathbb{Z},r\in \mathbb{Z}+\frac{1}{2}.\] Then \(\mathfrak{h}\mathfrak{c}\) is a \(\frac{1}{2}\mathbb{Z}\)-graded Lie superalgebra. Set \[\mathfrak{h}\mathfrak{c}_{\pm}=\bigoplus_{n\in\mathbb{Z}_{+}}\mathbb{C}a_{\pm n }\oplus\bigoplus_{n\in\mathbb{Z}_{+}}\mathbb{C}b_{\pm n}\oplus\bigoplus_{r \in\mathbb{N}+\frac{1}{2}}\mathbb{C}c_{\pm r},\quad\mathfrak{h}\mathfrak{c}_ {0}=\mathbb{C}a_{0}\oplus\mathbb{C}b_{0}\oplus\mathbb{C}\mathbf{k}.\] Then \(\mathfrak{h}\mathfrak{c}=\mathfrak{h}\mathfrak{c}_{+}\oplus\mathfrak{h} \mathfrak{c}_{0}\oplus\mathfrak{h}\mathfrak{c}_{-}.\) Note that \(\mathfrak{h}\mathfrak{c}_{0}\) is the center of the superalgebra \(\mathfrak{h}\mathfrak{c}\). For \(a,b,\ell\in\mathbb{C}\), let \(\mathbb{C}\) be the one-dimensional \(\mathfrak{h}\mathfrak{c}_{0}\)-module defined by \[a_{0}1=a1,\quad b_{0}1=b1,\quad\mathbf{k}1=\ell 1.\] Let \(\mathfrak{h}\mathfrak{c}_{+}\) act trivially on \(\mathbb{C}\), making \(\mathbb{C}\) be a \((\mathfrak{h}\mathfrak{c}_{+}\oplus\mathfrak{h}\mathfrak{c}_{0})\)-module. The **Verma module** for \(\mathfrak{h}\mathfrak{c}\) is defined by \[M_{\mathfrak{h}\mathfrak{c}}(\ell,a,b)=U(\mathfrak{h}\mathfrak{c})\otimes_{U (\mathfrak{h}\mathfrak{c}_{+}\oplus\mathfrak{h}\mathfrak{c}_{0})}\mathbb{C}.\] An \(\mathfrak{h}\mathfrak{c}\)-module \(M\) is said to be **smooth** if for any \(u\in M\), \(a_{i}u=b_{j}u=c_{k}u=0\) for \(i,j,k\) sufficiently large. For \(\ell\in\mathbb{C}\), an \(\mathfrak{h}\mathfrak{c}\)-module \(M\) is said to be **level**\(\ell\) if for any \(u\in M\), \(\mathbf{k}u=\ell u\). It is clear that the Verma module \(M_{\mathfrak{h}\mathfrak{c}}(\ell,a,b)\) is a smooth \(\mathfrak{g}\)-module of level \(\ell\). Suppose that \(\phi:\mathfrak{h}\mathfrak{c}_{+}\oplus\mathfrak{h}\mathfrak{c}_{0}\to \mathbb{C}\) is a homomorphism of Lie superalgebras. It follows that \(\phi\left(c_{r}\right)=0\) for all \(r>0\). Then \(\mathbb{C}w_{\phi}\) becomes a one-dimensional \((\mathfrak{h}\mathfrak{c}_{+}\oplus\mathfrak{h}\mathfrak{c}_{0})\)-module defined by \(xw_{\phi}=\phi(x)w_{\phi}\) for all \(x\in\mathfrak{h}\mathfrak{c}_{+}\oplus\mathfrak{h}\mathfrak{c}_{0}\). The induced \(\mathfrak{h}\mathfrak{c}\)-module \(W_{\mathfrak{h}\mathfrak{c}}(\phi)=\operatorname{Ind}_{\mathfrak{h}\mathfrak{c }_{+}\oplus\mathfrak{h}\mathfrak{c}_{0}}^{\mathfrak{h}\mathfrak{c}}\mathbb{C}w_ {\phi}\) is called a **Whittaker module** associated to \(\phi\). Note that \(W_{\mathfrak{h}\mathfrak{c}}(\phi)\) is not necessarily a smooth \(\mathfrak{h}\mathfrak{c}\)-module. Following [33], we have **Lemma 5.4**.: _The Whittaker module \(W_{\mathfrak{h}\mathfrak{c}}(\phi)\) is simple if and only if \(\phi(\mathbf{k})\neq 0\)._ It is clear that there is a canonical vertex superalgebra structure of on \(M_{\mathfrak{h}\mathfrak{c}}(1,0,0)\). **Proposition 5.5** ([27]).: \(M_{\mathfrak{h}\mathfrak{c}}(1,0,0)\) _has a vertex superalgebra structure with \(\mathds{1}=1\otimes 1\) and_ \[Y(a_{-1}\mathds{1},z)=a(z),\quad Y(b_{-1}\mathds{1},z)=b(z),\quad Y(c_{-\frac{1 }{2}}\mathds{1},z)=c(z),\] ###### Abstract We study the \(\mathbb{R}^{3}\)-module of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of central charge \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules. ## 1 Introduction Let \(\mathbb{R}^{3}\) be a central charge of \(1\) and \(M_{\mathbb{R}}(1,0,0)\)-modules of \[\frac{\mathcal{L}(z_{1})\mathcal{M}(z_{2})}{(z_{1}-z_{2})^{2}}\] \[= \frac{a(z_{1})b(z_{2})}{(z_{1}-z_{2})^{2}}b(z_{1})b(z_{2})+\rho\, \underbrace{a(z_{1})b^{\prime}(z_{2})}b(z_{1})+\rho\,\underbrace{a^{\prime}(z_ {1})b(z_{2})}_{\text{$\circ$}}+\rho^{2}a^{\prime}(z_{1})b^{\prime}(z_{2})\] \[= \frac{b(z_{2})b(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{b^{\prime}(z_{2} )b(z_{2})}{z_{1}-z_{2}}+\frac{2\rho}{(z_{1}-z_{2})^{3}}b(z_{1})\] \[+\frac{2\rho b^{\prime}(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{\rho b^ {\prime\prime}(z_{1})}{z_{1}-z_{2}}+\frac{-2\rho}{(z_{1}-z_{2})^{3}}+\frac{-6 \rho^{2}}{(z_{1}-z_{2})^{4}}\] \[= \frac{b(z_{2})b(z_{2})+2\rho b^{\prime}(z_{2})}{(z_{1}-z_{2})^{2} }+\frac{b^{\prime}(z_{2})b(z_{2})+\rho b^{\prime\prime}(z_{1})}{z_{1}-z_{2}}+ \frac{-6\rho^{2}}{(z_{1}-z_{2})^{4}}\] \[= \frac{-6\rho^{2}}{(z_{1}-z_{2})^{4}}+\frac{2\mathcal{M}(z_{2})}{ (z_{1}-z_{2})^{2}}+\frac{\mathcal{M}^{\prime}(z_{2})}{z_{1}-z_{2}};\] \[\underbrace{\mathcal{L}(z_{1})\mathcal{Q}(z_{2})}_{\text{$\circ$}}\] \[= \frac{b(z_{2})c(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{b^{\prime}(z_{2} )c(z_{2})}{z_{1}-z_{2}}+\frac{-2\rho c(z_{2})}{(z_{1}-z_{3})^{3}}+\frac{ \frac{1}{2}c(z_{2})b(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{\frac{1}{2}c^{\prime}(z _{2})b(z_{2})}{z_{1}-z_{2}}+\frac{\frac{1}{2}c^{\prime}(z_{2})b(z_{2})}{z_{1} -z_{2}}\] \[+\frac{2\rho c(z_{2})}{(z_{1}-z_{2})^{3}}+\frac{2\rho c^{\prime} (z_{2})}{(z_{1}-z_{2})^{2}}+\frac{\rho c^{\prime\prime}(z_{2})}{z_{1}-z_{2}}+ \frac{\rho c^{\prime}(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{\rho c^{\prime\prime}(z _{2})}{z_{1}-z_{2}}\] \[= \frac{\frac{3}{2}b(z_{2})c(z_{2})+3\rho c^{\prime}(z_{2})}{(z_{1} -z_{2})^{2}}+\frac{b^{\prime}(z_{2})c(z_{2})+c^{\prime}(z_{2})b(z_{2})+2\rho c ^{\prime\prime}(z_{2})}{z_{1}-z_{2}}\] \[= \frac{\frac{3}{2}\mathcal{Q}(z_{2})}{(z_{1}-z_{2})^{2}}+\frac{ \mathcal{Q}^{\prime}(z_{2})}{z_{1}-z_{2}};\] and \[\underbrace{\mathcal{Q}(z_{1})\mathcal{Q}(z_{2})}_{\text{$\circ$} }= \frac{b(z_{1})b(z_{2})}{z_{1}-z_{2}}+\frac{2\rho b(z_{2})}{(z_{1}-z_{2})^{2}}+ \frac{2\rho b^{\prime}(z_{2})}{z_{1}-z_{2}}+\frac{-2\rho b(z_{2})}{(z_{1}-z_{2 })^{2}}+\frac{-8\rho^{2}}{(z_{1}-z_{2})^{3}}\] \[= \frac{2\rho b^{\prime}(z_{2})+b(z_{2})b(z_{2})}{z_{1}-z_{2}}+ \frac{-8\rho^{2}}{(z_{1}-z_{2})^{3}}\] \[= \frac{-8\rho^{2}}{(z_{1}-z_{2})^{3}}+\frac{2\mathcal{M}(z_{2})}{ z_{1}-z_{2}}.\] We are now in a position to state the main result of this section. **Theorem 5.7**.: _For \(\rho\in\mathbb{C}\), every smooth \(\mathfrak{h}\)-module \(W\) of level \(1\) becomes a smooth \(\mathfrak{g}\)-module of central charge \(\left(\frac{5}{2},-12\rho^{2}\right)\) with the following actions:_ \[L_{n}\mapsto\sum_{m\in\mathbb{Z}}\ \mathop{\circ}\limits^{\circ}a_{m}b_{n-m} \mathop{\circ}\limits^{\circ}-(n+1)\rho a_{n}-\frac{1}{2}\sum_{s\in\mathbb{Z}+ \frac{1}{2}}(s+\frac{1}{2})\mathop{\circ}\limits^{\circ}c_{s}c_{n-s}\mathop{ \circ}\limits^{\circ},\] \[M_{n} \mapsto\frac{1}{2}\sum_{m\in\mathbb{Z}}b_{m}b_{n-m}-(n+1)\rho b_{n},\] \[Q_{r} \mapsto\sum_{s\in\mathbb{Z}+\frac{1}{2}}(r+\frac{1}{2})b_{r-s}c_{s} -2(r+\frac{1}{2})\rho c_{r},\] \[\mathbf{c}_{1} \mapsto\frac{5}{2},\] \[\mathbf{c}_{2} \mapsto-12\rho^{2},\] _for any \(n\in\mathbb{Z}\), \(r\in\mathbb{Z}+\frac{1}{2},\rho\in\mathbb{C}\)._ Proof.: By Proposition 5.5, any smooth \(\mathfrak{b}c\)-module \(W\) of level \(1\) is an \(M_{\mathfrak{b}c}(1,0,0)\)-module. From Proposition 5.6, \(W\) becomes a \(V_{\mathfrak{g}}\left(\frac{5}{2},-12\rho^{2}\right)\)-module. By Proposition 5.2, \(W\) is a smooth \(\mathfrak{g}\)-module of central charge \(\left(\frac{5}{2},-12\rho^{2}\right)\). By Theorem 5.7, the Verma module \(M_{\mathfrak{b}c}(1,a,b)\) is a \(\mathfrak{g}\)-module of central charge \(\left(\frac{5}{2},-12\rho^{2}\right)\). We refer to this module as a **Fock module** over \(\mathfrak{g}\), denoted by \(\mathcal{F}_{\mathfrak{g}}(a,b,\rho)\). **Corollary 5.8**.: _For \(a,b,\rho\in\mathbb{C}\), the Fock \(\mathfrak{g}\)-module \(\mathcal{F}_{\mathfrak{g}}(a,b,\rho)\) is simple if and only if \(b+(n-1)\rho\neq 0\) for any \(n\in\mathbb{Z}^{*}\)._ Proof.: By Theorem 5.7, we have \[L_{0}\mathbf{1}=(ab-\rho a)\mathbf{1},\quad M_{0}\mathbf{1}=\left(\frac{1}{2}b ^{2}-\rho b\right)\mathbf{1},\quad\mathbf{c}_{1}\mathbf{1}=\frac{5}{2}\mathbf{ 1},\quad\mathbf{c}_{2}\mathbf{1}=-12\rho^{2}\mathbf{1}.\] It follow that \(\mathcal{F}_{\mathfrak{g}}(a,b,\rho)\) is a highest weight \(\mathfrak{g}\)-module of central charge \(\left(\frac{5}{2},-12\rho^{2}\right)\). From the universal property of the Verma module \(M_{\mathfrak{g}}(ab-\rho a,\frac{1}{2}b^{2}-\rho b,\frac{5}{2},-12\rho^{2})\) and \[\dim M_{\mathfrak{g}}\left(ab-\rho a,\frac{1}{2}b^{2}-\rho b,\frac{5}{2},-12 \rho^{2}\right)_{n}=\dim M_{\mathfrak{b}c}(1,a,b)_{n}=p(n),\] we have an isomorphism of \(\mathfrak{g}\)-modules : \[M_{\mathfrak{g}}(ab-\rho a,\frac{1}{2}b^{2}-\rho b,\frac{5}{2},-12\rho^{2}) \cong M_{\mathfrak{b}c}(1,a,b).\] It follows from Theorem 3.2 that \(\mathfrak{g}\)-module \(M_{\mathfrak{b}c}(1,a,b)\) is simple if and only if \[\frac{1}{2}b^{2}-\rho b-\frac{1}{2}(i^{2}-1)\rho^{2}\neq 0,\quad\forall i\in \mathbb{Z}_{+},\] if and only if \[(b+(i-1)\rho)(b-(i+1)\rho)\neq 0,\] if and only if \(b+(n-1)\rho\neq 0\) for any \(n\in\mathbb{Z}^{*}\). Let \(W_{\mathfrak{b}c}(\phi)\) be the Whittaker \(\mathfrak{b}c\)-module associated to \(\phi\) with \(\phi(\mathbf{k})=1\). Suppose that \[a_{i+1}w_{\phi}=b_{i+1}w_{\phi}=0,\quad\forall i\in\mathbb{Z}_{+}.\] Note \(c_{n-\frac{1}{2}}w=0\) for any \(n\in\mathbb{Z}_{+}\). By Lemma 5.4, \(W_{\text{bc}}(\phi)\) is a simple smooth \(\mathfrak{h}\)-module of level 1. For \(\rho\in\mathbb{C}\), it follows Theorem 5.7 that \(W_{\text{bc}}(\phi)\) becomes a \(\mathfrak{g}\)-module of central charge \(\left(\frac{5}{2},-12\rho^{2}\right)\). We refer to this module as a **Fock-Whittaker module** over \(\mathfrak{g}\), denoted by \(\mathcal{F}_{\mathfrak{g}}(\phi,\rho)\). **Corollary 5.9**.: _The Fock-Whittaker \(\mathfrak{g}\)-module \(\mathcal{F}_{\mathfrak{g}}(\phi,\rho)\) is simple if and only if \(\phi(b_{1})\neq 0\)._ Proof.: By Theorem 5.7, we have \[L_{1}w_{\phi} =(\phi(a_{0})\phi(b_{1})+\phi(a_{1})\phi(b_{0})-2\rho\phi(a_{1}) )w_{\phi},\ L_{2}w_{\phi}=\phi(a_{1})\phi(b_{1})w_{\phi},\ L_{i}w_{\phi}=0,\] \[M_{1}w_{\phi} =(\phi(b_{0})-2\rho)\phi(b_{1})w_{\phi},\ M_{2}w_{\phi}=\frac{1} {2}\phi(b_{1})^{2}w_{\phi},\ M_{i}w_{\phi}=0,\] \[Q_{\frac{1}{2}}w_{\phi} =\phi(b_{1})c_{-\frac{1}{2}}w_{\phi},\ Q_{r}w_{\phi}=0,\ \mathbf{c}_{1}w_{\phi}=\frac{5}{2}w_{\phi},\ \mathbf{c}_{2}w_{\phi}=-12\rho^{2}w_{\phi}\] for any \(i\geq 3,r\geq\frac{3}{2}\). It is clear that \(\mathcal{F}_{\mathfrak{g}}(\phi,\rho)\) is a Whittaker \(\mathfrak{g}\)-module of central charge \(\left(\frac{5}{2},-12\rho^{2}\right)\). From Corollary 4.7, \(\mathcal{F}_{\mathfrak{g}}(\phi,\rho)\) is simple if and only if \(\phi(b_{1})\neq 0\). ## Acknowledgements The authors would like to thank Prof. Haisheng Li for his useful and helpful comments on our manuscript. This work was supported by the National Natural Science Foundation of China (11971315, 12071405,12171155) and NSERC (311907-2020).
2303.01094
CTRLStruct: Dialogue Structure Learning for Open-Domain Response Generation
Dialogue structure discovery is essential in dialogue generation. Well-structured topic flow can leverage background information and predict future topics to help generate controllable and explainable responses. However, most previous work focused on dialogue structure learning in task-oriented dialogue other than open-domain dialogue which is more complicated and challenging. In this paper, we present a new framework CTRLStruct for dialogue structure learning to effectively explore topic-level dialogue clusters as well as their transitions with unlabelled information. Precisely, dialogue utterances encoded by bi-directional Transformer are further trained through a special designed contrastive learning task to improve representation. Then we perform clustering to utterance-level representations and form topic-level clusters that can be considered as vertices in dialogue structure graph. The edges in the graph indicating transition probability between vertices are calculated by mimicking expert behavior in datasets. Finally, dialogue structure graph is integrated into dialogue model to perform controlled response generation. Experiments on two popular open-domain dialogue datasets show our model can generate more coherent responses compared to some excellent dialogue models, as well as outperform some typical sentence embedding methods in dialogue utterance representation. Code is available in GitHub.
Congchi Yin, Piji Li, Zhaochun Ren
2023-03-02T09:27:11Z
http://arxiv.org/abs/2303.01094v1
# CTRLStruct: Dialogue Structure Learning for Open-Domain Response Generation ###### Abstract. Dialogue structure discovery is essential in dialogue generation. Well-structured topic flow can leverage background information and predict future topics to help generate controllable and explainable responses. However, most previous work focused on dialogue structure learning in task-oriented dialogue other than open-domain dialogue which is more complicated and challenging. In this paper, we present a new framework **CTRLStruct** for dialogue structure learning to effectively explore topic-level dialogue clusters as well as their transitions with unlabelled information. Precisely, dialogue utterances encoded by bi-directional Transformer are further trained through a special designed contrastive learning task to improve representation. Then we perform clustering to utterance-level representations and form topic-level clusters that can be considered as vertices in dialogue structure graph. The edges in the graph indicating transition probability between vertices are calculated by mimicking expert behavior in datasets. Finally, dialogue structure graph is integrated into dialogue model to perform controlled response generation. Experiments on two popular open-domain dialogue datasets show our model can generate more coherent responses compared to some excellent dialogue models, as well as outperform some typical sentence embedding methods in dialogue utterance representation. Code is available in GitHub1. Footnote 1: [https://github.com/lemonavis/CTRLStruct](https://github.com/lemonavis/CTRLStruct) Dialogue Structure Learning, Open-Domain Dialogue Generation, Utterance Representation, Contrastive Learning, Imitation Learning + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) + Footnote †: 2023 Copyright held by the owner/authenticity. Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-12/30/4...$15.00 [https://doi.org/10.1145/3543507.3583285](https://doi.org/10.1145/3543507.3583285) ## 1. Introduction Open-domain dialogue generation is a challenging task due to its complex multi-turn interacting structure, diverse topics, and lack of specific goals. As a result, only considering language modeling in response generation is far from enough like past work, such as GPT2 (Shen et al., 2018), BART (Bart et al., 2018), and T5 (Zhu et al., 2018). Dialogue structure, indicating dialogue states and their transitions, plays an integral part in dialogue generation. It's typically regarded as the combination of utterance-level structure and topic-level structure (Zhu et al., 2018). The former can be viewed as flow of dialogue utterances while the latter stands for the topic transitions. As illustrated in Figure 1, transition from "Have you finished you homework" to "Yes, I have." is considered as utterance-level structure, while the topic-level structure is transitions from a higher perspective, like from "study" to "vacation". Intuitively, if dialogue structure is modeled in multi-turn conversations, the chatbot can realize which topic it is in and what topic might be involved in the next step, which will largely improve multi-turn coherence as well as controllability and interpretability. Dialogue structure was first mentioned in task-oriented dialogue researches. Early researches relied on hand-crafted rules (K like Hidden Markov Model (HMM) shows promise in modeling latent structure of dialogues (H In Equation (1), \(z_{i}\) and \(z_{i}^{+}\) are representations of positive pairs \(x_{i}\) and \(x_{i}^{+}\). \(r\) is a temperature hyperparameter and \(sim(z_{i},z_{i}^{+})\) indicates some distance measurement between \(z_{i}\) and \(z_{i}^{+}\). InfoNCE loss lays solid foundation for modern contrastive learning methods like MoCo (Moh et al., 2017) and SimCLR (Deng et al., 2017) in computer vision. In NLP domain, SimCSE (Song et al., 2018) uses dropout in Transformers to build positive samples and successfully learns state-of-the-art sentence embedding. ## 3. Methodology ### Overview Given a dialogue corpus \(\mathcal{D}\) that contains \(|D|\) conversations denoted as \(\{X_{1},X_{2},\ldots,X_{|D|}\}\), each conversation is composed of multi-turn utterances of two speakers, \(A\) and \(B\). The \(i\)-th conversation \(X_{i}\) contains \(k_{i}\) utterances \(X_{i}=[S_{A_{i}},S_{B_{i}},S_{A_{i}},\ldots]\). In dialogue structure modeling, utterances with similar semantic meanings are supposed to gather in the same topic cluster \(c_{i}\). We assume that each cluster center vector \(c_{i}\) contains topic-level information. We take these clusters as random variables in Markov chain, where each has certain probability of moving to another. Our goal is to figure out the dialogue structure consisted of topic transitions, and utilize the dialogue structure to control response generation. As shown in Figure 2, our proposed CTRLStruct model consists of three main components: contrastive utterance representation learning part, dialogue structure modeling part, and dialogue structure controlled response generation part. In the first part, the mean-pooling output of a bi-directional Transformer block is viewed as original version of utterance representation. Then we apply contrastive learning to further train the Transformer encoder and get the final utterance representation. In the second part, we gather semantically similar utterances together to form topic-level clusters and utilize imitation learning to automatically calculate transition probabilities among topics, and then the dialogue structure graph has been built. To conduct dialogue structure controlled response generation, given the dialogue context, we can obtain the target topic according to the dialogue structure graph. Then during training, we bring the representation of generated response closer to the predicted cluster center. In this way topic-level dialogue structure information is integrated in the training procedure of the auto-regressive decoder. ### Utterance Representation Learning For each input utterance \(S\), the bi-directional Transformer encoder outputs original representation as \(\mathbf{H}\in\mathbb{R}^{m\times n}\) where \(m\) stands for the number of tokens in utterance. Previous work (Zhou et al., 2018; Wang et al., 2019) showed that taking mean-pooling of pre-trained models' output embeddings leads to better performance than max-pooling or \([CLS]\) representation. We perform mean-pooling to H and get \(n\)-dimensional vector \(\mathbf{h}\) as preliminary utterance representation. Different from other forms of corpus, dialogue has its unique features that are non-negligible in dialogue utterance representation learning. It can be viewed as directional flow of topics from a higher perspective. So we make the basic assumption that human dialogue is highly **contextual** and **sequential**. "Contextual" means that one response should be closely related to its context in topic level. The closer one utterance is to another, the more relevant it is to that utterance. "Sequential" means dialogue is composed of sequential utterances with internal logic. Conversations like "A: Are you free to climb mountain with us tomorrow? B: Sorry, I have to finish my job first." are conventional, but if its order is changed, that would be weird. However, even with the defined assumptions above, it is still hard to develop a set of rules to clearly explain the role one utterance plays in the dialogue. One utterance might be semantically similar with its previous utterance, or has few connection with the Figure 2. The framework of the whole model. -\(\mathbf{<}\mathbf{>}\)-and -\(\mathbf{<}\mathbf{\textit{s}}\)-in the input are special tokens indicating begin of sentence and end of sentence respectively. Notice that the bi-directional attention encoder trained through contrastive learning in CTRLStruct is not used in dialogue generation, original pre-trained encoder is used instead. previous one but has a close relationship with its next response. For example, when an utterance is a response to some queries, it satisfies the former situation. When an utterance is the beginning of a new topic, it corresponds to the latter situation. We define this problem as Utterance Ascription Issue. We apply contrastive learning to handle the above-mentioned problem. Two kinds of correlation are defined: **Absolute Correlation** and **Relative Correlation**. As is illustrated in Figure 2, Absolute Correlation follows the contrastive learning framework proposed by SimCLR (Chen et al., 2017). Two data augmentation samples \(S^{\prime}_{A_{i}}\) and \(S^{\prime\prime}_{A_{i}}\) of one utterance \(S_{A_{i}}\) constitute a positive pair. Specifically, we choose four different data augmentation strategies in CTRIStruct, which are insert contextual relevant words, random replacement, synonym substitution, and dropout augmentation trick applied in SimCSE (Kang et al., 2019). Relative Correlation is composed of **Strong Relativity** and **Weak Relativity**. Considering the sequential feature of dialogue, the relativity between \(S_{A_{i}}\), \(S_{B_{i}}\) is not equal to that between \(S_{B_{i}},S_{A_{i}}\) like sort of non-metric distance measurement. Take three sequential utterances \(S_{B_{i-1}},S_{A_{i}},S_{B_{i}}\) in dialogue for example. As topics flow directionally in dialogue, we define Strong Relativity for \(S_{A_{i}}\) as \(S_{A_{i}}\) and its next response \(S_{B_{i}}\) making up a positive pair. Weak Relativity stands for \(S_{A_{i}}\) and its previous response \(S_{B_{i-1}}\) making up a weak positive pair. Strong Relativity and Weak Relativity are critical in solving the Utterance Ascription Issue. When an utterance is semantically similar with its previous utterance, the constraint of Weak Relativity can maintain the relationship so as not to be ruined by Strong Relativity. When an utterance is similar with its next utterance, Strong Relativity can bring them closer in semantic space. The relations are vividly shown in Figure 2. Under the above settings, we design Absolute Correlation Loss \(l_{AC}\) and Relative Correlation Loss \(l_{RC}\). Supposing \(S^{\prime}_{A_{i}}\) and \(S^{\prime\prime}_{A_{i}}\) are two data augmentation utterances of \(S_{A_{i}}\), \(\mathbf{h}\) is the original encoder representation of utterance \(S\). Absolute Correlation Loss of response \(S_{A_{i}}\) is defined as \[l_{AC}(S^{\prime}_{A_{i}},S^{\prime\prime}_{A_{i}})=-\log\frac{e^{sim(\mathbf{H} _{A_{i}},\mathbf{h}^{\prime}_{A_{i}})/r}}{\sum_{S_{i}\in[\cup_{j=1}^{|D|}X_{j}} \mathbf{1}_{[i\neq A_{i}]}\mathbf{e}^{sim(\mathbf{H}_{A_{i}},\mathbf{h}_{i})/r}}, \tag{2}\] where \(\mathbf{1}_{[i\neq A_{i}]}\in\{0,1\}\) is an indicator function evaluating to \(1\) iff \(i\neq A_{i}\), \(sim(\mathbf{a},\mathbf{b})\) is the cosine similarity \(\frac{\mathbf{a}^{\top}\mathbf{b}}{\mathbf{a}\|\|\mathbf{b}\|}\) between vector \(\mathbf{a}\) and \(\mathbf{b},X_{j}\) is the \(j\)-th conversation. The capacity of sets is \(2|D|\) because we use one utterance's two augmentation samples for contrastive learning. Equation (2) also applies to speaker \(B\)'s responses. Relative Correlation Loss consists of two losses, Strong Relativity Loss \(l_{SR}\) and Weak Relativity Loss \(l_{WR}\). Strong Relativity Loss is \[l_{SR}(S_{A_{i}},S_{B_{i}})=-\log\frac{e^{sim(\mathbf{h}_{A_{i}},\mathbf{h}_{B_{i}})/ r}}{\sum_{S_{i}\in[\cup_{j=1}^{|D|}X_{j}}\mathbf{1}_{[i\neq A_{i}]}\mathbf{e}^{sim(\mathbf{ h}_{A_{i}},\mathbf{h}_{i})/r}}, \tag{3}\] where \(S_{B_{i}}\) is the next sentence of \(S_{A_{i}}\). Similarly, Weak Relativity Loss is defined as \(S_{A_{i}}\) and its previous response, \[l_{WR}(S_{A_{i}},S_{B_{i-1}})=-\log\frac{\lambda_{1}e^{sim(\mathbf{h}_{A_{i}},\bm {h}_{B_{i-1}})/r}}{\sum_{S_{i}\in[\cup_{j=1}^{|D|}X_{j}}\mathbf{1}_{[i\neq A_{i}]} \mathbf{e}^{sim(\mathbf{h}_{A_{i}},\mathbf{h}_{i})/r}}. \tag{4}\] Compared to \(l_{SR}\), Weak Relativity is reflected in the coefficient \(\lambda_{1}\). The total Relative Correlation Loss is the sum of Strong Relativity Loss and Weak Relativity Loss \[l_{RC}=l_{SR}+l_{WR}. \tag{5}\] Since mini-batch gradient descent is used to optimize neural network, Absolute Correlation Loss and Relative Correlation Loss of batch with size \(N\) is written as \[Loss_{AC}=\frac{1}{2N}\sum_{i=1}^{N}[l_{AC}(S^{\prime}_{i},S^{\prime\prime}_{ i})+l_{AC}(S^{\prime\prime}_{i},S^{\prime}_{i})], \tag{6}\] \[Loss_{RC}=\frac{1}{N-1}\big{[}\sum_{i=1}^{N-1}l_{SR}(S_{i},S_{i+1})+\sum_{i=2}^{ N}l_{WR}(S_{i},S_{i-1})\big{]}. \tag{7}\] The total loss in utterance representation training is the sum of Absolute Correlation Loss and Relative Correlation Loss \[Loss_{total}=Loss_{AC}+Loss_{RC}. \tag{8}\] The whole self-supervised training process of contrastive utterance representation is conducted on bi-directional Transformer encoder. When it finishes, the encoder outputs utterance representation for dialogue structure modeling in next part. ### Dialogue Structure Modeling We perform clustering to utterance representations, aiming to gather utterances sharing similar topics. We utilize K-Means with cosine similarity as clustering methods and achieve good clustering performance and conciseness. Utterance representations are gathered into \(k\) topic-level clusters \(C_{1},C_{2},\dots,C_{k}\) whose center vectors are \(\mathbf{c}_{1},\mathbf{c}_{2},\dots,\mathbf{c}_{k}\) respectively. As shown in Figure 2, our goal is to model dialogue structure graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\), where topic-level clusters are taken as vertices and transition probabilities to other vertices are viewed as edges. In CTRLStruct, we don't explicitly model the utterance transitions inside certain topic cluster like other methods. Because Large pre-trained language model is capable of fixing the problem. With such topic-level dialogue structure graph, our model can predict whether to continue talks related to current topic or transit to another topic during conversations. Imitation learning (Chen et al., 2017; Chen et al., 2017) is applied in transition probability calculation. However, environment or simulator is not available in our faced problem. We only have dialogue datasets with unknown topic state transition probabilities, which can be viewed as offline environment in reinforcement learning concept. Fortunately, Rajaraman et al. (Rajaraman et al., 2019) prove the optimality of behavioral cloning under the settings that the state transition function is unknown and the environment for simulation is unavailable. The idea of behavioral cloning (Chen et al., 2017; Chen et al., 2017) is attempt to recover expert policy with given high-quality dataset. Since the ultimate goal of open-domain dialogue system is to achieve human-level conversation quality, we apply behavioral cloning to mimic topic transitions from expert trajectories in high-quality dialogue datasets. Utterance representation \(bm{h}_{i}\) is taken as state and cluster center vector \(\mathbf{c}_{j}\) is considered as action in behavioral cloning setting. The size of state space and action space are \(\mathcal{S}\in\mathbb{R}^{k\cup n}\) and \(\mathcal{A}\in\mathbb{R}^{k\cup n}\) respectively, where \(u\) is the total number of utterances and \(k\) is the number of clusters. At time step \(t\), action \(\mathbf{c}_{t+1}\), which is the center vector of state \(\mathbf{h}_{t+1}\)'s cluster, will be taken and state \(\mathbf{h}_{t}\) is to transit to state \(\mathbf{h}_{t+1}\). So the expert trajectory can be written as \(\mathbf{h}_{1},\mathbf{c}_{2},\mathbf{h}_{2},\mathbf{c}_{3},\mathbf{h}_{3},\dots,\mathbf{h}_{m}\). From the above settings, state space \(\mathcal{S}\) can be considered as non-discrete space for \(u\) is too large while action space \(\mathcal{A}\) is discrete space. However in real training process we view action space as continuous space for the diversity of topics (actions). We select the cluster center vector that is closest to calculated action in cosine similarity as final action to take in current state. Under such situation in behaviour cloning we usually use Maximum Likelihood Estimation (MLE) to directly estimate the policy \(\widehat{\pi}_{\theta}\). Since \(\mathcal{S},\mathcal{A}\in\mathbb{R}\), the optimization objective can be written as \[\max_{\theta}\sum_{(\mathbf{h},\mathbf{c})\in\mathcal{D}}\log\left(\widehat{\pi}_{ \theta}(\mathbf{c}\mid\mathbf{h})\right). \tag{9}\] Gaussian distribution is adopted to represent the policy as most behavioral cloning methods do (Kolmogorov, 1999) in continuous action or state space. For each state \(\mathbf{h}\), we assume policy \(\widehat{\pi}_{\theta}(\cdot|\mathbf{h})\sim\mathcal{N}\left(\mu_{\theta}(\mathbf{h}), \sigma_{\theta}^{2}(\mathbf{h})\right)\) where \(\mu_{\theta}(\mathbf{h})\) and \(\sigma_{\theta}^{2}(\mathbf{h})\) are the mean and variance \[\widehat{\pi}_{\theta}(\cdot|\mathbf{h})=\frac{1}{\sqrt{2\pi}\sigma_{\theta}(\mathbf{ h})}e^{-\frac{(\mathbf{c}-\mu_{\theta}(\mathbf{h}))^{2}}{2\sigma_{\theta}(\mathbf{h})^{2}}}. \tag{10}\] According to Equation (10), Equation (9) can be reduced to \[\min_{\theta}\sum_{(\mathbf{h},\mathbf{c})\in\mathcal{D}}\frac{(\mathbf{c}-\mu_{\theta}( \mathbf{h}))^{2}}{2\sigma_{\theta}^{2}(\mathbf{h})}+\frac{1}{2}\log\left(2\pi\sigma_{ \theta}^{2}(\mathbf{h})\right). \tag{11}\] We use Actor-to-Critic network (Kolmogorov, 1999) to estimate the value of \(\mu_{\theta}(\mathbf{h})\). Specifically, both Actor network and Critic network are consisted of fully connected layers. Critic network estimates value through state and Actor network figures out what action to take under certain state. Since reward is not available in behaviour cloning, we don't consider it in Actor-to-Critic network. Variance \(\sigma_{\theta}^{2}(\mathbf{h})\) is set as constant independent of parameter \(\theta\). So the final object to optimize is translated into mean square error regression problem \[\min_{\theta}\sum_{(\mathbf{h},\mathbf{c})\in\mathcal{D}}\left(\mathbf{c}-\mu_{\theta}( \mathbf{h})\right)^{2}. \tag{12}\] After solving the regression problem with neural network, what action agent at certain state will take as well as the corresponding probability can be predicted by the policy \(\pi_{\theta}\). ### Dialogue Structure Controlled Generation In the dialogue generation stage, we follow previous work (Zhu et al., 2017) to factorize the joint probabilities over tokens as the product of conditional probabilities \[p(Y)=\prod_{i=1}^{n}P\left(t_{i}\mid t_{1},\ldots,t_{i-1}\right), \tag{13}\] where \(Y\) is one response that contains a sequence of tokens \(t_{1},t_{2},\ldots\), \(t_{n}\). Auto-regressive model relies on Transformer decoder blocks with left-to-right attention to generate tokens one after another. In our encoder-decoder model CTRBLStruct, dialogue history \(X_{i}\) is sent to encoder and the decoder generates responses in auto-regressive way. We optimize the negative log likelihood of original response: \[l_{NLL}(Y)=-\sum_{i=1}^{n}\log P(t_{i}\mid t_{1},\ldots,t_{i-1}). \tag{14}\] Dialogue structure graph is integrated into the auto-regressive Transformer decoder in the following manner. What cluster \(C_{i+1}\) the next response will be in through \(\pi_{\theta}(\cdot|\mathbf{h}_{i})\) can be predicted. So when the auto-regressive model finishes outputting tokens, we take the mean-pooling of model's last hidden state output to get the representation \(\mathbf{h}_{i+1}\) of the generated utterance. The representation vector and cluster center vector \(\mathbf{c}_{i+1}\) can be viewed as one dimensional probability distribution as well, representing utterance distribution in semantics space and topic space respectively. We adopt Kullback-Leibler divergence (Kullback and Leibler, 1999) to bring distribution in semantic space closer to topic space, which ensures the generated utterance's relevance to its supposed topic. Kullback-Leibler divergence can be written as \[D_{KL}(\mathbf{h}||\mathbf{c})=\sum_{x\in X}[\mathbf{h}_{i+1}(x)\log\frac{\mathbf{h}_{i+1}(x)} {\mathbf{c}_{i+1}(x)}]. \tag{15}\] The total loss to optimize in the decoder is the sum of negative log likelihood loss and Kullback-Leibler divergence \[Loss_{Gen}=l_{NLL}+\lambda_{2}D_{KL}. \tag{16}\] Then the training process of CTRBLStruct has finished. We can apply the encoder-decoder model for dialogue generation like other Transformer based generative language models. ## 4. Experimental Setups ### Research Questions We aim to answer the following research questions through experiments on response generation and utterance representation: * **RQ1**: How does CTRBLStruct perform in open-domain dialogue compared to several strong dialogue generation models? * **RQ2**: Does CTRBLStruct really control the topic of response? Does the discovered dialogue structure help in response generation? * **RQ3**: How does CTRBLStruct perform in dialogue utterance representation compared to other sentence embedding methods? Can semantically similar utterances cluster in CTRBLStruct? * **RQ4**: How is the generalization ability of CTRBLStruct? Can it be applied to models with other types of backbone? ### Datasets To evaluate the performance of dialogue structure learning in response generation, we conduct experiments on two popular open-domain dialogue datasets, DailyDialog (Dai et al., 2017) and PersonaChat (Person et al., 2018). DailyDialog is a human-written multi-turn dialogue dataset which is less noisy and covers various topics about daily life. We use the dataset randomly separated by the authors as training/validation/test sets with 11,118/1,000/1,000 conversations. In PersonaChat chit-chat agent is endowed with a configurable and consistent persona to generate more personal, specific and engaging conversions. The original version of PersonaChat is divided as 10,907/1,000/968 dialogues for training/validation/test sets. In our work, conversations in DailyDialog and PersonaChat without any extra label or persona information are used during model training. ### Compared Methods CTRLStruct is compared to several strong models with respect to the performance of dialogue generation and utterance representation. To evaluate the quality of dialogue generation, traditional models and large pre-trained language models are selected: * **Seq2Seq**: Vanilla Sequence-to-Sequence model used in machine translation with attention mechanism (Wang et al., 2017). * **CVAE**: CVAE (Wang et al., 2017) is a generative model combining variational auto-encoder (He et al., 2017) with RNN Sequence-to-Sequence framework. * **BART**: BART (Kang et al., 2017) pre-trains a Sequence-to-Sequence model combining bi-directional and auto-regressive Transformers. BART-large is used in experiments. * **DialoGPT**: DialoGPT (Wang et al., 2017) extends GPT-2 (Wang et al., 2017) to address conversational neural response generation by training on Reddit data. The medium version of DialoGPT is applied in experiments. * **BlenderBot**: BlenderBot (Wang et al., 2017) which features Blended Skill Talk set-up is one of the most performing open-domain chatbots. We apply the 2.7B parameter model in our experiments. To evaluate the utterance representation capability, we choose the following methods: * **BERT**: BERT (Devlin et al., 2017) is composed of bi-directional Transformer encoders pre-trained by masked language modeling on next sentence prediction task. * **Unsupervised SimCSE**: Unsupervised version of SimCSE (Krizhevsky et al., 2014) regards two augmented samples of one utterance as positive pair and perform contrastive learning, which is the state-of-the-art sentence embedding framework. ### Evaluation Metrics We assess the performance of dialogue generation where both automatic and human evaluation metrics are applied. Automatic evaluation metrics include BLEU-1/2 (Wang et al., 2017), Distinct-1/2 (Wang et al., 2017) and ROUGE-L (Wang et al., 2017). BLEU, Distinct and ROUGE-L measure the n-gram overlap between generated utterance and ground truth, generation diversity and the number of longest common subsequence between generated utterance and ground truth respectively. In human evaluation, we follow settings in PLATO (Chen et al., 2018), where evaluators are supposed to score on a scale of \(\{0,1,2\}\) from four aspects - fluency, coherence, informativeness and overall. Zero indicates bad performance, one indicates normal and two stands for good responses. Moreover, we also analyze if the generated responses are truly related to its supposed topic. We design two evaluation metrics named Hard Topic Hit Accuracy (HTHA) and Soft Topic Hit Accuracy (STHA), aiming to prove CTRLStruct can control the topic flow in generation process compared to other models. HTHA is defined as the proportion of generated responses' topic clusters matching their pseudo-labels. Here pseudo-label stands for the cluster identity number of ground truth response, which is obtained through the dialogue structure modeling part in CTRLStruct. STHA is defined as the proportion of generated responses' topic clusters similar to their pseudo-labels. The similarity is measured through the cosine distance between generated response's cluster center vector and ground truth's cluster center vector. If the similarity exceeds given threshold \(\varphi\), then the generated response is viewed as matching its pseudo-label. HTHA can be considered as special STHA whose threshold \(\varphi=1.00\). Besides, macro-\(F1\) and micro-\(F1\) are also applied to evaluate model's topic-aware generation performance. The evaluation of unsupervised utterance representation in dialogue is a challenge. We share the same sentiment with Reimers and Gurevych (Wang et al., 2017) that utterance representation with good expressiveness can help semantically similar sentences cluster. Since ground truth labels of clusters are unknown, internal evaluation metrics including Calinski-Harabasz Index (Calinski-Harabasz, 2017) and Davies-Bouldin Index (Davies and Bouldin, 2017) are used to assess the quality of clusters. Higher Calinski-Harabasz Index score and lower Davies-Bouldin Index score indicate better clusters definition and separation. ### Implementation Details In utterance representation training, we set the coefficient \(\lambda_{1}\) of Weak Relativity Loss as 0.2. We found that a large \(\lambda_{1}\) will cause collapse in model training. Utterance representations are separated into 60 clusters through K-Means. The model is trained for 20 \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Automatic Evaluation**} & \multicolumn{3}{c}{**Human Evaluation**} \\ \cline{3-8} & & **BLEU-1/2** & **Distinct-1/2** & **ROUGE-L** & **Fluency** & **Coherence** & **Informativeness** & **Overall** \\ \hline \multirow{8}{*}{PersonaChat} & Seq2Seq & 0.170 / 0.023 & 0.008 / 0.032 & 0.062 & 1.72 & 0.40 & 1.24 & 0.98 \\ & CVAE & 0.192 / 0.041 & 0.018 / 0.073 & 0.059 & 1.76 & 0.64 & 1.36 & 1.28 \\ & BART & 0.291 / 0.109 & 0.030 / 0.107 & 0.154 & 1.84 & 0.82 & 1.42 & 1.38 \\ & DialoGPT & **0.324** / 0.114 & 0.031 / 0.137 & 0.188 & 1.84 & 1.04 & **1.80** & 1.58 \\ & BlenderBot & 0.280 / 0.112 & **0.036** / **0.167** & **0.198** & **1.88** & 1.20 & 1.76 & 1.52 \\ & CTRLStruct w/o Total_Loss & 0.267 / 0.092 & 0.024 / 0.077 & 0.151 & 1.80 & 0.76 & 1.54 & 1.26 \\ & CTRLStruct w/o WR_Loss & 0.306 / 0.113 & 0.030 / 0.111 & 0.159 & 1.86 & 1.28 & 1.70 & 1.56 \\ & CTRLStruct & 0.316 / **0.119** & 0.032 / 0.114 & 0.161 & **1.88** & **1.40** & 1.72 & **1.62** \\ \hline \multirow{8}{*}{DailyDialog} & Seq2Seq & 0.309 / 0.036 & 0.068 / 0.220 & 0.049 & 1.86 & 0.46 & 0.62 & 0.94 \\ & CVAE & 0.381 / 0.138 & 0.072 / 0.299 & 0.046 & 1.86 & 0.58 & 0.72 & 1.18 \\ \cline{1-1} & BART & 0.364 / 0.141 & 0.112 / 0.378 & 0.075 & 1.92 & 1.66 & 1.38 & 1.54 \\ \cline{1-1} & DialoGPT & 0.353 / 0.134 & 0.106 / 0.352 & 0.105 & 1.90 & 1.60 & 1.46 & 1.60 \\ \cline{1-1} & BlenderBot & 0.335 / 0.124 & 0.111 / 0.340 & **0.108** & 1.94 & 1.64 & 1.62 & 1.72 \\ \cline{1-1} & CTRLStruct w/o Total_Loss & 0.346 / 0.130 & 0.095 / 0.328 & 0.076 & 1.90 & 1.34 & 1.24 & 1.36 \\ \cline{1-1} & CTRLStruct w/o WR_Loss & 0.388 / 0.151 & 0.112 / 0.387 & 0.077 & 1.92 & **1.72** & 1.56 & 1.70 \\ \cline{1-1} & CTRLStruct & **0.397** / **0.157** & **0.118** / **0.402** & 0.080 & **1.96** & **1.72** & **1.68** & **1.78** \\ \hline \hline \end{tabular} \end{table} Table 1. Automatic and human evaluations of our model and five other strong dialogue generation models on two open-domain dialogue datasets without additional labelled information. The highest values are written in bold. epochs with batch size of 256. The temperature coefficient of models is set as 0.05. We use nlpaug package (Wang et al., 2017) for data augmentation.2 Footnote 2: [https://github.com/makcedward/nlpaug](https://github.com/makcedward/nlpaug) In generation part, pre-trained BART-large is chosen as encoder and decoder in our model. Actor-to-Critic network is used in behavioral cloning in CTRLStruct. The coefficient \(\lambda_{2}\) of KL-divergence is set as 1.2 to reach the best performance in both datasets. In automatic evaluation, we use beam search in pre-trained language models while in human evaluation and topic matching evaluation, both top-\(p\) sampling (Kang et al., 2017) and top-\(k\) sampling (Kang et al., 2018) are adopted to generate more diverse conversations. ## 5. Results and Analysis ### Dialogue Response Generation (RQ1) As suggested in the empirical study of dialogue evaluation (Zhou et al., 2017), some automatic metrics are originally built for machine translation and can't systematically evaluate the quality of generated dialogue. So we combine automatic evaluation with human evaluation. Experimental results are listed in Table 1. In automatic evaluation, CTRLStruct gets the highest BLEU-1/2 score in DailyDialog and the highest BLEU-2 score in PersonaChat where DialoGPT scores better on BLEU-1. Our model remains ahead in Distinct-1/2 in DailyDialog dataset, but fails to surpass DialoGPT in PersonaChat dataset. In ROUGE-L, BlenderBot performs the best in both datasets. As to human evaluation, we invited fifty people to communicate with eight models trained on different datasets in double blind experiment. Notice that evaluators know which models are trained on the same dataset for fair comparison. Results show that all the models have the ability to generate fluent dialogue without obvious grammatical or spelling errors. Superiority of CTRLStruct is reflected in coherence evaluation, where the best score is more than one point higher than the lowest score. During informativeness evaluation, DialoGPT performs better in PersonaChat dataset, but in DailyDialog CTRLStruct is able to generate complicated utterances from vocabulary and structure perspective. In comprehensive assessment, CTRLStruct gets the highest score in both datasets. All the experiments indicate that CTRLStruct performs better overall than several strong dialogue generation models. ### Topic Control Quality (RQ2) In this part, we test whether the generated responses of different models are related to their supposed topics and how well dialogue structure control the topics in generation process. We view the ground truth answer's topic in test set as supposed topic and turn the evaluation into a multi-classification problem, where the generated response's topic needs to be correctly classified to its ground truth response's topic. Since the true labels of topics are unable to know, pseudo-labels, namely identity numbers of topic clusters, are utilized in evaluation. These pseudo-labels are obtained through the clustering process in dialogue structure modeling part of CTRLStruct. The labels of generated responses are also calculated through the dialogue structure modeling part in CTRLStruct for fair comparison. So the multi-classification problem is formulated as given \(k\) categories (\(k\) is the total number of clusters) and some samples (generated responses), we need to assess if the model has the ability to perform right topic classifications. Table 2 shows the experimental results of evaluating different models' response relatedness to topics in DailyDialog and PersonaChat. CTRLStruct outperforms its counterparts in macro-\(F\)1, micro-\(F\)1, Hard Topic Hit Accuracy (HTHA) and Soft Topic Hit Accuracy (STHA) on both datasets, indicating that our method can generate more topic-related responses compared to other baselines. When the similarity threshold \(\varphi\geq 0.90\), HTHA and STHA have the same results, indicating the topic segmentation is good. With the similarity constraint becoming loose, the topic hit accuracy surges. Compared to the results in DailyDialog, scores in PersonaChat is rather low. We attribute this phenomenon to the low quality of PersonaChat dataset. PersonaChat isn't consisted of human conversations like DailyDialog and its topic diversity is low. So when we preset more topic categories than it actually has, a lot of noise will be introduced and damage CTRLStruct. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Dataset** & **Model** & **macro-\(F\)1** & **micro-\(F\)1** & **HTHA** & **STHA** (\(\varphi=0.95\)) & **STHA** (\(\varphi=0.90\)) & **STHA** (\(\varphi=0.85\)) & **STHA** (\(\varphi=0.80\)) \\ \hline \multirow{8}{*}{PersonaChat} & Seq2Seq & 0.02 & 0.02 & 2.97\% & 2.97\% & 2.97\% & 3.09\% & 5.75\% \\ & CVAE & 0.02 & 0.03 & 3.51\% & 3.51\% & 3.51\% & 3.71\% & 7.20\% \\ & BART & 0.03 & 0.06 & 8.12\% & 8.12\% & 8.12\% & 8.85\% & 18.00\% \\ & DialoGPT & 0.03 & 0.05 & 7.80\% & 7.80\% & 7.80\% & 8.33\% & 16.17\% \\ & BlenderBot & 0.03 & 0.07 & 8.75\% & 8.75\% & 8.75\% & 9.56\% & 21.01\% \\ & CTRLStruct w/o TotalLoss & 0.03 & 0.06 & 7.83\% & 7.83\% & 7.83\% & 8.40\% & 17.54\% \\ & CTRLStruct w/o WR\_Loss & **0.04** & **0.08** & **9.27\%** & 9.27\% & 9.27\% & 10.13\% & 24.32\% \\ & CTRLStruct & **0.04** & **0.08** & **9.52\%** & **9.52\%** & **9.52\%** & **10.46\%** & **26.54\%** \\ \hline \multirow{8}{*}{DailyDialog} & Seq2Seq & 0.09 & 0.07 & 9.43\% & 9.43\% & 9.43\% & 13.00\% & 18.49\% \\ & CVAE & 0.09 & 0.06 & 9.22\% & 9.22\% & 9.22\% & 12.49\% & 17.97\% \\ & BART & 0.14 & 0.12 & 16.38\% & 16.38\% & 16.38\% & 19.86\% & 27.81\% \\ & DialoGPT & 0.14 & 0.11 & 13.22\% & 13.22\% & 13.22\% & 16.41\% & 23.14\% \\ & BlenderBot & 0.16 & 0.14 & 17.89\% & 17.89\% & 17.89\% & 22.05\% & 31.14\% \\ & CTRLStruct w/o TotalLoss & 0.14 & 0.12 & 15.81\% & 15.81\% & 15.81\% & 19.32\% & 26.62\% \\ & CTRLStruct w/o WR\_Loss & 0.19 & **0.18** & 22.86 \% & 22.86\% & 22.86\% & 27.27\% & 37.08\% \\ & CTRLStruct & **0.20** & **0.18** & **23.05\%** & **23.05\%** & **23.05\%** & **27.78\%** & **37.70\%** \\ \hline \hline \end{tabular} \end{table} Table 2. Experimental results showing topic-level relatedness of generated responses and its pseudo-label. HTHA and STHA are short for Hard Topic Hit Accuracy and Soft Topic Hit Accuracy. The highest values are written in bold. ### Utterance Representation (RQ3) Under the assumptions that utterances with similar meanings are supposed to get closer in the representation space, we evaluate utterance representation quality of different models through performing clustering on utterance representations and analyze the metrics of clusters. As shown in Table 4, CTRLStruct outperforms other methods in both Calinski-Harabasz Index and Davies-Bouldin Index on two datasets. BERT and SimCSE score similarly in DailyDialog, but the latter gets a poor performance in PersonaChat. Moreover, we test the influence of cluster numbers to CTRLStruct. As shown in Figure 3, with the number of clusters increasing from 30 to 90, Calinski-Harabasz Index drops fast first and then slow. As to Davies-Bouldin Index, the value fluctuates when the number of clusters increases in both datasets. One more premise to consider is that the finer topic segmentation is, the higher quality dialogue structure modeling will achieve. A small number of topics will affect the control ability of dialogue structure. So it's supposed to increase the number of clusters as much as possible without damaging the clustering quality. Combining all the factors, we draw the conclusion that in DailyDialog the best \(K\) number in K-Means is 60 and in PersonaChat the best \(K\) number is 50. Generally speaking, CTRLStruct is robust to the number of clusters. ### Generalization Ability (RQ4) CTRLStruct is a framework of discovering dialogue structure from unlabelled corpus and conducting controlled generation, so it's agnostic to the type of backbone models. Original CTRLStruct utilizes BART which is encoder-decoder architecture as backbone. We conduct experiments on GPT2 (Wang et al., 2019) which belongs to decoder-only architecture to see whether CTRLStruct can improve its coherence. Results are listed in Table 3. The topic hit rate of GPT2 increases when integrated with CTRLStruct. However it should be noted the proposed dialogue structure is generated through Transformer encoder, so it can't be applied to none-Transformer network. ### Ablation Study We conduct ablation study on CTRLStruct from three perspectives: utterance representation, response generation, and dialogue structure control. As illustrated in Table 4, BERT outputs original representations and can be viewed as CTRLStruct without total loss. SimCSE is equivalent to CTRLStruct without Relative Correlation Loss. We notice that utterance representation performance is gradually improved with the constraint of Absolute Correlation Loss and Relative Correlation Loss. In generation stage, we assess CTRLStruct without Weak Relativity Loss and CTRLStruct without total loss. Results in Table 1 show that CTRLStruct performs better than that without Weak Relativity Loss, and CTRLStruct without total loss even performs worse than BART. In dialogue structure control stage, Figure 2 shows similar results. The constraint of Weak Relativity Loss can slightly improve CTRLStruct's topic hit accuracy. But CTRLStruct without Total Loss gets poor performance on all indicators. We believe wrong dialogue structure caused by poor utterance representation can mislead response generation and cause bad performance in both topic control and response generation, which is consistent with previous opinion that good utterance representation leads to better dialogue structure and good dialogue structure leads to better generated dialogues. ### Case Analysis We analyze some cases in response generation and dialogue structure modeling, including communications with CTRLStruct to show its advantages and utterances randomly selected in different topic clusters. Due to space limitations, details are shown in Appendix. ## 6. Conclusion In this paper, we present a novel framework CTRLStruct, which combines contrastive learning, clustering, and imitation learning to effectively capture dialogue utterance representation and construct topic-level dialogue structure. Moreover, dialogue structure which helps track topic flow in conversations is integrated into open-domain response generation. Experiments confirm the superiority of CTRLStruct to other strong dialogue generation models in generating coherent and topic-related conversations. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**DailyDialog**} & \multicolumn{2}{c}{**PersonaChat**} \\ & **CHI \(\uparrow\)** & **DBI\(\downarrow\)** & **CHI \(\uparrow\)** & **DBI \(\downarrow\)** \\ \hline BERT & 566.79 & 3.62 & 848.64 & 3.83 \\ SimCSE & 566.64 & 3.57 & 386.52 & 4.36 \\ CTRLStruct w/o WR\_Loss & 618.42 & 3.28 & 857.78 & 3.65 \\ CTRLStruct & **649.22** & **3.26** & **871.89** & **3.63** \\ \hline \hline \end{tabular} \end{table} Table 4. Results of clustering performance on utterance representation, where CHI and DBI stand for Calinski-Harabasz Index and Davies-Bouldin Index respectively. Weak Relativity Loss is abbreviated to WR_Loss. Figure 3. Line charts showing the influence of different cluster numbers on clustering performance. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset** & **Model** & **HTHA** & **STHA(0.85)** & **STHA(0.80)** \\ \hline \multirow{2}{*}{PersonaChat} & GPT2 & 5.72\% & 6.04\% & 12.02\% \\ & GPT2 + CTRLStruct & **6.84\%** & **7.28\%** & **14.88\%** \\ \hline \multirow{2}{*}{DailyDialog} & GPT2 & 11.73\% & 14.30\% & 20.59\% \\ & GPT2 + CTRLStruct & **16.95\%** & **20.70\%** & **29.13\%** \\ \hline \hline \end{tabular} \end{table} Table 3. Results showing generalization ability of CTRLStruct on GPT2 in two datasets. ## Acknowledgments This research is supported by the National Natural Science Foundation of China (No.62106105), the CCF-Tencent Open Research Fund (No.RAGR20220122), the Scientific Research Starting Foundation of Nanjing University of Aeronautics and Astronautics (No.YQR21022), and the High Performance Computing Platform of Nanjing University of Aeronautics and Astronautics.
2307.07112
Concavity property of minimal $L^{2}$ integrals with Lebesgue measurable gain VIII -- partial linearity and log-convexity
In this article, we give some necessary conditions for the concavity property of minimal $L^2$ integrals degenerating to partial linearity, a charaterization for the concavity degenerating to partial linearity for open Riemann surfaces, and some relations between the concavity property for minimal $L^2$ integrals and the log-convexity for Bergman kernels.
Shijie Bao, Qi'an Guan, Zheng Yuan
2023-07-14T01:28:09Z
http://arxiv.org/abs/2307.07112v2
Concavity property of minimal \(L^{2}\) integrals with Lebesgue measurable gain viii -- partial linearity and log-concavity ###### Abstract. In this article, we give some necessary conditions for the concavity property of minimal \(L^{2}\) integrals degenerating to partial linearity, a characterization for the concavity degenerating to partial linearity for open Riemann surfaces, and some relations between the concavity property for minimal \(L^{2}\) integrals and the log-convexity for Bergman kernels. Key words and phrases:minimal \(L^{2}\) integral, plurisubharmonic function, concavity, partial linearity 2020 Mathematics Subject Classification: 32D15, 32E10, 32L10, 32U05, 32W05 ## 1. Introduction Let \(D\subset\mathbb{C}^{n}\) be a pseudoconvex domain containing the origin \(o\in\mathbb{C}^{n}\), and let \(\psi<0\) and \(\varphi+\psi\) be plurisubharmonic functions on \(D\). Let \(f_{0}\) be a holomorphic function near \(o\). Let us recall the minimal \(L^{2}\) integrals (see [18]) \[G(t):=\inf\left\{\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}:F\in\mathcal{O}(\{\psi <-t\})\,\&\,(F-f_{0},o)\in\mathcal{I}(\varphi+\psi)_{o}\right\},\] where \(t\geq 0\) and \(\mathcal{I}(\varphi+\psi)\) is the multiplier ideal sheaf, which was defined as the sheaf of germs of holomorphic functions \(f\) such that \(|f|^{2}e^{-\varphi-\psi}\) is locally integrable (see e.g. [40, 35, 37, 9, 10, 7, 11, 34, 38, 39, 8, 15]). In [18], Guan established a concavity property for the minimal \(L^{2}\) integrals \(G(t)\), and a sharp version of Guan-Zhou's effectiveness result [33] of the strong openness property [32]. **Theorem 1.1** ([18]).: _If \(G(0)<+\infty\), then \(G(-\log r)\) is concave with respect to \(r\in(0,1]\)._ Applying the concavity (Theorem 1.1), Guan obtained a proof of Saitoh's conjecture for conjugate Hardy \(H^{2}\) kernels [17] and a sufficient and necessary condition of the existence of decreasing equisingular approximations with analytic singularities for the multiplier ideal sheaves with weights \(\log(|z_{1}|^{a_{1}}+\cdots+|z_{n}|^{a_{n}})\)[19]. After that, Guan [16] (see also [21]) presented a concavity property on Stein manifolds with smooth gain (the weakly pseudoconvex Kahler case was obtained by Guan-Mi [20]), which was applied by Guan-Yuan to obtain an optimal support function related to the strong openness property [27] and an effectiveness result of the strong openness property in \(L^{p}\)[25]. In [28], Guan-Yuan obtained the concavity property on Stein manifolds with Lebesgue measurable gain (the weakly pseudoconvex Kahler case was obtained by ## 1. Introduction Let \(M\) be a compact manifold with compact boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\partial M\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\mathcal{F}\). Let \(\mathcal{F}\) be a smooth manifold with boundary \(\mathcal{F}\). **Theorem 1.2** ([22], see also [23, 24]).: _Let \(c\in\mathcal{P}_{T,M}\) satisfy \(\int_{T_{1}}^{+\infty}c(t)e^{-t}dt<+\infty\), where \(T_{1}>T\). If there exists \(t\in[T,+\infty)\) satisfying that \(G(t)<+\infty\), then \(G(h^{-1}(r))\) is concave with respect to \(r\in(\int_{T_{1}}^{T}c(t)e^{-t}\mathrm{d}t,\int_{T_{1}}^{+\infty}c(t)e^{-t} \mathrm{d}t)\), \(\lim\limits_{t\to T+0}G(t)=G(T)\) and \(\lim\limits_{t\to+\infty}G(t)=0\), where \(h(t)=\int_{t}^{+\infty}c(t_{1})e^{-t_{1}}\mathrm{d}t_{1}\)._ Guan-Mi-Yuan also obtained the following corollary of Theorem 1.2, which is a necessary condition for the concavity degenerating to linearity (some related results can be referred to [21, 28, 42]). **Corollary 1.3** ([22]).: _Let \(c\in\mathcal{P}_{T,M}\) satisfy \(\int_{T_{1}}^{+\infty}c(t)e^{-t}dt<+\infty\), where \(T_{1}>T\). If \(G(t)\in(0,+\infty)\) for some \(t\geq T\) and \(G(h^{-1}(r))\) is linear with respect to \(r\in(0,\int_{T}^{+\infty}c(s)e^{-s}\mathrm{d}s)\), then there exists a unique holomorphic \((n,0)\) form \(F\) on \(M\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \(G(t;c)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\geq T\)._ _Furthermore, we have_ \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}a(-\psi)=\frac{G(T_{1};c)} {\int_{T_{1}}^{+\infty}c(t)e^{-t}\mathrm{d}t}\int_{t_{1}}^{t_{2}}a(t)e^{-t} \mathrm{d}t,\] _for any nonnegative measurable function \(a\) on \((T,+\infty)\), where \(T\leq t_{1}<t_{2}\leq+\infty\)._ Thus, there is a natural problem, which was posed in [29]: **Problem 1.4** ([29]).: _How to characterize the concavity property degenerating to linearity?_ For open Riemann surfaces, Guan-Yuan [28] gave an answer to Problem 1.4 for single point, i.e. a characterization for the concavity \(G(h^{-1}(r);\varphi,\psi,c)\) degenerating to linearity, where the weights \(\varphi\) may not be subharmonic and the gain \(c\) may not be smooth (the case of subharmonic weights and smooth gain was proved by Guan-Mi [21]). The characterization [28] was used by Guan-Mi-Yuan to prove a weighted version of Suita conjecture for higher derivatives [22], and was used by Guan-Yuan to prove a weighted version of Saitoh's conjecture [31]. After that, Guan-Yuan [29] gave an answer to Problem 1.4 for finite points on open Riemann surfaces, which was used to obtain a characterization of the holding of equality in optimal jets \(L^{2}\) extension problem from analytic subsets to open Riemann surfaces. Some recent results on high-dimensional manifolds can be referred to [30, 1, 2, 3]. In this article, we consider the case \(G(h^{-1}(r))\) being partially linear, and give some necessary conditions for the concavity property of minimal \(L^{2}\) integrals degenerating to partial linearity. For open Riemann surfaces, we give a charaterization for the concavity degenerating to partial linearity. Finally, we discuss the relations between concavity property for minimal \(L^{2}\) integrals and log-convexity for Bergman kernels. ### \(G(h^{-1}(r))\) being partially linear In this section, we discuss the case that the minimal \(L^{2}\) integral \(G(h^{-1}(r))\) is partially linear. The following theorem gives a necessary condition for \(G(h^{-1}(r))\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). **Theorem 1.5**.: _Let \(c\in\mathcal{P}_{T,M}\) satisfy \(\int_{T_{1}}^{+\infty}c(t)e^{-t}dt<+\infty\), where \(T_{1}>T\). Let \(T_{2}\in(T_{1},+\infty)\). If \(G(t)\in(0,+\infty)\) for some \(t\geq T\) and \(G(h^{-1}(r))\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\), then there exists a unique holomorphic \((n,0)\) form \(F\) on \(\{\psi<-T_{1}\}\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \(G(t;c)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\in[T_{1},T_{2}]\)._ _Furthermore, we have_ \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}a(-\psi)=\frac{G(T_{1};c)- G(T_{2};c)}{\int_{T_{1}}^{T_{2}}c(t)e^{-t}\mathrm{d}t}\int_{t_{1}}^{t_{2}}a(t)e^{ -t}\mathrm{d}t, \tag{1.1}\] _for any nonnegative measurable function \(a\) on \([T_{1},T_{2}]\), where \(T_{1}\leq t_{1}<t_{2}\leq T_{2}\)._ We give a remark on the \(F\) in the above theorem. **Remark 1.6**.: _Let \(t\in[T_{1},T_{2})\), and let \(\tilde{c}\) be a nonnegative measurable function on \([T,+\infty)\). If \(\mathcal{H}^{2}(\tilde{c},t)\subset\mathcal{H}^{2}(c,t)\), and the holomorphic \((n,0)\) form \(F\) in Theorem 1.5 satisfies_ \[\int_{\{\psi<-T_{2}\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi)=G(T_{2};\tilde{c}),\] _then_ \[G(t;\tilde{c})=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi)=G(T_{2}; \tilde{c})+\frac{G(T_{1};c)-G(T_{2};c)}{\int_{T_{1}}^{T_{2}}c(s)e^{-s} \mathrm{d}s}\int_{t}^{T_{2}}\tilde{c}(s)e^{-s}\mathrm{d}s\] _for \(t\in[T_{1},T_{2}]\)._ The following theorem is a sufficient condition for the minimal \(L^{2}\) integral \(G(h^{-1}(r))\) being partially linear. **Theorem 1.7**.: _Let \(\psi<-T,\tilde{\psi}<-T\) be plurisubharmonic functions on \(M\), and let \(\varphi,\tilde{\varphi}\) be Lebesgue measurable functions on \(M\) such that \(\varphi+\psi,\tilde{\varphi}+\tilde{\psi}\) are plurisubharmonic functions on \(M\). Let \(Z_{0}\), \(\mathcal{F}\), \(\tilde{Z}_{0}\), \(\tilde{\mathcal{F}}\) be as above with respect to \((\varphi,\psi),(\tilde{\varphi},\tilde{\psi})\). Let \(T_{1},T_{2}\in[T,+\infty)\) with \(T_{1}<T_{2}\). Let \(c(t)\in\mathcal{P}_{T,M,\varphi,\psi}\) satisfying \(\int_{T_{2}}^{+\infty}c(t)e^{-t}dt<+\infty\), \(\tilde{c}(t)\in\mathcal{P}_{T,M,\tilde{\varphi},\tilde{\psi}}\) satisfying \(\int_{T_{2}}^{+\infty}c(t)e^{-t}dt<+\infty\). Assume that_ _(1). \(\tilde{\psi}<-T_{1}\) on \(\{\psi<-T_{1}\}\), and \(\tilde{\psi}<-T_{2}\) on \(\{\psi<-T_{2}\}\);_ _(2). \(\tilde{\psi}=\psi,\tilde{\varphi}=\varphi\) on \(\{-T_{2}\leq\psi<-T_{1}\}\);_ _(3). \(\tilde{Z}_{0}=Z_{0}\), \(\mathcal{I}(\tilde{\psi}+\tilde{\varphi})|_{Z_{0}}\subset\mathcal{I}(\psi+ \varphi)|_{Z_{0}}\) and \(\mathcal{F}|_{Z_{0}}\subset\tilde{\mathcal{F}}|_{Z_{0}}\);_ _(4). \(\mathcal{H}^{2}(\tilde{c},t,\tilde{\varphi},\tilde{\psi})\supset\mathcal{H}^{2 }(c,t,\varphi,\psi)\) for any \(t\in[T_{1},T_{2}]\)._ _If \(G(\tilde{h}^{-1}(r);\tilde{\varphi},\tilde{\psi},\tilde{c})\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}\tilde{c}(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}\tilde{c}(s)e^{-s}\mathrm{d}s\)], where \(\tilde{h}(t)=\int_{t}^{+\infty}\tilde{c}(l)e^{-l}\mathrm{d}l\), and the holomorphic \((n,0)\) form \(\tilde{F}\) in Theorem 1.5 which satisfies that \((\tilde{F}-f)\in H^{0}(\tilde{Z}_{0},(\mathcal{O}(K_{M})\otimes\tilde{ \mathcal{F}})|_{\tilde{Z}_{0}})\) and_ \[\int_{\{\tilde{\psi}<-t\}}|\tilde{F}|^{2}e^{-\tilde{\varphi}}\tilde{c}(-\tilde {\psi})=G(t;\tilde{\varphi},\tilde{\psi},\tilde{c}),\ \forall t\in[T_{1},T_{2}], \tag{1.2}\] _also satisfies that_ \[\int_{\{\psi<-T_{2}\}}|F|^{2}e^{-\varphi}c(-\psi)=G(T_{2};\varphi,\psi,c),\] _Then \(G(h^{-1}(r);\varphi,\psi,c)\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s\)], where \(h(t)=\int_{t}^{+\infty}c(l)e^{-l}\mathrm{d}l\)._ Applying Theorem 1.5, we obtain the following necessary condition for \(\varphi+\psi\) when \(G(h^{-1}(r))\) is partially linear. **Theorem 1.8**.: _Let \((M,X,Z)\) be a triple satisfying condition \((A)\), and let \(\psi\) be a plurisubharmonic function on \(M\). Let \(\varphi\) be a Lebesgue measurable function on \(M\) such that \(\varphi+\psi\) is a plurisubharmonic function on \(M\). Let \(T_{1},T_{2}\in[T,+\infty)\) with \(T_{1}<T_{2}\). Let \(c\in\mathcal{P}_{T,M}\) satisfy \(\int_{T_{2}}^{+\infty}c(t)e^{-t}dt<+\infty\). Assume that there exists \(t\geq T\) such that \(G(t)\in(0,+\infty)\). If there exists a Lebesgue measurable function \(\tilde{\varphi}\) on \(M\) such that:_ _(1). \(\tilde{\varphi}+\psi\) is plurisubharmonic function on \(M\);_ _(2). \(\tilde{\varphi}=\varphi\) on \(\{\psi<-T_{2}\}\);_ _(3). \(\tilde{\varphi}\geq\varphi\) and \(\tilde{\varphi}\not\equiv\varphi\) on the interior of \(\{-T_{2}\leq\psi<-T_{1}\}\) (assume that the interior is not empty);_ _(4). \(\lim\limits_{t\to T_{1}+0}\sup\limits_{\{-t\leq\psi<-T_{1}\}}(\tilde{ \varphi}-\varphi)=0\);_ _(5). \(\tilde{\varphi}-\varphi\) is bounded on \(\{\psi<-T_{1}\}\), then \(G(h^{-1}(r))\) is not linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\int_{T_{1}}^{+\infty}c(s)e^ {-s}\mathrm{d}s]\)._ _Especially, if \(\varphi+\psi\) is strictly plurisubharmonic at some \(z_{1}\), where \(z_{1}\) is a point in the interior of \(\{-T_{2}\leq\psi<-T_{1}\}\), then \(G(h^{-1}(r))\) is not linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\int_{T_{1}}^{+\infty}c(s)e^ {-s}\mathrm{d}s]\)._ Applying Theorem 1.5, we obtain the following necessary condition for \(\psi\) when \(G(h^{-1}(r))\) is partially linear. **Theorem 1.9**.: _Let \((M,X,Z)\) be a triple satisfying condition \((A)\), and let \(\psi\) be a plurisubharmonic function on \(M\). Let \(\varphi\) be a Lebesgue measurable function on \(M\) such that \(\varphi+\psi\) is a plurisubharmonic function on \(M\). Assume that there exists \(t\geq T\) such that \(G(t)\in(0,+\infty)\). Let \(T_{1},T_{2}\in[T,+\infty)\) with \(T_{1}<T_{2}\). Let \(c\in\mathcal{P}_{T,M}\) satisfy \(\int_{T_{2}}^{+\infty}c(t)e^{-t}dt<+\infty\). If there exists a plurisubharmonic function \(\tilde{\psi}\) on \(M\) such that:_ _(1). \(\tilde{\psi}<-T_{1}\) on \(\{\psi<-T_{1}\}\);_ _(2). \(\tilde{\psi}=\psi\) on \(\{\psi<-T_{2}\}\);_ _(3). \(\tilde{\psi}\geq\psi\) and \(\tilde{\psi}\not\equiv\psi\) on the interior of \(\{-T_{2}\leq\psi<-T_{1}\}\) (assume that the interior is not empty);_ _(4). \(\lim\limits_{t\to T_{2}-0}\sup\limits_{\{-T_{2}\leq\psi<-t\}}(\tilde{\psi}- \psi)=0\),_ _then \(G(h^{-1}(r))\) is not linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\int_{T_{1}}^{+\infty}c(s)e^ {-s}\mathrm{d}s]\)._ _Especially, if \(\psi\) is strictly plurisubharmonic at some \(z_{1}\), where \(z_{1}\) is a point in the interior of \(\{-T_{2}\leq\psi<-T_{1}\}\), then \(G(h^{-1}(r))\) is not linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\int_{T_{1}}^{+\infty}c(s)e^ {-s}\mathrm{d}s]\)._ We give an example of the partially linear case. **Example 1.10**.: _Let \(M=\Delta\) be the unit disc in \(\mathbb{C}\), and \(Z_{0}=o\) be the origin. Let \(g<0\) be a increasing convex function on \((-\infty,0)\). Take \(\psi=g(\log|z|)\) and \(\varphi=2\log|z|-g(\log|z|)\) on \(\Delta\). Let \(f\equiv dz\), \(c\equiv 1\) and \(\mathcal{F}_{o}=(z)_{o}\). It is clear that_ \[G(t)=\int_{\{\psi<-t\}}|dz|^{2}e^{-\varphi}=4\pi\int_{0}^{e^{g^{-1}(-t)}}se^{-2 \log s+g(\log s)}ds\] _for any \(t\geq 0\). Thus, for any \(a,b\in[0,1]\)\((a<b)\), \(G(-\log r)\) is linear on \([a,b]\) if and only if \(g^{\prime}=const\) on \((\log a,\log b)\)._ ### A characterization for the concavity degenerating to partial linearity In this section, we give a characterization for the concavity degenerating to partial linearity on open Riemann surfaces. Let \(\Omega\) be an open Riemann surface, which admits a nontrivial Green function \(G_{\Omega}\). Let \(a\) be a negative number, and let \(z_{0}\in\Omega\). Let \[\psi=G_{\Omega}(\cdot,z_{0})+\max\{G_{\Omega}(\cdot,z_{0}),a\}\] on \(\Omega\), and let \[\varphi=\varphi_{0}+\min\{G_{\Omega}(\cdot,z_{0}),a\}\] on \(\Omega\), where \(\varphi_{0}\) is a subharmonic function on \(\Omega\). Let \(c(t)\) be a Lebesgue measurable function on \([0,+\infty)\), such that \(c(t)e^{-t}\) is decreasing on \([0,+\infty)\) and \(\int_{0}^{+\infty}c(t)e^{-t}dt<+\infty\). Let \(w\) be a local coordinate on a neighborhood \(V_{z_{0}}\) of \(z_{0}\) satisfying \(w(z_{0})=0\). Let \(f_{0}\) be a holomorphic \((1,0)\) form on \(V_{z_{0}}\). Denote that \[G(t):=\inf\Bigg{\{}\int_{\{\psi<-t\}}|\tilde{F}|^{2}e^{-\varphi} c(-\psi): \tilde{F}\in H^{0}(\{\psi<-t\},\mathcal{O}(K_{\Omega}))\\ \&\,(\tilde{F}-f_{0},z_{0})\in(\mathcal{O}(K_{\Omega})\otimes \mathcal{I}(\varphi+\psi))_{z_{0}}\Bigg{\}}\] Note that \(\psi\) and \(\varphi+\psi\) are subharmonic on \(\Omega\). Then \(G(h^{-1}(r))\) is concave on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\) by Theorem 1.2, where \(h(t)=\int_{t}^{+\infty}c(s)e^{-s}ds\) for \(t\geq 0\). Let us recall some notations (see [13, 28, 22]). Let \(p:\Delta\to\Omega\) be the universal covering from unit disc \(\Delta\) to \(\Omega\). For any function \(u\) on \(\Omega\) with value \([-\infty,+\infty)\) such that \(e^{u}\) is locally the modulus of a holomorphic function, there exist a character \(\chi_{u}\) (a representation of the fundamental group of \(\Omega\)) and a holomorphic function \(f_{u}\) on \(\Delta\), such that \(|f_{u}|=p^{*}\,(e^{u})\) and \(g^{*}f=\chi(g)f\) for any element \(g\) of the fundamental group of \(\Omega\), where \(|\chi|=1\). If \(u_{1}-u_{2}=\log|f|\), where \(f\) is a holomorphic function on \(\Omega\), then \(\chi_{u_{1}}=\chi_{u_{2}}\). For the Green function \(G_{\Omega}(\cdot,z_{0})\), denote that \(\chi_{z_{0}}:=\chi_{G_{\Omega}(\cdot,z_{0})}\) and \(f_{z_{0}}:=f_{G_{\Omega}(\cdot,z_{0})}\). We give a characterization for the concavity of \(G(h^{-1}(r))\) degenerating to partial linearity. **Theorem 1.11**.: _Assume that \(\{z\in\Omega:G_{\Omega}(z,z_{0})=a\}\Subset\Omega\) and \(G(0)\in(0,+\infty)\). Then \(G(h^{-1}(r))\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\) and \([\int_{-2a}^{+\infty}c(t)e^{-t}dt,\int_{0}^{+\infty}c(t)e^{-t}dt]\) if and only if the following statements hold:_ \((1)\)_\(\varphi_{0}=2\log|g|+2u\) on \(\Omega\), where \(u\) is a harmonic function on \(\Omega\) and \(g\) is a holomorphic function on \(\Omega\) satisfying that \(ord_{z_{0}}(g)=ord_{z_{0}}(f_{0})\);_ \((2)\)_\(\chi_{-u}=\chi_{z_{0}}\), where \(\chi_{-u}\) and \(\chi_{z_{0}}\) is the characters associated to the functions \(-u\) and \(G_{\Omega}(\cdot,z_{0})\) respectively._ **Remark 1.12**.: _By Theorem 1.17 in [28] (see also Theorem 2.7), \(G(h^{-1}(r))\) is not linear on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\)._ Relations between the concavity property for minimal \(L^{2}\) integrals and the log-convexity for Bergman kernels Recall the definition of the weighted Bergman kernel on a bounded planar domain \(D\subset\mathbb{C}\). For any \(z\in D\), the weighted Bergman kernel on \(D\) is defined by \[B_{D,\varphi}(z):=\left(\inf\left\{\int_{D}|f|^{2}e^{-\varphi}:f\in\mathcal{O}(D ),\ f(z)=1\right\}\right)^{-1},\] where \(\varphi\) is a subharmonic function on \(D\). Let \(\phi\) be a negative subharmonic function on \(D\) such that \(\phi(z_{0})=-\infty\), where \(z_{0}\in D\). Berndtsson's subharmonicity of the fiberwise Bergman kernels ([4]) implies the log-convexity of the Bergman kernels on the sublevel sets of \(\phi\), i.e., the function \(\log B_{\{\phi<-t\}\cap D,\varphi}(z_{0})\) is convex with respect to \(t\in[0,+\infty)\) (see [5]). Note that for \(\phi=2G_{D}(\cdot,z_{0})\), the reciprocal of the weighted Bergman kernel on the sublevel set equals to some specific minimal \(L^{2}\) integral mentioned above. Actually, we have \[\big{(}B_{\{\phi<-t\}\cap D,\varphi}(z_{0})\big{)}^{-1}=\mathscr{G}(t),\] where \[\mathscr{G}(t):=\inf\bigg{\{}\int_{\{2G(\cdot,z_{0})<-t\}}|f|^{2} e^{-\varphi}: f\in\mathcal{O}(\{2G(\cdot,z_{0})<-t\})\] \[\&(f-1,z_{0})\in\mathcal{I}(2G(\cdot,z_{0}))_{z_{0}}\bigg{\}}.\] It follows that \(-\log\mathscr{G}(t)\) is convex with respect to \(t\in[0,+\infty)\). Besides, Theorem 1.2 gives that \(\mathscr{G}(-\log r)\) is concave with respect to \(r\in(0,1]\). We can find that, if we note that \(\log\mathscr{G}(t)+t\) is lower bounded on \((0,+\infty)\) (see [5]), the log-convexity of the Bergman kernels can imply the concavity of \(\mathscr{G}(-\log r)\), which follows from a simple calculation (see Lemma 2.18). It is natural to ask: **Problem 1.13**.: _Is \(-\log G(t)\) convex on \([0,+\infty)\) for general minimal \(L^{2}\) integral \(G(t)\)?_ We give some negative answers to the above problem. Follow the notations and assumptions in Theorem 1.11. **Theorem 1.14**.: _Assume that statements \((1)\) and \((2)\) in Theorem 1.11 hold. Then \(-\log G(t)\) is not convex on \([0,+\infty)\)._ Theorem 1.14 shows that \(-\log G(t)\) may not be convex if the weight \(\varphi\) is not subharmonic. In the following, we consider the case that the weight is trivial and the ideal \(\mathcal{F}_{z_{0}}\) is not the maximal ideal of \(\mathcal{O}_{z_{0}}\). Let \(\Omega\subset\mathbb{C}\) be a domain bounded by finite analytic curves, and let \(z_{0}\in\Omega\). Denote that \[G_{k}(t):=\inf\Bigg{\{}\int_{\{2(k+1)G_{\Omega}(\cdot,z_{0})<-t\} } \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Theorem 1.15**.: _If \(\Omega\) is not a disc (\(\Omega\) may be conformally equivalent to a disc), then there exists large enough \(k>0\) such that \(-\log G_{k}(t)\) is not convex on \([0,+\infty)\)._ We give an example on the disc as follows: **Example 1.16**.: _Let \(k\geq 1\) be an integer. Let \(\psi:=2(k+1)\log|z|\) on \(\Delta\), and let \(a_{j}\in\mathbb{C}\) for any \(0\leq j\leq k\). Denote that \(G(t):=\inf\{\int_{\{\psi<-t\}}|F|^{2}:\mathcal{O}(\{\psi<-t\})\,\&\,f^{(j)}(o)= j!a_{j}\) for any \(0\leq j\leq k\}\) for \(t\geq 0\). Note that \(G(-\log r)\) is concave on \((0,1]\) by Theorem 1.1. If there exist \(j_{1}\) and \(j_{2}\) satisfying \(j_{1}\neq j_{2}\), \(a_{j_{1}}\neq 0\) and \(a_{j_{2}}\neq 0\), then \(-\log G(t)\) is not convex on \([0,+\infty)\) (the proof is given in Section 6)._ ## 2. Preparations In this section, we do some preparations. ### Some lemmas about minimal \(L^{2}\) integrals We recall some lemmas, which will be used in the discussion of minimal \(L^{2}\) integrals. **Lemma 2.1** (see [22]).: _Let \((M,X,Z)\) be a triple satisfying condition \((A)\) and \(c(t)\in\mathcal{P}_{T,M}\). Let \(B\in(0,+\infty)\) and \(t_{0}>t_{1}>T\) be arbitrary given. Let \(\psi<-T\) be a plurisubharmonic function on \(M\). Let \(\varphi\) be a plurisubharmonic function on \(M\). Let \(F\) be a holomorphic \((n,0)\) form on \(\{\psi<-t_{0}\}\) such that_ \[\int_{\{\psi<-t_{0}\}}|F|^{2}e^{-\varphi}c(-\psi)<+\infty.\] _Then there exists a holomorphic \((n,0)\) form \(\tilde{F}\) on \(\{\psi<-t_{1}\}\) such that_ \[\int_{\{\psi<-t_{1}\}}|\tilde{F}-(1-b_{t_{0},B}(\psi))F|^{2}e^{-\varphi-\psi+ v_{t_{0},B}(\psi)}c(-v_{t_{0},B}(\psi))\leq C\int_{t_{1}}^{t_{0}+B}c(t)e^{-t} \mathrm{d}t,\] _where_ \[C=\int_{M}\frac{1}{B}\mathbb{I}_{\{-t_{0}-B<\psi<-t_{0}\}}|F|^{2}e^{-\varphi- \psi},\] \(b_{t_{0},B}(t)=\int_{-\infty}^{t}\frac{1}{B}\mathbb{I}_{\{-t_{0}-B<s<-t_{0}\} }\mathrm{d}s\)_, and \(v_{t_{0},B}(t)=\int_{-t_{0}}^{t}b_{t_{0},B}\mathrm{d}s-t_{0}\)._ Following the assumptions and notations in Theorem 1.2, we recall three properties about \(G(t)\). **Lemma 2.2** (see [22]).: _The following three statements are equivalent:_ _(1). \(f\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\);_ _(2). \(G(t)=0\) for some \(t\geq T\);_ _(3). \(G(t)=0\) for any \(t\geq T\)._ The following lemma shows the existence of minimal form. **Lemma 2.3** (see [22]).: _Assume that \(G(t)<+\infty\) for some \(t\in[T,+\infty)\). Then there exists a unique holomorphic \((n,0)\) form \(F_{t}\) on \(\{\psi<-t\}\) satisfying_ \[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)=G(t)\] _and \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\)._ _Furthermore, for any holomorphic \((n,0)\) form \(\hat{F}\) on \(\{\psi<-t\}\) satisfying_ \[\int_{\{\psi<-t\}}|\hat{F}|^{2}e^{-\varphi}c(-\psi)<+\infty\] _and \((\hat{F}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), we have the following equality,_ \[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)+\int_{\{\psi<-t\}}| \hat{F}-F_{t}|^{2}e^{-\varphi}c(-\psi)\] \[= \int_{\{\psi<-t\}}|\hat{F}|^{2}e^{-\varphi}c(-\psi).\] **Lemma 2.4** (see [22]).: \(G(t)\) _is decreasing with respect to \(t\in[T,+\infty)\). \(\lim_{t\to t_{0}+0}G(t)=G(t_{0})\) for any \(t_{0}\in[T,+\infty)\). And if \(G(t)<+\infty\) for some \(t>T\), then \(\lim_{t\to+\infty}G(t)=0\). Especially, \(G(t)\) is lower semicontinuous on \([T,+\infty)\)._ We also recall the following two lemmas. **Lemma 2.5** (see [14]).: _Let \(N\) be a submodule of \(\mathcal{O}^{q}_{\mathbb{C}^{n},o}\), \(q\in\mathbb{Z}_{+}\cup\{\infty\}\). Let \(\{f_{j}\}\subset\mathcal{O}_{\mathbb{C}^{n}}(U)^{q}\) be a sequence of \(q-\)tuples holomorphic functions in an open neighborhood \(U\) of the origin \(o\). Assume that \(\{f_{j}\}\) converges uniformly in \(U\) towards a \(q-\)tuples \(f\in\mathcal{O}^{q}_{\mathbb{C}^{n},o}\), and assume furthermore that all the germs \((f_{j},o)\) belong to \(N\). Then \((f,o)\in N\)._ **Lemma 2.6** (see [28]).: _Let \(M\) be a complex manifold. Let \(S\) be an analytic subset of \(M\). Let \(\{g_{j}\}_{j=1,2,\ldots}\) be a sequence of nonnegative Lebesgue measurable functions on \(M\), which satisfies that \(g_{j}\) are almost everywhere convergent to \(g\) on \(M\) when \(j\to+\infty\), where \(g\) is a nonnegative Lebesgue measurable function on \(M\). Assume that for any compact subset \(K\) of \(M\backslash S\), there exist \(s_{K}\in(0,+\infty)\) and \(C_{K}\in(0,+\infty)\) such that_ \[\int_{K}g_{j}^{-s_{K}}\mathrm{d}V_{M}\leq C_{K}\] _for any \(j\), where \(\mathrm{d}V_{M}\) is a continuous volume form on \(M\)._ _Let \(\{F_{j}\}_{j=1,2,\ldots}\) be a sequence of holomorphic \((n,0)\) form on \(M\). Assume that \(\liminf_{j\to+\infty}\int_{M}|F_{j}|^{2}g_{j}\leq C\), where \(C\) is a positive constant. Then there exists a subsequence \(\{F_{j_{l}}\}_{l=1,2,\ldots}\), which satisfies that \(\{F_{j_{l}}\}\) is uniformly convergent to a holomorphic \((n,0)\) form \(F\) on \(M\) on any compact subset of \(M\) when \(l\to+\infty\), such that_ \[\int_{M}|F|^{2}g\leq C.\] ### Some results on open Riemann surfaces Let \(\Omega\) be an open Riemann surface, which admits a nontrivial Green function \(G_{\Omega}\). Let \(z_{0}\in\Omega\). Let \(\psi\) be a subharmonic function on \(\Omega\) such that \((\psi-pG_{\Omega}(\cdot,z_{0}))(z_{0})>-\infty\), where \(p>0\) is a constant, and let \(\varphi\) be a Lebesgue measurable function on \(\Omega\) such that \(\varphi+\psi\) is subharmonic on \(\Omega\). Let \(f_{0}\) be a holomorphic \((1,0)\) form on a neighborhood of \(z_{0}\). Let \(c(t)\) be a function on \([0,+\infty)\), which satisfies that \(c(t)e^{-t}\) is decreasing, \(\int_{0}^{+\infty}c(t)e^{-t}dt<+\infty\) and \(e^{-\varphi}c(-\psi)\) has a positive lower bound on any compact subset of \(\Omega\backslash\{z_{0}\}\). Denote that \[G(t):=\inf\Bigg{\{}\int_{\{\psi<-t\}}|f|^{2}e^{-\varphi}c(-\psi): f\in H^{0}(\{\psi<-t\},\mathcal{O}(K_{\Omega}))\] \[\&\,(f-f_{0},z_{0})\in(\mathcal{O}(K_{\Omega})\otimes\mathcal{I} (\varphi+\psi))_{z_{0}}\Bigg{\}}\] for any \(t\geq 0\). By Theorem 1.2, \(G(h^{-1}(r))\) is concave with respect to \(r\in[0,\int_{0}^{+\infty}c(t)e^{-t}dt]\), where \(h(t)=\int_{t}^{+\infty}c(s)e^{-s}ds\) for any \(t\geq 0\). The following theorem give a characterization for \(G(h^{-1}(r))\) degenerating to linearity. **Theorem 2.7** ([28]).: _Assume that \(G(0)\in(0,+\infty)\). Then \(G(h^{-1}(r))\) is linear with respect to \(r\) on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\) if and only if the following statements hold:_ _(1) \(\varphi+\psi=2\log|g|+2G_{\Omega}(z,z_{0})+2u\) and \(ord_{z_{0}}(g)=ord_{z_{0}}(f_{0})\), where \(g\) is a holomorphic function on \(\Omega\) and \(u\) is a harmonic function on \(\Omega\);_ _(2) \(\psi=2pG_{\Omega}(z,z_{0})\) on \(\Omega\) for some \(p>0\);_ _(3) \(\chi_{-u}=\chi_{z_{0}}\)._ **Remark 2.8**.: _When the three statements in Theorem 2.7 hold,_ \[b_{0}gp_{*}(f_{u}df_{z_{0}})\] _is the unique holomorphic \((1,0)\) form \(F\) on \(\Omega\) such that \((F-f,z_{0})\in(\mathcal{O}(K_{\Omega}))_{z_{0}}\otimes\mathcal{I}(\varphi+ \psi)_{z_{0}}\) and \(G(t)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\geq 0\), where \(p:\Delta\to\Omega\) is the universal covering, \(f_{u}\) and are holomorphic functions on \(\Delta\) satisfying \(|f_{u}|=p^{*}(e^{u})\) and \(|f_{z_{0}}|=p^{*}\left(e^{G_{\Omega}(\cdot,z_{0})}\right)\), and \(b_{0}\) is a constant such that \(ord_{z_{0}}(b_{0}gp_{*}(f_{u}df_{z_{0}})-f_{0})>ord_{z_{0}}(f_{0})\)._ Let \(w\) be a local coordinate on a neighborhood \(V_{z_{0}}\) of \(z_{0}\in\Omega\) satisfying \(w(z_{0})=0\). Let us recall some properties about Green functions. **Lemma 2.9** (see [36], see also [41]).: \(G_{\Omega}(z,z_{0})=\sup_{v\in\Delta_{0}(z_{0})}v(z)\)_, where \(\Delta_{0}(z_{0})\) is the set of negative subharmonic functions \(v\) on \(\Omega\) satisfying that \(v-\log|w|\) has a locally finite upper bound near \(z_{0}\)._ **Lemma 2.10** (see [28]).: _For any open neighborhood \(U\) of \(z_{0}\), there exists \(t>0\) such that \(\{G_{\Omega}(z,z_{0})<-t\}\) is a relatively compact subset of \(U\)._ The following lemma will be used in the proof of Lemma 2.12. **Lemma 2.11**.: _For any \(a<0\), \(\{z\in\Omega:G_{\Omega}(z,z_{0})<a\}\) is a connected set._ Proof.: We prove Lemma 2.11 by contradiction: if not, there a connected component \(V_{1}\neq\emptyset\) of \(\{z\in\Omega:G_{\Omega}(z,z_{0})<a\}\) such that \(z_{0}\not\in V_{1}\). Denote that \(\{z\in\Omega:G_{\Omega}(z,z_{0})<a\}\backslash V_{1}=V_{2}\). Then there exists an open subset \(U_{1}\) of \(\Omega\), which satisfies that \[V_{1}\subset\overline{V}_{1}\subset U_{1}\ \ \text{and}\ \ V_{2}\in\Omega \backslash U_{1}.\] Set \[v(z)=\left\{\begin{array}{ll}\max\{G_{\Omega}(z,z_{0}),a\},&z\in U_{1}\\ G_{\Omega}(z,z_{0}),&z\in\Omega\backslash U_{1}\end{array}\right.\] on \(\Omega\). Note that \(v=G_{\Omega}(\cdot,z_{0})\) is subharmonic on \(\Omega\backslash\overline{V}_{1}\) and \(v=\max\{G_{\Omega}(\cdot,z_{0}),a\}\) is subharmonic on \(U_{1}\), then we know that \(v\) is subharmonic on \(\Omega\). Note that \[0>v\geq G_{\Omega}(\cdot,z_{0})\] on \(\Omega\). By Lemma 2.9, we have \(v=G_{\Omega}(\cdot,z_{0})\), which contradicts to \(G_{\Omega}(\cdot,z_{0})<a\) on \(V_{1}\). Thus, Lemma 2.11 holds. The following lemma will be used in the proof of Theorem 1.11. **Lemma 2.12**.: _Let \(a_{0}<0\), and assume that \(\{z\in\Omega:G_{\Omega}(z,z_{0})=a_{0}\}\Subset\Omega.\) Then for any open neighborhood \(U\) of \(\{z\in\Omega:G_{\Omega}(z,z_{0})=a_{0}\}\), there exists \(a_{1}>a_{0}\) such that \(\{z\in\Omega:a_{0}<G_{\Omega}(\cdot,z_{0})<a_{1}\}\Subset U\)._ Proof.: As \(\{z\in\Omega:G_{\Omega}(z,z_{0})<a_{0}\}\) is a connected set by Lemma 2.11 and \(\{z\in\Omega:G_{\Omega}(z,z_{0})=a_{0}\}\Subset\Omega\), without loss of generality, assume that \(\tilde{U}:=U\cup\{z\in\Omega:G_{\Omega}(z,z_{0})<a_{0}\}\) is connected and \(U\Subset\Omega\). Since \(\{z\in\Omega:G_{\Omega}(z,z_{0})=a_{0}\}\subset U\Subset\Omega\), there exists \(a_{1}>a_{0}\) satisfying \[[a_{0},a_{1}]\cap\{a<0:\exists z\in\partial U\,s.t.\,G_{\Omega}(z,z_{0})=a\}=\emptyset,\] which shows that \[\{z\in\Omega:a_{0}<G_{\Omega}(\cdot,z_{0})<a_{1}\}\cap U\Subset U.\] Then we have \[\overline{\tilde{U}\cap\{z\in\Omega:G_{\Omega}(z,z_{0})<a_{1}\}}\] \[= \{z\in\Omega:G_{\Omega}(z,z_{0})\leq a_{0}\}\cup\overline{U\cap \{z\in\Omega:a_{0}<G_{\Omega}(z,z_{0})<a_{1}\}}\] \[\subset \tilde{U}\] As \(\tilde{U}\) and \(\{z\in\Omega:G_{\Omega}(z,z_{0})<a_{1}\}\) are connected open sets, we have \(\{z\in\Omega:G_{\Omega}(z,z_{0})<a_{1}\}\subset\tilde{U}\). Thus, we get \(\{z\in\Omega:a_{0}<G_{\Omega}(\cdot,z_{0})<a_{1}\}=\{z\in\Omega:a_{0}<G_{ \Omega}(\cdot,z_{0})<a_{1}\}\cap U\Subset U\). We recall the coarea formula. **Lemma 2.13** (see [12]).: _Suppose that \(\Omega\) is an open set in \(\mathbb{R}^{n}\) and \(u\in C^{1}(\Omega)\). Then for any \(g\in L^{1}(\Omega)\),_ \[\int_{\Omega}g(x)|\bigtriangledown u(x)|dx=\int_{\mathbb{R}}\left(\int_{u^{-1 }(t)}g(x)dH_{n-1}(x)\right)dt,\] _where \(H_{n-1}\) is the \((n-1)\)-dimensional Hausdorff measure._ Assume that \(\Omega\subset\mathbb{C}\) is a domain bounded by finite analytic curves. Denote that \[s(t):=e^{2t}\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\] for \(t\geq 0\), where \(\lambda(A)\) is the Lebesgue measure for any measurable subset \(A\) of \(\mathbb{C}\). **Lemma 2.14** ([6]).: _The function \(s(t)\) is decreasing on \([0,+\infty)\)._ The following lemma gives a characterization for \(s(t)\) degenerating to a constant function. **Lemma 2.15**.: _If the function \(s(t)\) is a constant function on \([0+\infty)\), then \(\Omega\) is a disc._ Proof.: Firstly, we recall the proof of Lemma 2.14 in [6]. Denote that \(h(t)=\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\}).\)\(f(t):=\log s(t)=2t+\log h(t)\). Let \(-t\) be a regular value of \(G_{\Omega}(\cdot,z_{0})\), then we have \[f^{\prime}(t)=2+\frac{h^{\prime}(t)}{h(t)}.\] Lemma 2.13 shows that \[h(t)=\int_{-\infty}^{-t}\int_{\{G_{\Omega}(\cdot,z_{0})=s\}}\frac{d\sigma}{| \nabla G|}ds,\] which implies that \[h^{\prime}(t)=-\int_{\{G_{\Omega}(\cdot,z_{0})=-t\}}\frac{d\sigma}{|\nabla G|}.\] Using Cauchy-Schwarz inequality, we have \[-h^{\prime}(t)\geq\frac{(\sigma(\{G_{\Omega}(\cdot,z_{0})=-t\}))^{2}}{\int_{\{G _{\Omega}(\cdot,z_{0})=-t\}}|\nabla G|d\sigma}=\frac{(\sigma(\{G_{\Omega}( \cdot,z_{0})=-t\}))^{2}}{2\pi}.\] The isoperimetric inequality shows that \[(\sigma(\{G_{\Omega}(\cdot,z_{0})=-t\}))^{2}\geq 4\pi\lambda(\{G_{\Omega}( \cdot,z_{0})<-t\}), \tag{2.1}\] then we have \(f^{\prime}(t)\leq 0\). Now, we prove Lemma 2.15. If \(s(t)\) is a constant function, then \(f^{\prime}\equiv 0\), which implies that inequality (2.1) becomes an equality. Following from the characterization of the isoperimetric inequality becoming an equality, we know that \(\{G_{\Omega}(\cdot,z_{0})<-t\}\) is a disc for any regular value \(-t\) of \(G_{\Omega}(\cdot,z_{0})\). Thus, \(\Omega\) is a disc. ### Other useful lemmas In this section, we give some lemmas, which will be used in the proofs of the main theorems. Let \(\theta_{0}\in(0,\pi]\), and denote that \(L_{\theta_{0}}:=\{z\in\Delta:Arg(z)\in(0,\theta_{0})\}\), where \(Arg(z)=\theta\in(-\pi,\pi]\) for any \(z=re^{i\theta}\in\mathbb{C}\). **Lemma 2.16**.: _Let \(v\) be a subharmonic function on the unit disc \(\Delta\) satisfying that \(v=k\log|z|\) on \(L_{\theta_{0}}\), where \(k>0\) is a constant. Then \(v-k\log|z|\) is subharmonic on \(\Delta\)._ Proof.: It suffices to prove the Lelong number \(v(dd^{c}v,o)\geq k\), and we prove it by contradiction: if not, we have \(v(dd^{c}\tilde{v},o)=0\), where \(\tilde{v}:=v-v(dd^{c}v,o)\log|z|\) is a subharmonic function on \(\Delta\). Denote that \(C:=\sup_{z\in\Delta_{\frac{1}{2}}}\tilde{v}<+\infty\). We have \[v(dd^{c}\tilde{v},o) =\lim_{r\to 0}\frac{\frac{1}{2\pi}\int_{0}^{2\pi}\tilde{v}(re^{i \theta})d\theta}{\log r}\] \[\geq\lim_{r\to 0}\frac{C+\frac{1}{2\pi}\int_{0}^{\theta_{0}}(k-v (dd^{c}v,o))\log|r|d\theta}{\log r}\] \[=\frac{(k-v(dd^{c}v,o))\theta_{0}}{2\pi}>0,\] which contradicts to \(v(dd^{c}\tilde{v},o)=0\), hence \(v-k\log|z|\) is subharmonic on \(\Delta\). **Lemma 2.17**.: _Let \(v\) be a subharmonic function on the unit disc \(\Delta\) satisfying that \(v=0\) on \(L_{\theta_{0}}\). Then \(v(o)=0\)._ Proof.: As \(v\) is upper semicontinuous, \(v(o)\geq 0\). As \(v\) is subharmonic, it follows from the mean value inequality that \[v(o)\leq\frac{1}{2\pi}\int_{0}^{2\pi}v(re^{i\theta})d\theta=\frac{1}{2\pi}\int _{\theta_{0}}^{2\pi}v(re^{i\theta})d\theta\] holds for any \(r>0\). Then there is a point \(z_{r}\in\{|z|=r\}\) such that \[v(z_{r})\geq\frac{2\pi}{2\pi-\theta_{0}}v(o)\] for any \(r\). As \(v\) is upper semicontinuous, we have \[v(o)\geq\limsup_{r\to 0}v(z_{r})\geq\limsup_{r\to 0}\frac{2\pi}{2\pi-\theta_{0}}v(o) =\frac{2\pi}{2\pi-\theta_{0}}v(o),\] which shows that \(v(o)=0\). Let us give two lemmas on concave functions. **Lemma 2.18**.: _Suppose \(x(t):\mathbb{R}^{+}\to\mathbb{R}^{+}\) is a strictly decreasing function. If \(-\log x(t)\) is convex, and \(\log x(t)+t\) is lower bounded on \(\mathbb{R}^{+}\), then \(x(-\log r)\) is concave with respect to \(r\in(0,1)\)._ Proof.: We may assume \(x(t)\in C^{2}(0,+\infty)\), otherwise we use the approximation method. Then \(x^{\prime}(t)<0\). By that \(-\log x(t)\) is convex, direct computation gives \[x^{\prime\prime}(t)x(t)-(x^{\prime}(t))^{2}\leq 0. \tag{2.2}\] Since \(-\log x(t)\) is convex and \(\log x(t)+t\) is lower bounded, we have that \(\log x(t)+t\) is increasing on \(\mathbb{R}^{+}\), which implies that \[x^{\prime}(t)+x(t)\geq 0. \tag{2.3}\] Combining (2.2) with (2.3), we get \(x^{\prime\prime}(t)+x^{\prime}(t)\leq 0\), yielding that \(x(-\log r)\) is concave with respect to \(r\in(0,1)\). **Lemma 2.19**.: _Let \(g(r)\) be a concave function on \([0,1]\), which is strictly increasing on \([0,1]\) and satisfies \(g(0)=0\). If there exists \(r_{0}\in(0,1)\) such that \(g(r)=ar+b\) on \([r_{0},1]\) and \(g(r)\not\equiv ar\) on \([0,1]\), where \(a,b\in\mathbb{R}\), then \(h(t):=-\log g(e^{-t})\) is not a convex function on \([0,+\infty)\)._ Proof.: It is clear that \(h(t)=-\log(ae^{-t}+b)\) on \([0,-\log r_{0}]\). Then we have \[h^{\prime\prime}=\frac{-abe^{t}}{(a+be^{t})^{2}} \tag{2.4}\] on \([0,-\log r_{0}]\). As \(g(r)\) is strictly increasing and concave on \([0,1]\) and \(g(r)\not\equiv ar\) on \([0,1]\), then \(a>0\) and \(b>0\). Inequality (2.4) shows that \(h^{\prime\prime}<0\) on \([0,-\log r_{0}]\). Thus, \(h(t)\) is not a convex function on \([0,+\infty)\). ## 3. Proofs of Theorem 1.5, Remark 1.6 and Theorem 1.7 In this section, we prove Theorem 1.5, Remark 1.6 and Theorem 1.7. Proof of Theorem 1.5.: Firstly, we prove that there exists a unique holomorphic \((n,0)\) form \(F\) on \(\{\psi<-T_{1}\}\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \(G(t;c)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\in[T_{1},T_{2}]\). According to the assumptions and Lemma 2.2, we can assume that \(G(T_{1})\in(0,+\infty)\). For any \(t\in[T_{1},T_{2}]\), by Lemma 2.3, there exists a unique holomorphic \((n,0)\) form \(F_{t}\) on \(\{\psi<-t\}\) such that \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) and \[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)=G(t).\] And by Lemma 2.1, for any \(t\in[T_{1},T_{2})\) and \(B_{j}>0\), there exists a holomorphic \((n,0)\) form \(\tilde{F}_{j}\) on \(\{\psi<-T_{1}\}\) such that \(H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) and \[\begin{split}&\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}-(1-b_{t,B_{j}}( \psi))F_{t}|^{2}e^{-\varphi-\psi+v_{t,B_{j}}(\psi)}c(-v_{t,B_{j}}(\psi))\\ \leq&\int_{T_{1}}^{t+B_{j}}c(s)e^{-s}\mathrm{d}s\int _{\{\psi<-T_{1}\}}\frac{1}{B_{j}}\mathbb{I}_{\{-t-B_{j}<\psi<-t\}}|F_{t}|^{2}e ^{-\varphi-\psi}.\end{split} \tag{3.1}\] We denote \(v_{t,B_{j}}\) by \(v_{j}\). As \(s\leq v_{j}(s)\) and \(c(s)e^{-s}\) is decreasing, we have \[e^{-\psi+v_{j}(\psi)}c(-v_{j}(\psi))\geq c(-\psi).\] Combining with (3.1), we obtain \[\begin{split}&\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}-(1-b_{t,B_{j}}( \psi))F_{t}|^{2}e^{-\varphi}c(-\psi)\\ \leq&\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}-(1-b_{t,B_ {j}}(\psi))F_{t}|^{2}e^{-\varphi-\psi+v_{j}(\psi)}c(-v_{j}(\psi))\\ \leq&\int_{T_{1}}^{t+B_{j}}c(s)e^{-s}\mathrm{d}s \int_{\{\psi<-T_{1}\}}\frac{1}{B_{j}}\mathbb{I}_{\{-t-B_{j}<\psi<-t\}}|F_{t}|^ {2}e^{-\varphi-\psi}\\ \leq&\frac{e^{t+B_{j}}\int_{T_{1}}^{t+B_{j}}c(s)e^{ -s}\mathrm{d}s}{\inf_{s\in(t,t+B_{j})}c(s)}\int_{\{\psi<-T_{1}\}}\frac{1}{B_{j }}\mathbb{I}_{\{-t-B_{j}<\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)\\ \leq&\frac{e^{t+B_{j}}\int_{T_{1}}^{t+B_{j}}c(s)e^{ -s}\mathrm{d}s}{\inf_{s\in(t,t+B_{j})}c(s)}\left(\int_{\{\psi<-t\}}\frac{1}{B _{j}}|F_{t}|^{2}e^{-\varphi}c(-\psi)-\int_{\{\psi<-t-B_{j}\}}\frac{1}{B_{j}}|F_ {t}|^{2}e^{-\varphi}c(-\psi)\right)\\ \leq&\frac{e^{t+B_{j}}\int_{T_{1}}^{t+B_{j}}c(s)e^{ -s}\mathrm{d}s}{\inf_{s\in(t,t+B_{j})}c(s)}\cdot\frac{G(t)-G(t+B_{j})}{B_{j}}. \end{split} \tag{3.2}\] We may assume \(B_{j}\in(0,T_{2}-t)\) and \(\lim\limits_{j\to+\infty}B_{j}=0\). Then by the partially linear assumption of \(G(h^{-1}(r))\), (3.2) implies that there is a nonnegative constant \(\kappa\) such that \[\begin{split}&\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}-(1-b_{t,B_{j}}( \psi))F_{t}|^{2}e^{-\varphi}c(-\psi)\\ \leq&\frac{e^{t+B_{j}}\int_{T_{1}}^{t+B_{j}}c(s)e^{-s }\mathrm{d}s}{\inf_{s\in(t,t+B_{j})}c(s)}\cdot\frac{\kappa\int_{t}^{t+B_{j}}c (s)e^{-s}\mathrm{d}s}{B_{j}}.\end{split} \tag{3.3}\] Here \(\kappa\) is the slope of the linear function \(G(h^{-1}(r))|_{[h(T_{2}),h(T_{1})]}\). We prove that \(\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}|^{2}e^{-\varphi}c(-\psi)\) is uniformly bounded with respect to \(j\). Note that \[\left(\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}-(1-b_{t,B_{j}}(\psi))F_ {t}|^{2}e^{-\varphi}c(-\psi)\right)^{1/2}\] \[\geq \left(\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}|^{2}e^{-\varphi}c(-\psi )\right)^{1/2}-\left(\int_{\{\psi<-T_{1}\}}|(1-b_{t,B_{j}}(\psi))F_{t}|^{2}e^ {-\varphi}c(-\psi)\right)^{1/2},\] then it follows from (3.3) that \[\left(\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}|^{2}e^{-\varphi}c(-\psi) \right)^{1/2}\] \[\leq \left(\frac{e^{t+B_{j}}\int_{T_{1}}^{t+B_{j}}c(s)e^{-s}\mathrm{d}s} {\inf_{s\in(t,t+B_{j})}c(s)}\cdot\frac{\kappa\int_{t}^{t+B_{j}}c(s)e^{-s} \mathrm{d}s}{B_{j}}\right)^{1/2}\] \[+\left(\int_{\{\psi<-T_{1}\}}|(1-b_{t,B_{j}}(\psi))F_{t}|^{2}e^{- \varphi}c(-\psi)\right)^{1/2}.\] Since \(0\leq b_{t,B_{j}}(\psi)\leq 1\), \(\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j}|^{2}e^{-\varphi}c(-\psi)\) is uniformly bounded for any \(j\). Then it follows from Lemma 2.6 that there is a subsequence of \(\{\tilde{F}_{j}\}\), denoted by \(\{\tilde{F}_{j_{k}}\}\), which is uniformly convergent to a holomorphic \((n,0)\) form \(\tilde{F}\) on \(\{\psi<-T_{1}\}\) on any compact subset of \(\{\psi<-T_{1}\}\). Then by Fatou's Lemma, \[\int_{\{\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\leq\liminf_{k\to+ \infty}\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j_{k}}|^{2}e^{-\varphi}c(-\psi)<+\infty.\] As \((\tilde{F}_{j}-F_{t})\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{ Z_{0}})\) for any \(j\), Lemma 2.5 implies that \((\tilde{F}-F_{t})\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\). With direct calculations, we have \[\lim_{j\to+\infty}b_{t,B_{j}}(s)=\left\{\begin{array}{ll}0,&s\in(-\infty,-t) \\ 1,&s\in[-t,+\infty)\end{array}\right.,\] and \[\lim_{j\to+\infty}v_{t,B_{j}}(s)=\left\{\begin{array}{ll}-t,&s\in(-\infty,- t)\\ s,&s\in[-t,+\infty)\end{array}\right..\] Then it follows from (3.3) and Fatou's Lemma that \[\int_{\{\psi<-t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi-\psi-t}c(t)+ \int_{\{-t\leq\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] \[= \int_{\{\psi<-T_{1}\}}\lim_{k\to+\infty}|\tilde{F}_{j_{k}}-(1-b_{ t,B_{j_{k}}}(\psi))F_{t}|^{2}e^{-\varphi-\psi+v_{t,B_{j_{k}}}(\psi)}c(-v_{t,B_{j_{k}}} (\psi))\] \[\leq \liminf_{k\to+\infty}\int_{\{\psi<-T_{1}\}}|\tilde{F}_{j_{k}}-(1 -b_{t,B_{j_{k}}}(\psi))F_{t}|^{2}e^{-\varphi-\psi+v_{t,B_{j_{k}}}(\psi)}c(-v_{ t,B_{j_{k}}}(\psi))\] \[\leq \liminf_{k\to+\infty}\left(\frac{e^{t+B_{j_{k}}}\int_{T_{1}}^{t+ B_{j_{k}}}c(s)e^{-s}\mathrm{d}s}{\inf_{s\in(t,t+B_{j_{k}})}c(s)}\cdot\frac{ \kappa\int_{t}^{t+B_{j_{k}}}c(s)e^{-s}\mathrm{d}s}{B_{j_{k}}}\right)\] \[= \kappa\int_{T_{1}}^{t}c(s)e^{-s}\mathrm{d}s.\] It follows from the choice of \(F_{t}\), and Lemma 2.3 that \[\begin{split}&\int_{\{\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(- \psi)\\ =&\int_{\{\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(- \psi)+\int_{\{-t\leq\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\\ =&\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi) +\int_{\{\psi<-t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi}c(-\psi)\\ &+\int_{\{-t\leq\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(- \psi).\end{split} \tag{3.4}\] As \(c(t)e^{-t}\) is decreasing, we have \(e^{\psi}c(-\psi)\leq e^{-t}c(t)\) on \(\{\psi<-t\}\) and \[\int_{\{\psi<-t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi}c(-\psi)\leq\int_{\{\psi<- t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi-\psi-t}c(t). \tag{3.5}\] Combining (3.5) with (3.4), we get \[\begin{split}&\int_{\{\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(- \psi)\\ =&\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi) +\int_{\{\psi<-t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi}c(-\psi)\\ &+\int_{\{-t\leq\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(- \psi)\\ \leq&\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(- \psi)+\int_{\{\psi<-t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi-\psi-t}c(-t)\\ &+\int_{\{-t\leq\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi )\\ \leq& G(t)+\kappa\int_{T_{1}}^{t}c(s)e^{-s}{\rm d}s \\ =& G(T_{1}).\end{split} \tag{3.6}\] Since \((\tilde{F}-F_{t})\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_ {0}})\), by Lemma 2.3, \(\tilde{F}\) is exactly the unique holomorphic \((n,0)\) form such that \((\tilde{F}-F_{t})\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_ {0}})\) and \[\int_{\{\psi<-T_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)=G(T_{1}).\] Or we can say \(\tilde{F}=F_{T_{1}}\). Now we prove that \(\tilde{F}=F_{t}\) on \(\{\psi<-t\}\). By the above discussions, the inequality (3.6) should be equality. Then the inequality (3.5) should also be equality. It means that \[\int_{\{\psi<-t\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi-\psi}(e^{-t}c(t)-e^{\psi}c (-\psi))=0. \tag{3.7}\] Note that \(\int_{T_{1}}^{+\infty}c(t)e^{-t}{\rm d}t<+\infty\). Thus we can find some \(t^{\prime}>t\) and \(\varepsilon>0\) such that \[e^{-t}c(t)-e^{\psi}c(-\psi)>\varepsilon>0,\ \forall z\in\{\psi<-t^{\prime}\}.\] Then (3.7) implies that \[\varepsilon\int_{\{\psi<-t^{\prime}\}}|\tilde{F}-F_{t}|^{2}e^{-\varphi-\psi}=0.\] Since \(\tilde{F}\) and \(F_{t}\) are holomorphic \((n,0)\) forms on \(\{\psi<-t\}\), we have \(\tilde{F}=F_{t}\), i.e. \(F_{T_{1}}=F_{t}\) on \(\{\psi<-t\}\). This is true for any \(t\in[T_{1},T_{2}]\). Finally, according to the Lebesgue dominated convergence theorem and the assumptions, we have \[\int_{\{\psi<T_{2}\}}|F_{T_{1}}|^{2}e^{-\varphi}c(-\psi)=\lim_{t\to T_{2}-0}G(t )=G(T_{2}).\] From this we get \(F_{T_{1}}=F_{T_{2}}\) on \(\{\psi<-T_{2}\}\). Then we have proved that \(F_{T_{1}}\) is exactly the unique holomorphic \((n,0)\) form \(F\) on \(\{\psi<-T_{1}\}\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \(G(t;c)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\in[T_{1},T_{2}]\). Now we prove the rest result of Theorem 1.5. As \(a(t)\) is a nonnegative measurable function on \([T_{1},T_{2}]\), we can find a sequence of functions \(\{\sum_{j=1}^{n_{i}}a_{ij}\mathbb{I}_{E_{i}j}\}_{i\in\mathbb{N}_{+}}\) (\(n_{i}<+\infty\) for any \(i\in\mathbb{N}_{+}\)) satisfying that the sequence is increasingly convergent to \(a(t)\) for a.e. \(t\in[T_{1},T_{2}]\), where \(E_{ij}\) is a Lebesgue measurable subset of \([T_{1},T_{2}]\) and \(a_{ij}>0\) is a constant for any \(i,j\). It follows from Levi's Theorem that we only need to prove the equality (1.1) under the assumption \(a(t)=\mathbb{I}_{E}(t)\), where \(E\) is a Lebesgue measurable subset of \([T_{1},T_{2}]\). Note that \[G(t)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)=G(T_{2})+\frac{G(T_{1})-G( T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{t}^{T_{2}}c(s)e^{-s} \mathrm{d}s\] for any \(t\in[T_{1},T_{2}]\). Then \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi)=\frac{G(T_{1})-G( T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{t_{1}}^{t_{2}}c(s)e^{-s} \mathrm{d}s \tag{3.8}\] holds for any \(T_{1}\leq t_{1}<t_{2}\leq T_{2}\). Then for any Lebesgue zero measure subset \(N\) of \([T_{1},T_{2}]\), we get \[\int_{\{-\psi(z)\in N\}}|F|^{2}e^{-\varphi}=0 \tag{3.9}\] from Lebesgue dominated convergence theorem. Since \(c(t)e^{-t}\) is decreasing on \([T_{1},T_{2}]\), we can find a countable subsets \(\{s_{j}\}_{j\in\mathbb{N}_{+}}\subset[T_{1},T_{2}]\) such that \(c(t)\) is continuous besides \(\{s_{j}\}\). It means that there exists a sequence of open sets \(\{U_{k}\}\) such that \(\{s_{j}\}\subset U_{k}\subset[T_{1},T_{2}]\) and \(\lim_{k\to+\infty}\mu(U_{k})=0\), where \(\mu\) is the Lebesgue measure on \(\mathbb{R}\). Then for any \([t^{\prime}_{1},t^{\prime}_{2}]\subset[T_{1},T_{2}]\), we have \[\int_{\{-t^{\prime}_{2}\leq\psi<-t^{\prime}_{1}\}}|F|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\cup U_{k}\}}|F |^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\cup U_{k}\}} |F|^{2}e^{-\varphi}\] \[= \lim_{n\to+\infty}\sum_{i=1}^{n-1}\int_{\{-\psi(z)\in I_{i,n} \setminus U_{k}\}}|F|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{ \prime}_{2}]\cup U_{k}\}}|F|^{2}e^{-\varphi},\] where \(I_{i,n}=(t_{1}^{\prime}+ia_{n},t_{1}^{\prime}+(i+1)a_{n}]\), and \(a_{n}=(t_{2}^{\prime}-t_{1}^{\prime})/n\). Using equality (3.8), we have \[\lim_{n\to+\infty}\sum_{i=1}^{n-1}\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k} \}}|F|^{2}e^{-\varphi}\] \[\leq \limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n} \setminus U_{k}}c(t)}\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F|^{2}e^{- \varphi}c(-\psi)\] \[\leq \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d} s}\limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n}\setminus U_{k}}c(t) }\int_{I_{i,n}\setminus U_{k}}c(s)e^{-s}\mathrm{d}s.\] In addition, according to the choice of \(U_{k}\), we have \[\limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n} \setminus U_{k}}c(t)}\int_{I_{i,n}\setminus U_{k}}c(s)e^{-s}\mathrm{d}s\] \[\leq \limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{\sup_{I_{i,n}\setminus U _{k}}c(t)}{\inf_{I_{i,n}\setminus U_{k}}c(t)}\int_{I_{i,n}\setminus U_{k}}c(s )e^{-s}\mathrm{d}s\] \[= \int_{(t_{1}^{\prime},t_{2}]\setminus U_{k}}e^{-s}\mathrm{d}s.\] Now we can obtain that \[\int_{\{-t_{2}^{\prime}\leq\psi<-t_{1}^{\prime}\}}|F|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in(t_{1}^{\prime},t_{2}^{\prime}]\setminus U_{k }\}}|F|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t_{1}^{\prime},t_{2}^{\prime}] \cup U_{k}\}}|F|^{2}e^{-\varphi}\] \[\leq \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d }s}\int_{(t_{1}^{\prime},t_{2}]\setminus U_{k}}e^{-s}\mathrm{d}s+\int_{\{- \psi(z)\in(t_{1}^{\prime},t_{2}^{\prime}]\cup U_{k}\}}|F|^{2}e^{-\varphi}.\] Let \(k\to+\infty\), it follows from equality (3.9) that \[\int_{\{-t_{2}^{\prime}\leq\psi<-t_{1}^{\prime}\}}|F|^{2}e^{-\varphi}\leq \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{t_{1} ^{\prime}}^{t_{2}^{\prime}}e^{-s}\mathrm{d}s.\] Using the same methods, we can also get that \[\int_{\{-t_{2}^{\prime}\leq\psi<-t_{1}^{\prime}\}}|F|^{2}e^{-\varphi}\geq \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{t_{1} ^{\prime}}^{t_{2}^{\prime}}e^{-s}\mathrm{d}s.\] Then we know that \[\int_{\{-t_{2}^{\prime}\leq\psi<-t_{1}^{\prime}\}}|F|^{2}e^{-\varphi}=\frac{G( T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{t_{1}^{\prime}}^{t _{2}^{\prime}}e^{-s}\mathrm{d}s.\] Thus for any open subset \(U\subset[T_{1},T_{2}]\) and any compact subset \(K\subset[T_{1},T_{2}]\), we have \[\int_{\{-\psi(z)\in U\}}|F|^{2}e^{-\varphi}=\frac{G(T_{1})-G(T_{2})}{\int_{T_{1 }}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{U}e^{-s}\mathrm{d}s,\] and \[\int_{\{-\psi(z)\in K\}}|F|^{2}e^{-\varphi}=\frac{G(T_{1})-G(T_{2})}{\int_{T_{1 }}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\int_{K}e^{-s}\mathrm{d}s.\] Then for the Lebesgue measurable subset \(E\subset[T_{1},T_{2}]\), and \(t_{1},t_{2}\in[T_{1},T_{2}]\) with \(t_{1}<t_{2}\), we can find a sequence of compact sets \(\{K_{j}\}\) and a sequence of open sets \(\{U_{j}\}\) such that \[K_{1}\subset\ldots\subset K_{j}\subset K_{j+1}\subset\ldots E\cap(t_{1},t_{2}] \subset\ldots\subset U_{j+1}\subset U_{j}\subset\ldots\subset U_{1}\subset[ T_{1},T_{2}],\] and \(\lim_{j\to+\infty}\mu(U_{j}\setminus K_{j})=0\). Then we have \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}\mathbb{I}_{E} (-\psi)\] \[= \int_{\{-\psi\in E\cap(t_{1},t_{2}]\}}|F|^{2}e^{-\varphi}\] \[\leq \liminf_{j\to+\infty}\int_{\{-\psi\in U_{j}\cap(t_{1},t_{2}]\}} |F|^{2}e^{-\varphi}\] \[\leq \liminf_{j\to+\infty}\frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2 }}c(s)e^{-s}\mathrm{d}s}\int_{\{-\psi\in U_{j}\cap(t_{1},t_{2}]\}}e^{-s} \mathrm{d}s\] \[\leq \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d} s}\int_{\{-\psi\in E\cap(t_{1},t_{2}]\}}e^{-s}\mathrm{d}s\] \[= \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d} s}\int_{t_{1}}^{t_{2}}e^{-s}\mathbb{I}_{E}\mathrm{d}s,\] and \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}\mathbb{I}_{E} (-\psi)\] \[= \int_{\{-\psi\in E\cap(t_{1},t_{2}]\}}|F|^{2}e^{-\varphi}\] \[\geq \liminf_{j\to+\infty}\int_{\{-\psi\in K_{j}\cap(t_{1},t_{2}]\}} |F|^{2}e^{-\varphi}\] \[\geq \liminf_{j\to+\infty}\frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2 }}c(s)e^{-s}\mathrm{d}s}\int_{\{-\psi\in K_{j}\cap(t_{1},t_{2}]\}}e^{-s} \mathrm{d}s\] \[\geq \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d} s}\int_{\{-\psi\in E\cap(t_{1},t_{2}]\}}e^{-s}\mathrm{d}s\] \[= \frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d} s}\int_{t_{1}}^{t_{2}}e^{-s}\mathbb{I}_{E}\mathrm{d}s.\] It means that \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}\mathbb{I}_{E} (-\psi)=\frac{G(T_{1})-G(T_{2})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s} \int_{t_{1}}^{t_{2}}e^{-s}\mathbb{I}_{E}\mathrm{d}s,\] which implies that equality (1.1) holds. Now we give the proof of Remark 1.6. Proof of Remark 1.6.: Firstly, we have \(G(t;\tilde{c})\leq\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi)\), then we may assume that \(G(t;\tilde{c})<+\infty\). Secondly, by Lemma 2.3, for any \(t\in[T_{1},T_{2}]\), there exists a unique holomorphic \((n,0)\) form \(F_{t}\) on \(\{\psi<-t\}\) satisfying \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}\tilde{c}(-\psi)=G(t;\tilde{c}).\] According to Lemma 2.3 and \(F_{t}\in\mathcal{H}(\tilde{c},t)\subset\mathcal{H}(c,t)\), we have \[\int_{\{\psi<-t^{\prime}\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)\] \[= \int_{\{\psi<-t^{\prime}\}}|F|^{2}e^{-\varphi}c(-\psi)+\int_{\{ \psi<-t^{\prime}\}}|F_{t}-F|^{2}e^{-\varphi}c(-\psi)\] for any \(t^{\prime}\in[t,T_{2}]\). Then for any \(t_{1},t_{2}\in[t,T_{2}]\) with \(t_{1}<t_{2}\), \[\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F_{t}|^{2}e^{-\varphi}c(-\psi) \tag{3.10}\] \[= \int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi)+\int_ {\{-t_{2}\leq\psi<-t_{1}\}}|F_{t}-F|^{2}e^{-\varphi}c(-\psi).\] Thus for any Lebesgue zero measure subset \(N\) of \((t,T_{2}]\), it follows from equality (3.10), equality (3.9) and Lebesgue dominated convergence theorem that \[\int_{\{-\psi(z)\in N\}}|F_{t}|^{2}e^{-\varphi}=\int_{\{-\psi(z)\in N\}}|F_{t} -F|^{2}e^{-\varphi}. \tag{3.11}\] Since \(c(t)e^{-t}\) is decreasing on \([T_{1},T_{2}]\), we can find a countable subsets \(\{s_{j}\}_{j\in\mathbb{N}_{+}}\subset(t,T_{2}]\) such that \(c(t)\) is continuous besides \(\{s_{j}\}\). It means that there exists a sequence of open sets \(\{U_{k}\}\) such that \(\{s_{j}\}\subset U_{k}\subset[T_{1},T_{2}]\) and \(\lim\limits_{k\to+\infty}\mu(U_{k})=0\), where \(\mu\) is the Lebesgue measure on \(\mathbb{R}\). Then for any \((t^{\prime}_{1},t^{\prime}_{2}]\subset(t,T_{2}]\), we have \[\int_{\{-t^{\prime}_{2}\leq\psi<-t^{\prime}_{1}\}}|F_{t}|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\setminus U_{k }\}}|F_{t}|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}] \cup U_{k}\}}|F_{t}|^{2}e^{-\varphi}\] \[= \lim_{n\to+\infty}\sum_{i=1}^{n-1}\int_{\{-\psi(z)\in I_{i,n} \setminus U_{k}\}}|F_{t}|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1}, t^{\prime}_{2}]\cup U_{k}\}}|F_{t}|^{2}e^{-\varphi},\] where \(I_{i,n}=(t^{\prime}_{1}+ia_{n},t^{\prime}_{1}+(i+1)a_{n}]\), and \(a_{n}=(t^{\prime}_{2}-t^{\prime}_{1})/n\). Using equality (3.10), we have \[\lim_{n\to+\infty}\sum_{i=1}^{n-1}\int_{\{-\psi(z)\in I_{i,n} \setminus U_{k}\}}|F_{t}|^{2}e^{-\varphi}\] \[\leq \limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n} \setminus U_{k}}c(t)}\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F_{t}|^{2 }e^{-\varphi}c(-\psi)\] \[= \limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n} \setminus U_{k}}c(t)}(\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F|^{2}e^{- \varphi}c(-\psi)\] \[+\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F_{t}-F|^{2}e^{- \varphi}c(-\psi)).\] In addition, according to the choice of \(U_{k}\), we have \[\limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n}\setminus U _{k}}c(t)}\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F_{t}|^{2}e^{-\varphi}\] \[\leq \limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{1}{\inf_{I_{i,n} \setminus U_{k}}c(t)}(\int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F|^{2}e^{- \varphi}c(-\psi)\] \[+ \int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F_{t}-F|^{2}e^{- \varphi}c(-\psi))\] \[\leq \limsup_{n\to+\infty}\sum_{i=1}^{n-1}\frac{\sup_{I_{i,n}\setminus U _{k}}c(t)}{\inf_{I_{i,n}\setminus U_{k}}c(t)}(\int_{\{-\psi(z)\in I_{i,n} \setminus U_{k}\}}|F|^{2}e^{-\varphi}\] \[+ \int_{\{-\psi(z)\in I_{i,n}\setminus U_{k}\}}|F_{t}-F|^{2}e^{- \varphi})\] \[=\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\setminus U_{ k}\}}|F|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}] \setminus U_{k}\}}|F_{t}-F|^{2}e^{-\varphi}.\] Now we can obtain that \[\int_{\{-t^{\prime}_{2}\leq\psi<-t^{\prime}_{1}\}}|F_{t}|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\setminus U_{k} \}}|F_{t}|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}] \cup U_{k}\}}|F_{t}|^{2}e^{-\varphi}\] \[\leq \int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\setminus U_{k} \}}|F|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}] \setminus U_{k}\}}|F_{t}-F|^{2}e^{-\varphi}\] \[+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\cup U_{k}\}}| F_{t}|^{2}e^{-\varphi}.\] Let \(k\to+\infty\), then it follows from equality (3.11) that \[\int_{\{-t^{\prime}_{2}\leq\psi<-t^{\prime}_{1}\}}|F|^{2}e^{-\varphi}\] \[\leq \int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\}}|F|^{2}e^{- \varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\setminus N\}}|F_{ t}-F|^{2}e^{-\varphi}\] \[+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\cup N\}}|F_{t} |^{2}e^{-\varphi},\] where \(N=\bigcap_{k=1}^{+\infty}U_{k}\) and \(\mu(N)=0\). Using the same methods, we can also get that \[\int_{\{-t^{\prime}_{2}\leq\psi<-t^{\prime}_{1}\}}|F|^{2}e^{-\varphi}\] \[\geq \int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\}}|F|^{2}e^{- \varphi}+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\setminus N\}}|F_{ t}-F|^{2}e^{-\varphi}\] \[+\int_{\{-\psi(z)\in(t^{\prime}_{1},t^{\prime}_{2}]\cup N\}}|F_{t} |^{2}e^{-\varphi}.\] Then we know that \[\int_{\{-t_{2}^{\prime}\leq\psi<-t_{1}^{\prime}\}}|F|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in(t_{1}^{\prime},t_{2}^{\prime}]\}}|F|^{2}e^{- \varphi}+\int_{\{-\psi(z)\in(t_{1}^{\prime},t_{2}^{\prime}]\setminus N\}}|F_{t}- F|^{2}e^{-\varphi}\] \[+\int_{\{-\psi(z)\in(t_{1}^{\prime},t_{2}^{\prime}]\cup N\}}|F_{t} |^{2}e^{-\varphi}.\] Thus for any open subset \(U\subset(t,T_{2}]\) and any compact subset \(K\subset(t,T_{2}]\), we have \[\int_{\{-\psi(z)\in U\}}|F|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in U\}}|F|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in U \setminus N\}}|F_{t}-F|^{2}e^{-\varphi}\] \[+\int_{\{-\psi(z)\in U\cup N\}}|F_{t}|^{2}e^{-\varphi},\] and \[\int_{\{-\psi(z)\in K\}}|F|^{2}e^{-\varphi}\] \[= \int_{\{-\psi(z)\in K\}}|F|^{2}e^{-\varphi}+\int_{\{-\psi(z)\in K \setminus N\}}|F_{t}-F|^{2}e^{-\varphi}\] \[+\int_{\{-\psi(z)\in K\cup N\}}|F_{t}|^{2}e^{-\varphi}.\] Then for the Lebesgue measurable subset \(E\subset[T_{1},T_{2}]\), we can find a sequence of compact sets \(\{K_{j}\}\) such that \[K_{1}\subset\ldots\subset K_{j}\subset K_{j+1}\subset\ldots E\subset[T_{1},T_{ 2}],\] and \(\lim_{j\to+\infty}\mu(E\setminus K_{j})=0\). Thus we have \[\int_{\{-T_{2}\leq\psi<-t\}}|F_{t}|^{2}e^{-\varphi}\mathbb{I}_{E} (-\psi)\] \[\geq \lim_{j\to+\infty}\int_{\{-T_{2}\leq\psi<-t\}}|F_{t}|^{2}e^{- \varphi}\mathbb{I}_{K_{j}}(-\psi)\] \[\geq \lim_{j\to+\infty}\int_{\{-T_{2}\leq\psi<-t\}}|F|^{2}e^{-\varphi} \mathbb{I}_{K_{j}}(-\psi)\] \[= \int_{\{-T_{2}\leq\psi<-t\}}|F|^{2}e^{-\varphi}\mathbb{I}_{E}(- \psi).\] For the measurable function \(\tilde{c}(s)\), we can find an increasing sequence of simple functions \(\{\sum_{j=1}^{n_{i}}a_{ij}\mathbb{I}_{E_{ij}}\}_{i=1}^{+\infty}\) on \((t,T_{2}]\), such that \(\lim_{i\to+\infty}\sum_{j=1}^{n_{i}}a_{ij}\mathbb{I}_{E_{ij}}(s)=\tilde{c}(s)\) for a.e. \(s\in(t,T_{2}]\). Then we can get that \[\int_{\{-T_{2}\leq\psi<-t\}}|F_{t}|^{2}e^{-\varphi}\tilde{c}(-\psi)\geq\int_{ \{-T_{2}\leq\psi<-t\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi). \tag{3.12}\] Combining inequality (3.12) with the assumption that \[\int_{\{\psi<-T_{2}\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi)=G(t;\tilde{c}),\] which means that \[\int_{\{\psi<-T_{2}\}}|F_{t}|^{2}e^{-\varphi}\tilde{c}(-\psi)\geq\int_{\{\psi<-T _{2}\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi),\] we get that \[G(t;\tilde{c})=\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}\tilde{c}(-\psi)\geq \int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi).\] Then it follows from \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) that \(F_{t}=F|_{\{\psi<-t\}}\) and \(G(t;\tilde{c})=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}\tilde{c}(-\psi)\). The other results are clear after these. We give the proof of Theorem 1.7. Proof of Theorem 1.7.: According to Remark 1.6 and the statement (4) in Theorem 1.7, we get that \(G(h^{-1}(r);\tilde{\varphi},\tilde{\psi},c)\) is linear respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\), and \[G(t;,\tilde{\varphi},\tilde{\psi},c)=\int_{\{\tilde{\psi}<-t\}}|\tilde{F}|^{2} e^{-\tilde{\varphi}}c(-\tilde{\psi}),\ \forall t\in[T_{1},T_{2}]. \tag{3.13}\] By the statement (3) in Theorem 1.7, there is \((\tilde{F}-f)\in H^{0}(\tilde{Z}_{0},(\mathcal{O}(K_{M})\otimes\mathcal{I}( \tilde{\varphi}+\tilde{\psi}))|_{Z_{0}})\subset H^{0}(Z_{0},(\mathcal{O}(K_{M} )\otimes\mathcal{I}(\varphi+\psi))|_{Z_{0}})\subset H^{0}(Z_{0},(\mathcal{O}( K_{M})\otimes\mathcal{F})|_{Z_{0}})\). Then for any \(t\in[T_{1},T_{2}]\), and any holomorphic \((n,0)\) form \(F_{t}\) on \(\{\psi<-t\}\) satisfying \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), on the one hand, equality (1.2) and Lemma 2.3 shows that \[\int_{\{\psi<-T_{2}\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)\geq G(T_{2};\varphi,\psi,c)=\int_{\{\psi<-T_{2}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi). \tag{3.14}\] On the other hand, according to the statement (3), we have \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F}))|_{Z_{0}}) \subset H^{0}(\tilde{Z}_{0},(\mathcal{O}(K_{M})\otimes\tilde{\mathcal{F}})|_{ Z_{0}})\). Then it follows from the statement (1), (2), equality (3.13) and Lemma 2.3 that \[\begin{split}&\int_{\{-T_{2}\leq\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c (-\psi)\\ =&\int_{\{-T_{2}\leq\tilde{\psi}<-t\}}|F_{t}|^{2}e^{- \tilde{\varphi}}c(-\tilde{\psi})\\ =&\int_{\{\tilde{\psi}<-t\}}|F_{t}|^{2}e^{-\tilde{ \varphi}}c(-\tilde{\psi})-\int_{\{\tilde{\psi}<-T_{2}\}}|F_{t}|^{2}e^{-\tilde {\varphi}}c(-\tilde{\psi})\\ =&\int_{\{\tilde{\psi}<-t\}}|\tilde{F}|^{2}e^{- \tilde{\varphi}}c(-\tilde{\psi})+\int_{\{\tilde{\psi}<-t\}}|F_{t}-\tilde{F}|^{ 2}e^{-\tilde{\varphi}}c(-\tilde{\psi})\\ &-\int_{\{\tilde{\psi}<-T_{2}\}}|\tilde{F}|^{2}e^{-\tilde{ \varphi}}c(-\tilde{\psi})-\int_{\{\tilde{\psi}<-T_{2}\}}|F_{t}-\tilde{F}|^{2} e^{-\tilde{\varphi}}c(-\tilde{\psi})\\ =&\int_{\{-T_{2}\leq\tilde{\psi}<-t\}}|\tilde{F}|^{ 2}e^{-\tilde{\varphi}}c(-\tilde{\psi})+\int_{\{-T_{2}\leq\tilde{\psi}<-t\}}|F_ {t}-\tilde{F}|^{2}e^{-\tilde{\varphi}}c(-\tilde{\psi})\\ \geq&\int_{\{-T_{2}\leq\tilde{\psi}<-t\}}|\tilde{F}| ^{2}e^{-\tilde{\varphi}}c(-\tilde{\psi})\\ =&\int_{\{-T_{2}\leq\psi<-t\}}|\tilde{F}|^{2}e^{- \varphi}c(-\psi).\end{split} \tag{3.15}\] Combining inequality (3.14) with equality (3.15), we get that \[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)\geq\int_{\{\psi<-t\}}| \tilde{F}|^{2}e^{-\varphi}c(-\psi).\] Since \((\tilde{F}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), according to the arbitrariness of \(F_{t}\), we know that \[G(t;\varphi,\psi,c)=\int_{\{\psi<-t\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi),\ t \in[T_{1},T_{2}].\] Now for any \(t\in[T_{1},T_{2}]\), Remark 1.6 implies that \[\begin{split} G(t;\varphi,\psi,c)&=\int_{\{\psi<- t\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\\ &=\int_{\{\psi<-T_{2}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)+\int_{ \{-T_{2}\leq\psi<-t\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\\ &=G(T_{2};\varphi,\psi,c)+\int_{\{-T_{2}\leq\tilde{\psi}<-t\}}| \tilde{F}|^{2}e^{-\tilde{\varphi}}c(-\tilde{\psi})\\ &=G(T_{2};\varphi,\psi,c)+\frac{G(T_{1};\tilde{\varphi},\tilde{ \psi},\tilde{c})-G(T_{2};\tilde{\varphi},\tilde{\psi},\tilde{c})}{\int_{T_{1} }^{T_{2}}\tilde{c}(s)e^{-s}\mathrm{d}s}\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s.\end{split}\] It means that \(G(h^{-1}(r);\varphi,\psi,c)\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). ## 4. Proofs of Theorem 1.8 and Theorem 1.9 In this section, we prove Theorem 1.8 and Theorem 1.9. Proof of Theorem 1.8.: Assume that \(G(h^{-1}(r);\varphi)\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s\), \(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). Then it follows from Theorem 1.5 that there exists a unique holomorphic \((n,0)\) form \(F\) on \(\{\psi<-T_{1}\}\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \[G(t;\varphi)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\] for any \(t\in[T_{1},T_{2}]\). As \(\tilde{\varphi}+\psi\) is plurisubharmonic and \(\tilde{\varphi}-\varphi\) is bounded on \(\{\psi<-T_{1}\}\), it follows from Theorem 1.2 that \(G(h^{-1}(r);\tilde{\varphi})\) is concave with respect to \(r\in[0,\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). Since \(\tilde{\varphi}+\psi\geq\varphi+\psi\) and \(\tilde{\varphi}+\psi\not\equiv\varphi+\psi\) on the interior of \(\{-T_{2}\leq\psi<-T_{1}\}\), and both of them are plurisubharmonic on \(M\), then there exists a subset \(U\) of \(\{-T_{2}\leq\psi<-T_{1}\}\) such that \(\mu(U)>0\) and \(e^{-\tilde{\varphi}}<e^{-\varphi}\) on \(U\), where \(\mu\) is the Lebesgue measure on \(M\). Since \(\tilde{\varphi}=\varphi\) on \(\{\psi<-T_{2}\}\), we have \[\frac{G(T_{1};\tilde{\varphi})-G(T_{2};\tilde{\varphi})}{\int_{T _{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[= \frac{G(T_{1};\tilde{\varphi})-G(T_{2};\varphi)}{\int_{T_{1}}^{T _{2}}c(s)e^{-s}\mathrm{d}s}\] \[\leq \frac{\int_{\{\psi<-T_{1}\}}|F|^{2}e^{-\tilde{\varphi}}c(-\psi) -\int_{\{\psi<-T_{2}\}}|F|^{2}e^{-\varphi}c(-\psi)}{\int_{T_{1}}^{T_{2}}c(s)e ^{-s}\mathrm{d}s}\] \[< \frac{G(T_{1};\varphi)-G(T_{2};\varphi)}{\int_{T_{1}}^{T_{2}}c(s )e^{-s}\mathrm{d}s}\] for any \(t\in[T_{1},T_{2})\). By Lemma 2.3, for any \(t\in[T_{1},T_{2})\), there exists a holomorphic \((n,0)\) form \(F_{t}\) such that \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \[G(t;\tilde{\varphi})=\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\tilde{\varphi}}c(-\psi )<+\infty.\] Since \(\tilde{\varphi}-\varphi\) is bounded on \(\{\psi<-T_{1}\}\), we have \[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)<+\infty.\] Then according to Lemma 2.3, for any \(t_{1},t_{2}\in[T_{1},T_{2}]\), \(t_{1}<t_{2}\), we have \[G(t_{1};\tilde{\varphi})-G(t_{2};\tilde{\varphi})\] \[\geq \int_{\{-t_{2}\leq\psi<-t_{1}\}}|F_{t_{1}}|^{2}e^{-\tilde{\varphi} }c(-\psi)\] \[\geq (\inf_{\{-t_{2}\leq\psi<-t_{1}\}}e^{\varphi-\tilde{\varphi}}) \int_{\{-t_{2}\leq\psi<-t_{1}\}}|F_{t_{1}}|^{2}e^{-\varphi}c(-\psi)\] \[= (\inf_{\{-t_{2}\leq\psi<-t_{1}\}}e^{\varphi-\tilde{\varphi}})\times\] \[\left(\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi) +\int_{\{-t_{2}\leq\psi<-t_{1}\}}|F_{t_{1}}-F|^{2}e^{-\varphi}c(-\psi)\right)\] \[\geq (\inf_{\{-t_{2}\leq\psi<-t_{1}\}}e^{\varphi-\tilde{\varphi}}) \int_{\{-t_{2}\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi).\] Then \(\lim_{t\to T_{1}+0}\sup_{\{-t\leq\psi<-T_{1}\}}(\tilde{\varphi}-\varphi)=0\) implies that \[\lim_{t\to T_{1}+0}\frac{G(T_{1};\tilde{\varphi})-G(t;\tilde{\varphi})}{\int_{T_ {1}}^{t}c(s)e^{-s}\mathrm{d}s}\] \[\geq \lim_{t\to T_{1}+0}(\inf_{\{-t\leq\psi<-T_{1}\}}e^{\varphi- \tilde{\varphi}})\frac{\int_{\{-t\leq\psi<-T_{1}\}}|F|^{2}e^{-\varphi}c(-\psi) }{\int_{T_{1}}^{t}c(s)e^{-s}\mathrm{d}s}\] \[= \frac{G(T_{1};\varphi)-G(T_{2};\varphi)}{\int_{T_{1}}^{T_{2}}c(s) e^{-s}\mathrm{d}s}\] \[> \frac{G(T_{1};\tilde{\varphi})-G(T_{2};\tilde{\varphi})}{\int_{T_ {1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s},\] which contradicts to the concavity of \(G(h^{-1}(r);\tilde{\varphi})\). It means that the assumption can not hold, i.e. \(G(h^{-1}(r);\varphi)\) is not linear with respect to \(r\) on \([\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\)\(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). Especially, if \(\varphi+\psi\) is strictly plurisubharmonic at \(z_{1}\in\mathrm{int}(\{-T_{2}\leq\psi<-T_{1}\})\), we can construct some \(\tilde{\varphi}\) satisfying the five statements in Theorem 1.8. By the assumption, there is a small open neighborhood \((U,w)\) of \(z_{1}\), such that \(U\Subset\{-T_{2}\leq\psi<-T_{1}\}\) and \(\sqrt{-1}\partial\bar{\partial}(\varphi+\psi)>\varepsilon\omega\) on \(U\), where \(w=(w_{1},\ldots,w_{n})\) is the local coordinate on \(U\), \(\omega=\sqrt{-1}\sum_{j=1}^{n}\mathrm{d}w_{j}\wedge\mathrm{d}\bar{w}_{j}\) on \(U\). Let \(\rho\) be a smooth nonnegative function on \(M\) such that \(\rho\not\equiv 0\) and \(\mathrm{Supp}\rho\Subset U\). Then we can choose a positive number \(\delta\) such that \[\sqrt{-1}\partial\bar{\partial}(\varphi+\psi+\delta\rho)\geq 0\] on \(U\). Let \(\tilde{\varphi}=\varphi+\delta\rho\), then it can be checked that \(\tilde{\varphi}\) satisfies the five statements in Theorem 1.8. It implies that \(G(h^{-1}(r))\) is not linear with respect to \(r\) on \([\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\)\(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). Now, we give the proof of Theorem 1.9. Proof of Theorem 1.9.: Let \(\tilde{\varphi}=\varphi+\psi-\tilde{\psi}\), then \(\tilde{\varphi}+\tilde{\psi}=\varphi+\psi\) is a plurisubharmonic function on \(M\). Assume that \(G(h^{-1}(r))\) is linear with respect to \(r\in[\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\int_{T_{1}}^{+\infty}c(s)e^ {-s}\mathrm{d}s]\). We can denote \[\tilde{c}(t)=\left\{\begin{array}{ll}c(T_{1}),&t\in[T_{1},T_{2}],\\ c(t),&t\in[T,+\infty]\setminus[T_{1},T_{2}].\end{array}\right.\] Then it is clear that \(G(h^{-1}(r);\tilde{c})\) is also linear with respect to \(r\) on \([\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\)\(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\) by Remark 1.6. Thus we may also assume that \(c(t)e^{-t}\) is strictly decreasing on \([T_{1},T_{2}]\) (note that \(c(T_{1})\leq c(T_{2})e^{T_{1}-T_{2}}<c(T_{2})\)), and \(c(t)\) is increasing on \([T_{1},T_{2}]\). It follows from Theorem 1.5 that there exists a unique holomorphic \((n,0)\) form \(F\) on \(\{\psi<-T_{1}\}\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \[G(t;\varphi,\psi)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\] for any \(t\in[T_{1},T_{2}]\). According to the assumptions, we have \(Z_{0}\subset\{\psi=-\infty\}=\{\tilde{\psi}=-\infty\}\). As \(c(t)e^{-t}\) is decreasing and \(\tilde{\psi}\geq\psi\), we have \(e^{-\varphi}c(-\psi)=e^{-\varphi-\tilde{\psi}}e^{\psi}c(-\psi)\leq e^{-\tilde{ \varphi}-\tilde{\psi}}e^{\tilde{\psi}}c(-\tilde{\psi})=e^{-\tilde{\varphi}}c(- \tilde{\psi})\). Then it follows from Theorem 1.2 that \(G(h^{-1}(r);\tilde{\varphi},\tilde{\psi})\) is concave with respect to \(r\). We prove the following inequality: \[\lim_{t\to T_{1}+0}\frac{G(t;\tilde{\varphi},\tilde{\psi})-G(T_{2}; \tilde{\varphi},\tilde{\psi})}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s}>\frac{G( T_{1};\varphi,\psi)-G(T_{2};\varphi,\psi)}{\int_{T_{1}}^{T_{2}}c(s)e^{-s} \mathrm{d}s}. \tag{4.1}\] We just need to prove it for the case \(G(T_{1};\tilde{\varphi},\tilde{\psi})<+\infty\). By Lemma 2.3, there exists a holomorphic \((n,0)\) form \(F_{T_{1}}\) such that \((F_{T_{1}}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\), and \[G(T_{1};\tilde{\varphi},\tilde{\psi})=\int_{\{\psi<-T_{1}\}}|F_{T_{1}}|^{2}e^{ -\tilde{\varphi}}c(-\tilde{\psi})\in(0,+\infty).\] Since \(\tilde{\psi}\geq\psi\) and \(\tilde{\psi}\not\equiv\psi\) on the interior of \(\{-T_{2}\leq\psi<-T_{1}\}\), and both of them are plurisubharmonic on \(M\), then there exists a subset \(U\) of \(\{-T_{2}\leq\psi<-T_{1}\}\) such that \(\mu(U)>0\) and \(e^{-\tilde{\psi}}<e^{-\psi}\) on \(U\), where \(\mu\) is the Lebesgue measure on \(M\). As \(F_{T_{1}}\not\equiv 0\), \(\tilde{\psi}=\psi\) on \(\{\psi<-T_{2}\}\), and \(c(t)e^{-t}\) is strictly decreasing on \([T_{1},T_{2}]\), we have \[\frac{G(T_{1};\tilde{\varphi},\tilde{\psi})-G(T_{2};\tilde{\varphi },\tilde{\psi})}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[= \frac{\int_{\{\psi<-T_{1}\}}|F_{T_{1}}e^{-\tilde{\varphi}}c(- \tilde{\psi})-G(T_{2};\varphi,\psi)}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[> \frac{\int_{\{\psi<-T_{1}\}}|F_{T_{1}}e^{-\varphi}c(-\psi)-G(T_{ 2};\varphi,\psi)}{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[\geq \frac{G(T_{1};\varphi,\psi)-G(T_{2};\varphi,\psi)}{\int_{T_{1}}^{ T_{2}}c(s)e^{-s}\mathrm{d}s}.\] Thus we get inequality (4.1). As \(c(t)\) is increasing on \([T_{1},T_{2}]\) and \(\lim_{t\to T_{2}-0}\sup_{\{-T_{2}\leq\psi<-t\}}(\tilde{\psi}-\psi)=0\), we obtain that \[\lim_{t\to T_{2}-0}\frac{G(t;\tilde{\varphi},\tilde{\psi})-G(T_{2}; \tilde{\varphi},\tilde{\psi})}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[= \lim_{t\to T_{2}-0}\frac{G(t;\tilde{\varphi},\tilde{\psi})-G(T_{2}; \varphi,\psi)}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[\leq \lim_{t\to T_{2}-0}\frac{\int_{\{-T_{2}\leq\psi<-t\}}|F|^{2}e^{- \tilde{\varphi}}c(-\psi)}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[\leq \lim_{t\to T_{2}-0}\frac{\int_{\{-T_{2}\leq\psi<-t\}}|F|^{2}e^{- \varphi-\psi}e^{\tilde{\psi}}c(-\psi)}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s} \tag{4.2}\] \[\leq \lim_{t\to T_{2}-0}(\sup_{\{-T_{2}\leq\psi<-t\}}e^{\tilde{\psi} -\psi})\frac{\int_{\{-T_{2}\leq\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)}{\int_{t }^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[= \frac{\int_{\{-T_{2}\leq\psi<-T_{1}\}}|F|^{2}e^{-\varphi}c(-\psi) }{\int_{T_{1}}^{T_{2}}c(s)e^{-s}\mathrm{d}s}\] \[= \frac{G(T_{1};\varphi,\psi)-G(T_{2};\varphi,\psi)}{\int_{T_{1}}^{ T_{2}}c(s)e^{-s}\mathrm{d}s}.\] Combining (4.2) with (4.1), we have \[\lim_{t\to T_{2}-0}\frac{G(t;\tilde{\varphi},\tilde{\psi})-G(T_{2}; \tilde{\varphi},\tilde{\psi})}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s}<\lim_{t \to T_{1}+0}\frac{G(t;\tilde{\varphi},\tilde{\psi})-G(T_{2};\tilde{\varphi}, \tilde{\psi})}{\int_{t}^{T_{2}}c(s)e^{-s}\mathrm{d}s},\] which contradicts to the concavity of \(G(h^{-1}(r);\tilde{\varphi},\tilde{\psi})\). It means that the assumptions can not hold, i.e. \(G(h^{-1}(r);\varphi,\psi)\) is not linear with respect to \(r\) on \([\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\)\(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). Especially, if \(\psi\) is strictly plurisubharmonic at \(z_{1}\in\mathrm{int}(\{-T_{2}\leq\psi<-T_{1}\})\), we can construct some \(\tilde{\psi}\) satisfying the four statements in Theorem 1.9. By the assumption, there is a small open neighborhood \((U,w)\) of \(z_{1}\), such that \(U\Subset\{-T_{2}\leq\psi<-T_{1}\}\) and \(\sqrt{-1}\partial\bar{\partial}(\varphi+\psi)>\varepsilon\omega\) on \(U\), where \(w=(w_{1},\ldots,w_{n})\) is the local coordinate on \(U\), \(\omega=\sqrt{-1}\sum_{j=1}^{n}\mathrm{d}w_{j}\wedge\mathrm{d}\bar{w}_{j}\) on \(U\). Let \(\rho\) be a smooth nonnegative function on \(M\) such that \(\rho\not\equiv 0\) and \(\mathrm{Supp}\rho\Subset U\). Then we can choose a positive number \(\delta\) such that \[\sqrt{-1}\partial\bar{\partial}(\varphi+\psi+\delta\rho)\geq 0\] and \(\psi+\delta\rho<-T_{1}\) on \(U\). Let \(\tilde{\psi}=\psi+\delta\rho\), then it can be checked that \(\tilde{\psi}\) satisfies the four statements in Theorem 1.9. It implies that \(G(h^{-1}(r))\) is not linear with respect to \(r\) on \([\int_{T_{2}}^{+\infty}c(s)e^{-s}\mathrm{d}s,\)\(\int_{T_{1}}^{+\infty}c(s)e^{-s}\mathrm{d}s]\). ## 5. Proof of Theorem 1.11 In this section, we prove Theorem 1.11. Proof of Theorem 1.11.: We proof Theorem 1.11 in two steps: Firstly, we prove the sufficiency part of characterization; Secondly, we prove the necessity part of characterization. _Step 1._ Assume that the two statements in Theorem 1.11 hold. Note that \(\psi=G_{\Omega}(\cdot,z_{0})+a\) and \(\varphi=2\log|g|+2G_{\Omega}(\cdot,z_{0})+2u\) on \(\{\psi<2a\}=\{G_{\Omega}(\cdot,z_{0})<a\}\). It follows from \(\chi_{z_{0}}=\chi_{-u}\) that there is a holomorphic function \(g_{0}\) on \(\Omega\) such that \(|g_{0}|=e^{u+G_{\Omega}(\cdot,z_{0})}\), thus \[\chi_{a,z_{0}}=\chi_{a,-u},\] where \(\chi_{a,z_{0}}\) and \(\chi_{a,-u}\) are the characters on \(\{G_{\Omega}(\cdot,z_{0})<a\}\) associated to \(G_{\Omega}(\cdot,z_{0})\) and \(-u\) respectively. Denote that \(\psi_{1}:=\psi-2a=G_{\Omega}(\cdot,z_{0})-a\) (the Green function on \(\{G_{\Omega}(\cdot,z_{0})<a\}\)) and \(c_{1}(t):=c(t-2a)\). By Theorem 2.7, \(G(h_{1}^{-1}(r);\psi_{1},c_{1})\) is linear on \([0,\int_{0}^{+\infty}c_{1}(t)e^{-t}dt]=[0,e^{-2a}\int_{-2a}^{+\infty}c(t)e^{- t}dt]\), where \(h_{1}(t)=\int_{t}^{+\infty}c(se^{-s}ds)\). Note that \[G(h_{1}^{-1}(r);\psi_{1},c_{1})=G(h^{-1}(re^{2a});\psi,c),\] then we get that \(G(h^{-1}(r);\psi,c)\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\). Denote that \(\psi_{2}:=2G_{\Omega}(\cdot,z_{0})\) and \(\varphi_{2}:=2\log|g|+2u+a\) on \(\Omega\). By Theorem 2.7, \(G(h^{-1}(r);\psi_{2},\varphi_{2})\) is linear on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\). Denote that \[F:=b_{0}gp_{*}(f_{u}df_{z_{0}}),\] where \(b_{0}\) is a constant such that \(ord_{z_{0}}(F-f_{0})>k\). Following from Remark 2.8, we have \[G(t-2a;\psi,\varphi,c) =G(t;\psi_{1},\varphi,c_{1}) \tag{5.1}\] \[=\int_{\{\psi_{1}<-t\}}|F|^{2}e^{-\varphi}c_{1}(-\psi_{1})\] \[=\int_{\{\psi<-t+2a\}}|F|^{2}e^{-\varphi}c(-\psi)\] and \[G(t;\psi_{2},\varphi_{2},c)=\int_{\{\psi_{2}<-t\}}|F|^{2}e^{-\varphi_{2}}c(- \psi_{2}) \tag{5.2}\] for any \(t\geq 0\). Let \(t_{1}\in[0,-2a]\), and let \(\tilde{F}\) be any holomorphic \((1,0)\) form on \(\{\psi<-t_{1}\}\) satisfying \((\tilde{F}-f_{0},z_{0})\in(\mathcal{I}(\varphi+\psi)\otimes\mathcal{O}(K_{ \Omega}))_{z_{0}}\) and \(\int_{\{\psi<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)<+\infty\). As \(c(t)e^{-t}\) is decreasing and \(\psi_{2}\leq\psi\), then \(e^{-\varphi_{2}}c(-\psi_{2})\leq e^{-\varphi}c(-\psi)\). By Lemma 2.3 and equality (5.2), we have \[\int_{\{\psi<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi_{2}}c(-\psi_{2})\] \[= \int_{\{\psi_{2}<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi_{2}}c(-\psi_{ 2})\] \[= \int_{\{\psi_{2}<-t_{1}\}}|F|^{2}e^{-\varphi_{2}}c(-\psi_{2})+ \int_{\{\psi_{2}<-t_{1}\}}|F-\tilde{F}|^{2}e^{-\varphi_{2}}c(-\psi_{2})\] and \[\int_{\{\psi<2a\}}|\tilde{F}|^{2}e^{-\varphi_{2}}c(-\psi_{2})\] \[= \int_{\{\psi_{2}<2a\}}|\tilde{F}|^{2}e^{-\varphi_{2}}c(-\psi_{2})\] \[= \int_{\{\psi_{2}<2a\}}|F|^{2}e^{-\varphi_{2}}c(-\psi_{2})+\int_{ \{\psi_{2}<2a\}}|F-\tilde{F}|^{2}e^{-\varphi_{2}}c(-\psi_{2}),\] which shows that \[\int_{\{2a\leq\psi<-t_{1}\}}|\tilde{F}|e^{-\varphi}c(-\psi) =\int_{\{2a\leq\psi<-t_{1}\}}|\tilde{F}|e^{-\varphi}c(-\psi)\] \[\geq\int_{\{2a\leq\psi<-t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi).\] Equality (5.1) implies that \[\int_{\{\psi<2a\}}|\tilde{F}|e^{-\varphi}c(-\psi)\geq\int_{\{\psi<2a\}}|F|^{2} e^{-\varphi}c(-\psi).\] Thus, we have \[G(t_{1};\psi,\varphi,c) =\int_{\{\psi<-t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi)\] \[=\int_{\{\psi<2a\}}|F|^{2}e^{-\varphi}c(-\psi)+\int_{\{2a\leq\psi< -t_{1}\}}|F|^{2}e^{-\varphi}c(-\psi)\] \[=\int_{\{\psi<2a\}}|F|^{2}e^{-\varphi}c(-\psi)+\int_{\{2a\leq\psi _{2}<-t_{1}\}}|F|^{2}e^{-\varphi_{2}}c(-\psi_{2})\] \[=\int_{\{\psi<2a\}}|F|^{2}e^{-\varphi}c(-\psi)+G(t_{1};\psi_{2}, \varphi_{2},c)-G(-2a;\psi_{2},\varphi_{2},c)\] for any \(t_{1}\in[0,-2a]\). As \(G(h^{-1}(r);\psi_{2},\varphi_{2},c)\) is linear on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\), then \(G(h^{-1}(r);\psi,\varphi,c)\) is linear on \([\int_{-2a}^{+\infty}c(t)e^{-t}dt,\int_{0}^{+\infty}c(t)e^{-t}dt]\). _Step 2._ Assume that \(G(h^{-1}(r))\) is linear with respect to \(r\) on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\) and \([\int_{-2a}^{+\infty}c(t)e^{-t}dt,\int_{0}^{+\infty}c(t)e^{-t}dt]\). By Theorem 1.5, there exists a holomorphic \((1,0)\) form \(F_{1}\) on \(\Omega\) such that \((F_{1}-f_{0},z_{0})\in(\mathcal{I}(\varphi+\psi)\otimes\mathcal{O}(K_{\Omega} ))_{z_{0}}\) and \[G(t)=\int_{\{\psi<-t\}}|F_{1}|^{2}e^{-\varphi}c(-\psi) \tag{5.3}\] for any \(t\geq 0\). Denote that \[h:=\frac{F_{1}}{p_{s}(df_{z_{0}})}\] is a multi-valued meromorphic function on \(\Omega\), and \(|h|\) is single-valued. As \(G(h^{-1}(r))\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\), it follows from Remark 2.8 that \[2\log|h|=\varphi_{0}+b_{1}\] on \(\{G_{\Omega}(\cdot,z_{0})<a\}\), where \(b_{1}\) is a constant. As \(\varphi_{0}\) is subharmonic on \(\Omega\), \(h\) has no pole in \(\{G_{\Omega}(\cdot,z_{0})\leq a\}\). Since \(\{G_{\Omega}(\cdot,z_{0})=a\}\) is a compact set, by using Lemma 2.12, we know that there exists \(a_{1}\in(a,0)\) such that \(h\) has no zero point in \(\{a<G_{\Omega}(\cdot,z_{0})<a_{1}\}\). It follows from the Weierstrass Theorem on open Riemann surfaces (see [13]), that there is a holomorphic function \(g_{1}\) on \(\Omega\) such that \[u_{1}:=\log|h|-\log|g_{1}|-\frac{b_{1}}{2}\] is harmonic on \(\{G_{\Omega}(\cdot,z_{0})<a_{1}\}\). Thus, \(g_{1}\) has no zero point in \(\{a<G_{\Omega}(\cdot,z_{0})<a_{1}\}\), and \[u_{1}=\frac{\varphi_{0}}{2}-\log|g_{1}|\] on \(\{G_{\Omega}(\cdot,z_{0})<a\}\). Note that \(2\log|h|=\varphi_{0}+b_{1}\) on \(\{G_{\Omega}(\cdot,z_{0})<a\}\) and \(\{G_{\Omega}(\cdot,z_{0})=a\}\) is a closed real analytic curve, it follows from Lemma 2.16 that the Lelong number \(v(dd^{c}\varphi_{0},z)\geq 2ord_{z}(h)=2ord_{z}(g_{1})\) for any \(z\in\{G_{\Omega}(\cdot,z_{0})=a\}\), hence \[v_{1}:=\frac{\varphi_{0}}{2}-\log|g_{1}|\] is subharmonic on \(\{G_{\Omega}(\cdot,z_{0})<a_{1}\}\). Note that \(v_{1}=u_{1}\) on \(\{G_{\Omega}(\cdot,z_{0})<a\}\), then \[v_{1}=u_{1}\] on \(\{G_{\Omega}(\cdot,z_{0})\leq a\}\) by Lemma 2.17. As \(G(h^{-1}(r))\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\), it follows from Theorem 2.7 that \[ord_{z_{0}}(g_{1})=ord_{z_{0}}(f_{0})=ord_{z_{0}}(F_{1}).\] Denote that \(\psi_{3}:=2G_{\Omega}(\cdot,z_{0})\) and \(\varphi_{3}:=2\log|g_{1}|+2u_{1}+a\) on \(\{G_{\Omega}(\cdot,z_{0})<a_{1}\}\). Since \(\frac{h}{g_{1}}dp_{*}(f_{z_{0}})\) is a single-value holomorphic \((1,0)\) form and \(u_{1}=\log|\frac{h}{g_{1}}|-\frac{b_{1}}{2}\) on \(\{G_{\Omega}(\cdot,z_{0})<a_{1}\}\), we know that \[\chi_{a_{1},-u_{1}}=\chi_{a_{1},z_{0}}.\] Following from Theorem 2.7 and Remark 2.8, \(G(h^{-1}(r);\psi_{3},\varphi_{3})\) is linear with respect to \(r\) on \([0,\int_{-2a_{1}}^{+\infty}c(t)e^{-t}dt]\) and \[G(t;\psi_{3},\varphi_{3})=\int_{\{\psi_{3}<-t\}}|F_{1}|^{2}e^{-\varphi_{3}}c( -\psi_{3}) \tag{5.4}\] for any \(t\geq-2a_{1}\). Denote that \(\tilde{\varphi}_{3}:=\varphi_{0}+a\) on \(\Omega\), hence \(\tilde{\varphi}_{3}=\varphi_{3}\) on \(\{G_{\Omega}(\cdot,z_{0})<a\}\) and \(\tilde{\varphi}_{3}=2\log|g_{1}|+2v_{1}+a\) on \(\{G_{\Omega}(\cdot,z_{0})<a_{1}\}\). Theorem 1.2 shows that \(G(h^{-1}(r);\psi_{3},\tilde{\varphi}_{3})\) is concave on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\). Let \(t_{1}\in[0,-2a]\), and let \(\tilde{F}\) be any holomorphic \((1,0)\) form on \(\{\psi_{3}<-t_{1}\}=\{\psi<-t_{1}\}\) satisfying \(ord_{z_{0}}(\tilde{F}-f_{0})>ord_{z_{0}}(g_{1})\) and \[\int_{\{\psi<-t_{1}\}}|\tilde{F}|^{2}e^{-\tilde{\varphi}_{3}}c(-\psi_{3})<+\infty.\] As \(ord_{z_{0}}(\tilde{F}-F_{1})>ord_{z_{0}}(g_{1})\) and \(c(t)e^{-t}\) is decreasing, it follows from Lemma 2.10 that there exists \(t_{2}>t_{1}\) such that \[\int_{\{\psi_{3}<-t_{2}\}}|F_{1}-\tilde{F}|^{2}e^{-\varphi}c(-\psi)\leq c(t_{ 2})e^{-t_{2}}\int_{\{\psi_{3}<-t_{2}\}}|F_{1}-\tilde{F}|^{2}e^{-\varphi-\psi}<+\infty.\] Note that \(e^{-\varphi}c(-\psi)\leq C_{1}e^{-\varphi_{0}+a}c(-2G_{\Omega}(\cdot,z_{0}))= C_{1}e^{-\tilde{\varphi}_{3}}c(-\psi_{3})\) on \(\{-t_{2}\leq\psi_{3}<-t_{1}\}=\{-t_{2}\leq 2G_{\Omega}(\cdot,z_{0})<-t_{1}\}\), where \(C_{1}>0\) is a constant. Then we have \[\int_{\{\psi<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] \[\leq \int_{\{\psi_{3}<-t_{2}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)+ \int_{\{-t_{2}\leq\psi_{3}<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] \[\leq 2\int_{\{\psi_{3}<-t_{2}\}}|F_{1}-\tilde{F}|^{2}e^{-\varphi}c(- \psi)+\int_{\{\psi_{3}<-t_{2}\}}|F_{1}|^{2}e^{-\varphi}c(-\psi)\] \[+C_{1}\int_{\{\psi_{3}<-t_{2}\}}|\tilde{F}|^{2}e^{-\tilde{\varphi} _{3}}c(-\psi_{3})\] \[< +\infty.\] By Lemma 2.3 and equality (5.3), we have \[\int_{\{\psi<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] \[= \int_{\{\psi<-t_{1}\}}|F_{1}|^{2}e^{-\varphi}c(-\psi)+\int_{\{\psi<- t_{1}\}}|F_{1}-\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] and \[\int_{\{\psi<2a\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] \[= \int_{\{\psi<2a\}}|F_{1}|^{2}e^{-\varphi}c(-\psi)+\int_{\{\psi<2a \}}|F_{1}-\tilde{F}|^{2}e^{-\varphi}c(-\psi),\] which shows that \[\int_{\{2a\leq\psi<-t_{1}\}}|\tilde{F}|e^{-\varphi}c(-\psi)\geq\int_{\{2a\leq \psi<-t_{1}\}}|F_{1}|^{2}e^{-\varphi}c(-\psi).\] Combining equality (5.4), we have \[\int_{\{\psi_{3}<-t_{1}\}}|\tilde{F}|^{2}e^{-\tilde{\varphi}_{3}} c(-\psi_{3})\] \[= \int_{\{\psi_{3}<2a\}}|\tilde{F}|^{2}e^{-\varphi_{3}}c(-\psi_{3}) +\int_{\{2a\leq\psi<-t_{1}\}}|\tilde{F}|^{2}e^{-\varphi}c(-\psi)\] \[\geq \int_{\{\psi_{3}<2a\}}|F_{1}|^{2}e^{-\varphi_{3}}c(-\psi_{3})+ \int_{\{2a\leq\psi<-t_{1}\}}|F_{1}|^{2}e^{-\varphi}c(-\psi)\] \[= \int_{\{\psi_{3}<-t_{1}\}}|F_{1}|^{2}e^{-\tilde{\varphi}_{3}}c(- \psi_{3})\] for any \(t_{1}\in[0,-2a]\). Note that \(G(t;\psi_{3},\varphi_{3})=G(t;\psi_{3},\tilde{\varphi}_{3})\) for any \(t\geq-2a\), then \[G(t;\psi_{3},\tilde{\varphi}_{3})=\int_{\{\psi_{3}<-t_{1}\}}|F_{1}|^{2}e^{- \tilde{\varphi}_{3}}c(-\psi_{3}) \tag{5.5}\] for any \(t\geq 0\). As \(G(h^{-1}(r);\psi_{3},\varphi_{3})\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\) and \(G(h^{-1}(r);\psi,\varphi)\) is linear on \([\int_{-2a}^{+\infty}c(t)e^{-t}dt,\int_{0}^{+\infty}c(t)e^{-t}dt]\), we get that \(G(h^{-1}(r);\psi_{3},\tilde{\varphi}_{3})\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\) and \([\int_{-2a}^{+\infty}c(t)e^{-t}dt,\int_{0}^{+\infty}c(t)e^{-t}dt]\). As \(G(h^{-1}(r);\psi_{3},\varphi_{3})\) is linear on \([0,\int_{-2a_{1}}^{+\infty}c(t)e^{-t}dt]\), we have \[\lim_{s\to a+0}\frac{G(-2s;\psi_{3},\varphi_{3})-G(-2a;\psi_{3},\varphi_{3})} {\int_{-2s}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^{+\infty}c(t)e^{-t}dt}=\lim_{s_{1 }\to a-0}\frac{G(-2s_{1};\psi_{3},\varphi_{3})-G(-2a;\psi_{3},\varphi_{3})}{ \int_{-2s_{1}}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^{+\infty}c(t)e^{-t}dt},\] which shows that \[\begin{split}&\lim_{s\to a+0}\frac{\int_{\{2a\leq\psi_{3}<2s\}}|p_{*}( df_{z_{0}})|^{2}e^{b_{1}-a}c(-\psi_{3})}{\int_{-2s}^{-2a}c(t)e^{-t}dt}\\ =&\lim_{s\to a+0}\frac{\int_{\{2a\leq\psi_{3}<2s\}}|F_{1}|^{2}e ^{-2u_{1}-2\log|g_{1}|-a}c(-\psi_{3})}{\int_{-2s}^{-2a}c(t)e^{-t}dt}\\ =&\lim_{s\to a+0}\frac{G(-2s;\psi_{3},\varphi_{3})-G(-2a; \psi_{3},\varphi_{3})}{\int_{-2s}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^{+\infty}c(t) e^{-t}dt}\\ =&\lim_{s_{1}\to a-0}\frac{G(-2s_{1};\psi_{3}, \varphi_{3})-G(-2a;\psi_{3},\varphi_{3})}{\int_{-2s_{1}}^{+\infty}c(t)e^{-t}dt -\int_{-2a}^{+\infty}c(t)e^{-t}dt}\\ =&\lim_{s_{1}\to a-0}\frac{\int_{\{2s_{1}\leq\psi_{3}<2a\}}|p_{ *}(df_{z_{0}})|^{2}e^{b_{1}-a}c(-\psi_{3})}{\int_{-2a}^{-2s_{1}}c(t)e^{-t}dt}. \end{split} \tag{5.6}\] For any \(\epsilon>0\), as \(\{G_{\Omega}(\cdot,z_{0})=a\}\) is a compact set and \(v_{1}=u_{1}\) on \(\{G_{\Omega}(\cdot,z_{0})\leq a\}\), it follows from the upper Continuity of \(v_{1}-u_{1}\) and Lemma 2.12 that there exists \(a_{2}\in(a,a_{1})\) such that \[v_{1}-u_{1}<\epsilon \tag{5.7}\] on \(\{a\leq G_{\Omega}(\cdot,z_{0})\leq a_{2}\}\). By equality (5.5), (5.6) and inequality (5.7), \[\begin{split}&\lim_{s\to a+0}\frac{G(-2s;\psi_{3},\tilde{\varphi}_{3 })-G(-2a;\psi_{3},\tilde{\varphi}_{3})}{\int_{-2s}^{+\infty}c(t)e^{-t}dt-\int_ {-2a}^{+\infty}c(t)e^{-t}dt}\\ =&\lim_{s\to a+0}\frac{\int_{\{2a\leq\psi_{3}<2s\}}|F_{1}|^{2 }e^{-2v_{1}-2\log|g_{1}|-a}c(-\psi_{3})}{\int_{-2s}^{-2a}c(t)e^{-t}dt}\\ =&\lim_{s\to a+0}\frac{\int_{\{2a\leq\psi_{3}<2s\}}|p_{ *}(df_{z_{0}})|^{2}e^{2u_{1}-2v_{1}+b_{1}-a}c(-\psi_{3})}{\int_{-2s}^{-2a}c(t) e^{-t}dt}\\ \geq& e^{-2\epsilon}\lim_{s\to a+0}\frac{\int_{\{2a \leq\psi_{3}<2s\}}|p_{*}(df_{z_{0}})|^{2}e^{b_{1}-a}c(-\psi_{3})}{\int_{-2s}^{- 2a}c(t)e^{-t}dt}\\ =& e^{-2\epsilon}\lim_{s_{1}\to a-0}\frac{G(-2s_{1}; \psi_{3},\varphi_{3})-G(-2a;\psi_{3},\varphi_{3})}{\int_{-2s_{1}}^{+\infty}c(t )e^{-t}dt-\int_{-2a}^{+\infty}c(t)e^{-t}dt},\end{split}\] which implies \[\lim_{s\to a+0}\frac{G(-2s;\psi_{3},\tilde{\varphi}_{3})-G(-2a;\psi_{3}, \tilde{\varphi}_{3})}{\int_{-2s}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^{+\infty}c(t )e^{-t}dt}\geq\lim_{s_{1}\to a-0}\frac{G(-2s_{1};\psi_{3},\varphi_{3})-G(-2a; \psi_{3},\varphi_{3})}{\int_{-2s_{1}}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^{+ \infty}c(t)e^{-t}dt}.\] As \(G(h^{-1}(r);\psi_{3},\tilde{\varphi}_{3})\) is concave on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\) and \(G(t;\psi_{3},\tilde{\varphi}_{3})=G(t;\psi_{3},\varphi_{3})\) for any \(t\geq-2a\), then we have \[\lim_{s\to a+0}\frac{G(-2s;\psi_{3},\tilde{\varphi}_{3})-G(-2a;\psi_{3}, \tilde{\varphi}_{3})}{\int_{-2s}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^{+\infty}c( t)e^{-t}dt}=\lim_{s_{1}\to a-0}\frac{G(-2s_{1};\psi_{3},\tilde{\varphi}_{3})-G(-2a; \psi_{3},\tilde{\varphi}_{3})}{\int_{-2s_{1}}^{+\infty}c(t)e^{-t}dt-\int_{-2a}^ {+\infty}c(t)e^{-t}dt}. \tag{5.8}\] As \(G(h^{-1}(r);\psi_{3},\tilde{\varphi}_{3})\) is linear on \([0,\int_{-2a}^{+\infty}c(t)e^{-t}dt]\) and \([\int_{-2a}^{+\infty}c(t)e^{-t}dt,\int_{0}^{+\infty}c(t)e^{-t}dt]\), Equality (5.8) deduces that \(G(h^{-1}(r);\psi_{3},\tilde{\varphi}_{3})\) is linear on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\). Then Theorem 2.7 shows that the two statements in Theorem 1.11 hold. ## 6. Proofs of Theorem 1.14, Theorem 1.15 and Example 1.16 In this section, we prove Theorem 1.14, Theorem 1.15 and Example 1.16. Proof of Theorem 1.14.: By Theorem 1.11 and Remark 1.12, \(G(-\log r)\) is linear on \([0,e^{2a}]\) and \([e^{2a},+\infty)\), but \(G(-\log r)\) is not linear on \([0,1]\). Following from Lemma 2.19, we get that \(-\log G(t)\) is not convex on \([0,+\infty)\). Now we prove Theorem 1.15. Proof of Theorem 1.15.: We prove Theorem 1.15 by contradiction: if not, then \(-\log G_{k}(t)\) is convex on \([0,+\infty)\) for any \(k\geq 0\). Denote that \[\tilde{G}_{k}(t)=G_{k}(2(k+1)t).\] By definition, \[\tilde{G}_{k}(t)\leq\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\}) \tag{6.1}\] holds for any \(t\geq 0\) and any \(k\geq 0\). Note that there exists \(F_{k,t}\in\mathcal{O}(\{G_{\Omega}(\cdot,z_{0})<-t\})\) such that \((F_{k,t}-1,z_{0})\in\mathcal{I}(2(k+1)G_{\Omega}(\cdot,z_{0}))_{z_{0}}\) and \[\tilde{G}_{k}(t)=\int_{\{G_{\Omega}(\cdot,z_{0})<-t\}}|F_{k,t}|^{2}. \tag{6.2}\] As \(\tilde{G}_{k}(t)\) is increasing with respect to \(k\) and \(\lim_{t\to+\infty}\tilde{G}_{k}(t)\leq\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\), we know that there exists a subsequence of \(\{F_{k,t}\}_{k\geq 0}\) denoted by \(\{F_{k_{l},t}\}_{l\geq 0}\), which uniformly converges to a holomorphic function \(F\) on \(\{G_{\Omega}(\cdot,z_{0})<-t\}\) on any compact subset of \(\{G_{\Omega}(\cdot,z_{0})<-t\}\). Since \((F_{k_{l},t}-1,z_{0})\in\mathcal{I}(2(k_{l}+1)G_{\Omega}(\cdot,z_{0}))_{z_{0}}\) and \(\{G_{\Omega}(\cdot,z_{0})<-t\}\) is connected, we know that \(F\equiv 1\). By Fatou's Lemma, inequality (6.1) and equality (6.2), we have \[\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\}) =\int_{\{G_{\Omega}(\cdot,z_{0})<-t\}}1\] \[=\int_{\{G_{\Omega}(\cdot,z_{0})<-t\}}\lim_{l\to+\infty}|F_{k_{l},t}|^{2}\] \[\leq\liminf_{l\to+\infty}\int_{\{G_{\Omega}(\cdot,z_{0})<-t\}}|F_ {k_{l},t}|^{2}\] \[=\liminf_{l\to+\infty}\tilde{G}_{k_{l}}(t)\] \[\leq\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\}),\] which shows \(\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})=\lim_{t\to+\infty}\tilde{G}_{k}(t)\). As \(-\log G_{k}(t)\) is convex on \([0,+\infty)\), we get that \(-\log\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\) is convex on \([0,+\infty)\). By the concavity of \(G_{0}(-\log r)\), \[G_{0}(2t)\geq e^{-2t}G_{0}(0),\] which implies that \[-2t-\log\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\leq-2t-\log G_{0}(2t)\leq-\log G _{0}(0)<+\infty. \tag{6.3}\] As \(-2t-\log\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\) is a convex function on \([0,+\infty)\), inequality (6.3) shows that \(-2t-\log\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\) is decreasing on \([0,+\infty)\). Hence, \(s(t):=e^{2t}\lambda(\{G_{\Omega}(\cdot,z_{0})<-t\})\) is increasing on \([0,+\infty)\). Combining Lemma 2.14, we get that \(s(t)\) is a constant function. Lemma 2.15 shows that \(\Omega\) is a disc, which contradicts to the assumption in Theorem 1.15. Thus, there exists \(k\geq 0\) such that \(-\log G_{k}(t)\) is not convex on \([0,+\infty)\). Finally, we prove Example 1.16. Proof of Example 1.16.: Let \(F\) be any holomorphic function on \(\{\psi<-t\}\) satisfying \(F^{(j)}(o)=j!a_{j}\), then \(F=\sum_{0\leq j\leq k}a_{j}z^{j}+\sum_{j>k}b_{j}z^{j}\) on \(\{\psi<-t\}\) (Taylor expansion). Thus, \[\int_{\{\psi<-t\}}|F|^{2} =\int_{\{2(k+1)\log|z|<-t\}}\bigg{|}\sum_{0\leq j\leq k}a_{j}z^{j }+\sum_{j>k}b_{j}z^{j}\bigg{|}^{2}\] \[\geq\int_{\{2(k+1)\log|z|<-t\}}\bigg{|}\sum_{0\leq j\leq k}a_{j}z ^{j}\bigg{|}^{2}\] \[=\sum_{0\leq j\leq k}\frac{a_{j}\pi}{j+1}e^{-\frac{j+1}{k+1}t},\] which shows that \(G(t)=\sum_{0\leq j\leq k}\frac{a_{j}\pi}{j+1}e^{-\frac{j+1}{k+1}t}\) for any \(t\geq 0\). Denote that \(h(t):=-\log G(t)\) on \([0,+\infty)\). Then we have \[h^{\prime\prime}(t)=\frac{\left(\sum_{0\leq j\leq k}c_{j}d_{j}e^{-d_{j}t} \right)^{2}-\left(\sum_{0\leq j\leq k}c_{j}d_{j}^{2}e^{-d_{j}t}\right)\left( \sum_{0\leq j\leq k}c_{j}e^{-d_{j}t}\right)}{\left(\sum_{0\leq j\leq k}c_{j}e^ {-d_{j}t}\right)^{2}},\] where \(c_{j}=\frac{a_{j}\pi}{j+1}\) and \(d_{j}=\frac{j+1}{k+1}\). By Cauchy-Schwarz inequality, \(h^{\prime\prime}\leq 0\). Note that exist \(j_{1}\) and \(j_{2}\) satisfying \(j_{1}\neq j_{2}\), \(a_{j_{1}}\neq 0\) and \(a_{j_{2}}\neq 0\). Since \(\frac{c_{j}d_{j}^{2}e^{-d_{j}t}}{c_{j}e^{-d_{j}t}}=d_{j}^{2}\) and \(d_{j_{1}}\neq d_{j_{2}}\), then \(h^{\prime\prime}(t)<0\) for any \(t\geq 0\). Thus, \(-\log G(t)\) is concave on \([0,+\infty)\). _Acknowledgements._ The authors would like to thank Dr. Zhitong Mi for checking the manuscript. The second author was supported by National Key R&D Program of China 2021YFA1003100 and NSFC-11825101.
2306.08108
A Quantum Fingerprinting Algorithm for Next Generation Cellular Positioning
The recent release of the third generation partnership project, Release 17, calls for sub-meter cellular positioning accuracy with reduced latency in calculation. To provide such high accuracy on a worldwide scale, leveraging the received signal strength (RSS) for positioning promises ubiquitous availability in the current and future equipment. RSS Fingerprint-based techniques have shown a great potential for providing high accuracy in both indoor and outdoor environments. However, fingerprint-based positioning faces the challenge of providing a fast matching algorithm that can scale worldwide. In this paper, we propose a cosine similarity-based quantum algorithm for enabling fingerprint-based high accuracy and worldwide positioning that can be integrated with the next generation of 5G and 6G networks and beyond. By entangling the test RSS vector with the fingerprint RSS vectors, the proposed quantum algorithm has a complexity that is exponentially better than its classical version as well as the state-of-the-art quantum fingerprint positioning systems, both in the storage space and the running time. We implement the proposed quantum algorithm and evaluate it in a cellular testbed on a real IBM quantum machine. Results show the exponential saving in both time and space for the proposed quantum algorithm while keeping the same positioning accuracy compared to the traditional classical fingerprinting techniques and the state-of-the-art quantum algorithms.
Yousef Zook, Ahmed Shokry, Moustafa Youssef
2023-06-13T19:54:26Z
http://arxiv.org/abs/2306.08108v1
# A Quantum Fingerprinting Algorithm for Next Generation Cellular Positioning ###### Abstract The recent release of the third generation partnership project, Release 17, calls for sub-meter cellular positioning accuracy with reduced latency in calculation. To provide such high accuracy on a worldwide scale, leveraging the received signal strength (RSS) for positioning promises ubiquitous availability in the current and future equipment. RSS Fingerprint-based techniques have shown a great potential for providing high accuracy in both indoor and outdoor environments. However, fingerprint-based positioning faces the challenge of providing a fast matching algorithm that can scale worldwide. In this paper, we propose a cosine similarity-based quantum algorithm for enabling fingerprint-based high accuracy and worldwide positioning that can be integrated with the next generation of 5G and 6G networks and beyond. By entangling the test RSS vector with the fingerprint RSS vectors, the proposed quantum algorithm has a complexity that is exponentially better than its classical version as well as the state-of-the-art quantum fingerprint positioning systems, both in the storage space and the running time. We implement the proposed quantum algorithm and evaluate it in a cellular testbed on a real IBM quantum machine. Results show the exponential saving in both time and space for the proposed quantum algorithm while keeping the same positioning accuracy compared to the traditional classical fingerprinting techniques and the state-of-the-art quantum algorithms. Cellular positioning systems, quantum computing, quantum position determination, next generation positioning systems, 5G and 6G positioning. ## I Introduction Nowadays, location determination services are crucial for many applications in both outdoor [1, 2] and indoor [3, 4, 5] environments; such as emergency services, navigation, location-based analytics, among many others. Supporting various positioning methods to provide accurate user equipment (UE)'s position has been one of the main features of the 3rd generation partnership project (3GPP) [6]. Release-17 further provides support for improved positioning in specific use cases such as factory automation by targeting sub-meter accuracy. In addition, Release-17 also introduces enhancements to latency reduction, enabling positioning in time-critical use cases such as remote-control applications [6]. Although different signals have been introduced for positioning [7, 8, 9] such as round trip time, downlink and uplink time difference of arrival, and angle of arrival and departure, their current and future **ubiquitous** deployment in cellular equipment is hard to be achieved. In contrast, the received signal strength (RSS) measurements are available in all cellular equipment to help in different decisions, e.g., handoff. Therefore, RSS-based positioning techniques can provide a basis for ubiquitous cellular positioning on a worldwide scale for both indoor and outdoor environments. Fingerprinting-based positioning is one of the mainstream technologies for RSS-based positioning [10, 11, 12, 13, 14, 15, 16, 17] due to its accuracy that can meet the recent Release-17 requirements. The fingerprint-based positioning technique is generally composed of two main phases: the offline phase and the online phase. In the offline phase, the signals received from the different base stations (BS) are scanned at different known locations in the environment. To construct the fingerprint, the received signal strength values (RSS's) are stored in a database along with the user's location for each RSS value. Then, in the online phase, the online heard RSS at this time is matched with the collected fingerprint records in the database. Finally, the estimated location of the UE is the location in the fingerprint database that has the highest matching score with the current heard RSS. The number of fingerprint records collected and the number of base stations affect the overall positioning accuracy: the higher the fingerprint locations and the BSs number are, the more precise the positioning will be [16, 17]. However, the time needed to match the heard RSS with the fingerprint data also significantly increases with the number of fingerprint records and BSs. Strictly speaking, the classical fingerprint-based positioning systems (e.g. [10, 11, 12, 13, 16, 17]) need \(o(MN)\) space and their matching process runs in \(o(MN)\), i.e. quadratic complexity, where \(N\) is the number of BSs in the environment and \(M\) is the number of locations in the fingerprint. This complexity hinders both the scalability and accuracy of the current positioning systems to be deployed on a worldwide scale. Recently, quantum fingerprinting positioning techniques have been proposed to overcome the classical techniques' limitations [18, 19, 20, 21, 22]. They can achieve \(o(M\log(N))\) time and space complexity, i.e. **sub-quadratic complexity**. In this paper, we propose a cosine similarity-based quantum algorithm that achieves \(o(\log(MN))\) time and space complexity, i.e. **sub-linear complexity**, providing a promising technique that can scale to the huge number of BSs and fingerprinting locations for worldwide next generation cellular positioning. This is exponentially better than its classical counterpart in both the number of BSs (\(N\)) and the size of the fingerprint (\(M\)). Moreover, it is exponentially better than the current quantum positioning algorithms in the fingerprint size (\(M\)). Evaluation of the proposed quantum algorithm in a real cellular testbed using a real IBM quantum machine shows that it can achieve the same accuracy as the classical techniques. This comes with a better than exponential reduction in time and space requirements, i.e. the time and space complexity for the proposed quantum algorithm is \(o(\log(MN))\) compared to \(o(MN)\) complexity for the classical counterpart, where \(M\) is the number of fingerprint locations and \(N\) is the number of BSs. Moreover, we compare the proposed algorithm with the state-of-the-art quantum positioning algorithms [18, 19, 20, 21, 22] and the classical techniques using a quantum machine simulator on a larger testbed. The results show that the proposed quantum algorithm can provide further exponential saving in the number of fingerprint locations, \(M\), for both time and space. The rest of the paper is organized as follows: in Section II, we discuss related work. Then in Section III, we give a background on quantum computing. After that, we discuss the proposed quantum positioning algorithm and quantum circuit implementation in Section IV. Then, we evaluate our system against other classical and quantum positioning systems in Section V. Finally, we conclude our work in Section VI. ## II Related Work In this section, we discuss the different classical and quantum positioning algorithms. ### _Classical Positioning Systems_ Many classical algorithms have been developed for cellular-based positioning for both indoors [3, 11, 12, 13] and outdoors [14, 15, 23, 2, 10]. Generally, cellular-based positioning algorithms are based on the Cell-ID, where the UE position is estimated using the serving cellular base station coordinates; using time-based and angle-based techniques, [7, 8, 9]; or hybrid techniques [24, 25]. However, these methods depend on using the network infrastructure such as the BSs' coordinates or using external hardware (e.g. antenna array). On the other hand, using the received signal strength (RSS) values does not require extra hardware and is available from both the BS and the UE, making it more widely deployable than other methods [10, 11, 12, 13, 14, 15]. Compared to propagation model-based systems, the majority of RSS-based algorithms are based on fingerprint-matching techniques to provide higher accuracy [16, 17, 23, 10]. Those algorithms often use distance or similarity-based measurements like Euclidean distance, and cosine similarity [17, 26, 27]. For example, [15] provides a cellular-based positioning algorithm where a RSS vector is collected from different cellular towers for each location and stores the vectors in the fingerprint database. Then the unknown location is estimated by averaging the k-nearest fingerprint locations based on the Euclidean distance. Other systems use probabilistic techniques [16, 10] where they store signal information distribution in the fingerprint during the offline phase, and try to estimate the most probable location in the online phase using this information. A common functionality between these different techniques is the need for matching the UE's RSS measurements in the online phase with the fingerprint data collected in the offline phase, which takes \(o(MN)\) in both time and space. In contrast, this paper proposes a cosine similarity fingerprinting _quantum algorithm_ that takes \(o(\log(MN))\) time and space. ### _Quantum Positioning Systems_ Quantum algorithms have shown exponential speed gain in different areas recently. They leverage quantum properties to enhance the classical algorithms' time and space. Examples include the Grover's algorithm [28] that provides an unstructured search technique in \(o(\sqrt{n})\) instead of \(o(n)\) provided by the classical version. Similarly, the well-known Shor's algorithm uses quantum computing to efficiently factor large numbers in polynomial time, raising the possibility of breaking the commonly-used RSA encryption technique [29]. Quantum algorithms for positioning have gained momentum recently [18, 19, 20, 21, 22, 30, 31]. In [30], the author proposes a quantum version of the classical GPS that can provide a user with his/her coordinates. However, it leverages the quantum entanglement for clock synchronization. In [31], the authors discuss using quantum inertial sensors to locate the user equipment. However, the previous quantum techniques require special hardware and synchronization. In contrast, the authors of [20], propose a device-independent quantum fingerprint-matching algorithm that can work for heterogeneous standard WiFi devices with space and running time complexity of \(o(M\log(N))\). In [21], the authors discuss the challenges and opportunities of using quantum computing in positioning techniques. In [18, 19, 22], the authors propose a cosine similarity-based quantum algorithm for a positioning system that is exponentially faster than its classical version in the dimension of the number of BSs in the space and time complexity (\(o(M\log(N))\)). Unlike the previous quantum positioning algorithms, the proposed quantum algorithm pushes the space and time complexity to be better than exponentially more efficient than the state-of-the-art quantum algorithms with a time and space complexity of \(o(\log(MN))\). ## III Quantum Computing Background and Notation In this section, we will provide a background on the quantum computing basic concepts that will be used in our algorithm. Quantum computing [32] is an area of computing based on quantum mechanics theories. It mixes three fields: mathematics, physics and computer science. The basic unit of processing in quantum computing is the _qubit_. Qubits are the quantum version of the classical bits that are used to store information. Similar to classical registers, quantum registers are used to hold a group of qubits. A qubit can be presented as a photon polarization or an electron spin, which allows a qubit to be in any state of an infinite number of states between zero and one. This phenomenon is called _quantum superposition_. It can be leveraged along with other quantum properties to speed universal classical computations. Mathematically, the state of a qubit can be represented as a vector using the _Dirac Notation_[33]. Ket (\(\left|.\right\rangle\)) is the Dirac's name for a _column_ vector, and it is used to represent a qubit state \(\left|\phi\right\rangle\) as vector of sums of the basis states \(\left|0\right\rangle=\begin{bmatrix}1\\ 0\end{bmatrix},\left|1\right\rangle=\begin{bmatrix}0\\ 1\end{bmatrix}\) as \(\left|\phi\right\rangle=\gamma\left|0\right\rangle+\eta\left|1\right\rangle= \begin{bmatrix}\gamma\\ \eta\end{bmatrix}\), where \(\gamma\) and \(\eta\) are complex numbers representing the probability amplitudes of measuring the qubit with value 0 and value 1 respectively [32], i.e. \(\left|\gamma\right|^{2}+\left|\eta\right|^{2}=1\). On the other hand, a row vector is represented as a Bra (\(\left\langle.\right|\)), for example, if we want to represent the transpose of ket \(\left|\phi\right\rangle\), we can write it as \(\left\langle\phi\right|=\begin{bmatrix}\gamma&\eta\end{bmatrix}\). The dot product of two vectors can be written as a bra-ket in Dirac's notation, e.g., \(\left\langle\psi|\phi\right\rangle\). For \(n\) qubits, we can use the tensor product symbol (\({}^{\otimes}\)). For example, we use \(\left|0\right\rangle^{\otimes n}\) to mean \(n\) qubits in the state \(\left|0\right\rangle\). A quantum algorithm is represented as a quantum circuit, composed of _quantum gates_. Quantum gates are similar to classical gates but are applied on qubits and are represented mathematically as unitary matrices. For example, the NOT gate (X-gate) can be represented as a unitary matrix \(\text{NOT}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\). Hence, for a state \(\left|\phi\right\rangle=\gamma\left|0\right\rangle+\eta\left|1\right\rangle\), applying the NOT operation leads to the state: NOT \(\left|\phi\right\rangle=\eta\left|0\right\rangle+\gamma\left|1\right\rangle\). Another example is the Hadamard gate (H-gate) which produces an equal superposition state of \(\left|0\right\rangle\) and \(\left|1\right\rangle\) when it acts on a single qubit in either of the basis states, i.e. it maps the basis states \(\left|0\right\rangle\) to \(\frac{1}{\sqrt{2}}(\left|0\right\rangle+\left|1\right\rangle)\), and \(\left|1\right\rangle\) to \(\frac{1}{\sqrt{2}}(\left|0\right\rangle-\left|1\right\rangle)\). The H gate can be represented as \(\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\). Figure 1 shows an example of a quantum circuit with a single qubit. Similarly, gates can also be applied to multiple qubits. For example, the SWAP gate exchanges the quantum state of two qubits as shown in Figure 2. _Quantum entanglement_ between qubits means that there is a dependency between them. For example, if we have two qubits and one qubit is measured and collapses to a specific state, then the other one will immediately jump to a certain state depending on the measured value of the first one. This can be achieved, e.g., using multi-qubit gates. An example of these gates is the Controlled NOT (CNOT) gate shown in Figure 3, where the control qubit controls the NOT operation on the target qubit. This entanglement phenomenon is used in different quantum applications like quantum teleportation [34] and superdense coding [35]. We use the quantum entanglement phenomenon to add dependency between the collected cellular RSS vectors at known locations (i.e. the fingerprint) and the UE's RSS vector at an unknown location. _Quantum interference_ refers to changing the probability amplitudes of a certain qubit using quantum gates. We use this to bias the probabilities of certain qubits to reflect the cosine similarity between the UE's RSS at an unknown location and the fingerprint. ## IV The Quantum Positioning Algorithm In this section, we discuss the details of the proposed quantum cellular positioning algorithm. The basic idea behind the proposed algorithm is to calculate the quantum cosine similarity between the online RSS vector at an unknown location and each RSS vector in the cellular fingerprint. This can be achieved by putting the fingerprint samples into a superposition quantum state and entangling them with the UE's RSS vector by applying a sequence of quantum gates. We start by discussing the quantum circuit for positioning. Then, we explain the implementation details of the circuit. Fig. 1: A quantum circuit that inverts a single qubit and puts it into a superposition state. The \(X\) block represents the NOT gate which inverts the qubit state (\(\left|0\right\rangle\rightarrow\left|1\right\rangle\)), the \(H\) block represents the Hadamard gate which changes the qubit state \(\left|1\right\rangle\) into an equal superposition state (\(\left|1\right\rangle\rightarrow\frac{1}{\sqrt{2}}(\left|0\right\rangle-\left|1 \right\rangle)\)), and the third block represents the measurement. Single lines carry quantum information while double lines carry classical information. Fig. 4: General system architecture for the fingerprint-based positioning technique. Fig. 3: An example of a controlled gate. Qubit “\(control\)” controls the NOT operation on another qubit, “\(target\)”. The target qubit will be negated \(\Longleftrightarrow\) the control is measured to be 1. Finally, we give a numerical example that explains the steps of the proposed algorithm. ### _Positioning Algorithm_ The proposed quantum algorithm works in two phases: the offline quantum fingerprint building phase and the online user tracking phase as shown in Figure 4. In the offline phase, we collect the fingerprint RSS data at known ground-truth (GT) locations. Each fingerprint sample contains the RSS vector from different BSs, e.g. cellular towers, in the environment and the location where the RSS vector is collected. In the online phase, the user's current location is queried based on the signals heard by the user equipment (UE). The UE scans the set of RSS from the different base stations in the environment and passes it along with the fingerprint to the quantum cosine similarity algorithm to find the cosine similarities between the current UE's RSS and the collected fingerprint data at each location. Finally, the fingerprint location with the highest similarity score is returned as the estimated location. Without loss of generality, assume that we have \(N\) base stations that can be heard at \(M\) fingerprint locations. Also, assume that the online normalized RSS vector is \(\psi\) and the offline normalized RSS vector at location \(j\) is \(\phi_{j}\), where \(j\in\left\{0,..,M-1\right\}\). The cosine similarity between \(\psi\) and \(\phi_{j}\) for each \(j\) is: \[\cos(\psi,\phi_{\mathbf{j}})=\big{|}\left<\psi|\phi_{\mathbf{j}}\right>\big{|}= \sum_{i=0}^{N-1}\psi_{j}\phi_{\mathbf{j_{i}}} \tag{1}\] The quantum circuit in Figure 5 calculates the cosine similarity between \(\psi\) and **all \(\mathbf{\phi}_{j}\)'s in parallel**. It consists of three stages: the Initialization, Swap Test, and Measurement stages. #### Iii-A1 The Initialization Stage The input to the circuit is four quantum registers: a single ancilla qubit, an \(n\)-qubits register for encoding the UE's collected RSS sample \(\ket{\psi}\), another \(n\)-qubits register to encode the fingerprint data \(\ket{\phi}\) at each fingerprint location, and finally an \(m\)-qubits register to reflect the fingerprint location index \(\ket{i}\). Initially, all registers are in the following state, \[\ket{\gamma_{0}}=\ket{0}\ket{0}^{\otimes n}\ket{0}^{\otimes n}\ket{0}^{ \otimes m} \tag{2}\] Where \(n=\log(N)\) and \(m=\log(M)\). The first step during the initialization is to convert the classical RSS data to quantum data. To do this for the current UE's RSS vector, we apply the oracle \(O_{\psi}\) to the \(n\)-qubits register in the zero state \(\ket{0}^{\otimes n}\), where \(O_{\psi}\) is a gate/circuit that converts the register to the state \(\ket{\psi}\) as shown in Equation 3. We explain how to implement \(O_{\psi}\) later in this section. \[O_{\psi}\ket{0}^{\otimes n}=\ket{\psi} \tag{3}\] Similarly, we encode the RSS vector at each fingerprint location \(j\) with its index using the Hadamard gate and oracle \(O_{\phi}\). The Hadamard gate converts the index register \(\ket{i}\) to a superposition state with equal probabilities as shown in Equation 4. \[H\ket{0}^{\otimes m}=\frac{1}{\sqrt{M}}\sum_{j=0}^{M-1}\ket{j}=\ket{i} \tag{4}\] The oracle \(O_{\phi}\) is another gate/circuit that converts qubits to \(\ket{\phi_{j}}\) entangled with the index register as \[O_{\phi}\ket{0}^{\otimes n}\ket{i}=\frac{1}{\sqrt{M}}\sum_{j=0}^{M-1}\ket{ \phi_{j}}\ket{j} \tag{5}\] where \(j\) represents numbers from 0 to \(M-1\) (index of fingerprint data at location \(j\) with size \(M\)). We give the details of the \(O_{\phi}\) oracle later in this Section. After the initialization stage, the system becomes in the following state, Fig. 5: The quantum circuit to calculate the cosine similarity between UE’s RSS sample vector \(\ket{\psi}\), and all fingerprint records in parallel \(\ket{\phi}\). \(\ket{\gamma_{i}}\) represents the joint system state at different positions in the circuit. \[\left|\gamma_{1}\right\rangle=\frac{1}{\sqrt{M}}\sum_{j=0}^{M-1}\left|0\right\rangle \left|\psi\right\rangle\left|\phi_{j}\right\rangle\left|j\right\rangle \tag{6}\] #### Iii-B2 The Swap Test Stage The next step is to do a swap test [36]. The goal is to entangle the ancilla qubit, the UE's RSS sample register and the fingerprint register to calculate the similarity score in parallel for all fingerprint locations. We start by applying the Hadamard gate to the ancilla qubit, which leads to the following state \[\left|\gamma_{2}\right\rangle=\frac{1}{\sqrt{2M}}\sum_{j=0}^{M-1}(\left|0 \right\rangle+\left|1\right\rangle)\left|\psi\right\rangle\left|\phi_{j} \right\rangle\left|j\right\rangle \tag{7}\] Then, the ancilla is entangled with \(\left|\psi\right\rangle\) and \(\left|\phi\right\rangle\) registers using a controlled swap gate which leads to the following state, \[\left|\gamma_{3}\right\rangle=\frac{1}{\sqrt{2M}}\sum_{j=0}^{M-1}(\left|0 \right\rangle\left|\psi\right\rangle\left|\phi_{j}\right\rangle\left|j \right\rangle+\left|1\right\rangle\left|\phi_{j}\right\rangle\left|\psi \right\rangle\left|j\right\rangle) \tag{8}\] The last step in the swap test is applying the Hadamard gate again to the ancilla qubit which leads to the following final state, \[\left|\gamma_{4}\right\rangle=\frac{1}{2\sqrt{M}}\sum_{j=0}^{M-1}(\left|0 \right\rangle\left[\left|\psi\right\rangle\left|\phi_{j}\right\rangle+\left| \phi_{j}\right\rangle\left|\psi\right\rangle] \tag{9}\] \[+\left|1\right\rangle\left[\left|\psi\right\rangle\left|\phi_{j} \right\rangle-\left|\phi_{j}\right\rangle\left|\psi\right\rangle\right])\left|j\right\rangle\] #### Iii-B3 The Measurement Stage The final stage is the measurement stage, where we measure the ancilla qubit conditioned on the index register being in state \(\left|j\right\rangle\), i.e. \(p(a|i=j)\), where \(j\) represents the index number in the set \(\{0,..,M-1\}\). This probability, \(p(a|i=j)\), is a function of the required cosine similarity between the UE's RSS sample and fingerprint sample \(j\). In particular, to find this probability \(p(a|i=j)\); we measure the index register first (since the probability is conditioned on the index value). This moves the state of the unmeasured quantum system to (see Appendix A-A for details): \[\frac{1}{2}(\left|0\right\rangle\left[\left|\psi\right\rangle\left|\phi_{j} \right\rangle+\left|\phi_{j}\right\rangle\left|\psi\right\rangle)+\left|1 \right\rangle\left[\left|\psi\right\rangle\left|\phi_{j}\right\rangle-\left| \phi_{j}\right\rangle\left|\psi\right\rangle] \tag{10}\] Then, we can calculate the probability that the ancilla is zero given that the index is \(j\) as follows (this can be obtained by normalizing the \(\psi\) and \(\phi\) states, see Appendix A-B for details): \[\begin{split} p(a=0|i=j)=\big{(}\frac{1}{2}\times\sqrt{2+2\big{|} \left\langle\psi|\phi_{j}\right\rangle\big{|}^{2}}\big{)}^{2}\\ =\frac{1}{2}+\frac{1}{2}\big{|}\left\langle\psi|\phi_{j}\right\rangle \big{|}^{2}\end{split} \tag{11}\] Hence, the cosine similarity can be obtained as: \[\mathit{cos}(\psi,\phi_{j})=\big{|}\left\langle\psi|\phi_{j}\right\rangle \big{|}=\sqrt{2\times p(a=0|i=j)-1} \tag{12}\] where \(p(a=0|i=j)\) is the conditional probability of measuring the ancilla qubit to be \(0\) conditioned on that the index register \(i\) is equal to \(j\), where \(j\in\{0,..,M-1\}\), and \(M\) is the size of the fingerprint. The probability \(p(a=0|i=j)\) can be found by running the circuit for \(K\) times (the number of shots in the quantum terminology) and calculating \[p(a=0|i=j)=\frac{\text{count}(a=0\cap i=j)}{\text{count}(i=j)} \tag{13}\] The estimated location is the location of the fingerprint sample \(j\) with the highest cosine similarity, i.e. \(j=\operatorname{argmax}_{j}(cos(\psi,\phi_{j}))\). Since this cosine similarity is directly proportional to count(\(a=0\cap i=j\)) (see Appendix B for details), then the estimated location is the location of the fingerprint sample that has the maximum count of measuring the ancilla qubit output as \(0\). Algorithm 1 summarizes the proposed quantum positioning algorithm. ``` 0: - Two \(n\)-qubits quantum registers \(\left|\phi\right\rangle\) and \(\left|\psi\right\rangle\), where \(\left|\phi\right\rangle\) is used to hold the fingerprint data, and \(\left|\psi\right\rangle\) is used to hold the test sample, \(n=\log(N)\), \(N\) is the number of BSs. - An ancilla qubit, \(\left|a\right\rangle=\left|0\right\rangle\). - A quantum register with \(m\) qubits, \(\left|i\right\rangle\), that holds the index value. \(m=\log(M)\), where \(M\) is the number of fingerprint samples. - Number of shots \(K\). - gt_loc[]: the ground truth locations from the fingerprint data. 0: The user equipment position. 1: \(\text{counts}[]\leftarrow\text{zeros}(M)\)\(\triangleright\) Array to count \(\left|a\right\rangle=0\) at each fingerprint index 2: \(\text{max\_count}\leftarrow\text{0}\) 3: \(\text{max\_index}\leftarrow\text{0}\) 4:for\(k\gets 1\) to \(K\)do 5:\(\prime\)\(\prime\)\(\prime\) Initialization stage */ 6:\(\left|\psi\right\rangle\leftarrow\text{Apply }O_{\psi}(\left|0\right\rangle^{\otimes n})\)\(\triangleright\) UE's sample initialization 7:\(\left|i\right\rangle\leftarrow\text{Apply }H(\left|0\right\rangle^{\otimes m})\)\(\triangleright\) Index initialization 8:\(\left|\phi\right\rangle\leftarrow\text{Apply }O_{\phi}(\left|0\right\rangle^{\otimes n},\left|i\right\rangle)\)\(\triangleright\) Fingerprint initialization /* Swap Test stage */ 9: \(\text{Apply }H(\left|a\right\rangle)\) 10: \(\text{Apply }CSWAP(\left|a\right\rangle,\left|\psi\right\rangle,\left|\phi \right\rangle)\) 11: \(\text{Apply }H(\left|a\right\rangle)\) /* Measurement stage */ 12:\(j\leftarrow\text{measure}(\left|i\right\rangle)\)\(\triangleright\) j is the measured value of the index register \((\left|i\right\rangle)\) 13:if\(\text{measure}(\left|a\right\rangle)=0\)then 14:\(\text{counts}[j]\leftarrow\text{counts}[j]+1\)\(\triangleright\) count(\(a=0\cap i=j\)) 15:\(\text{if\ count}[j]>\text{max\_count}\)then 16:\(\text{max\_count}\leftarrow\text{counts}[j]\) 17:\(\text{max\_index}\gets j\) 18:return gt_loc[\(\text{max\_index}\)] ``` **Algorithm 1**\(o(\log MN)\) Quantum Positioning ### _Quantum Circuit Implementation_ The quantum circuit in Figure 5 contains two oracles. The first oracle, \(O_{\psi}\), is used to initialize the qubit register with the UE's current RSS values as shown in Equation 3. Specifically, given a classical RSS vector \(\begin{bmatrix}\alpha_{0}&\alpha_{1}&...&\alpha_{N-1}\end{bmatrix}\), the goal of the oracle is to convert the vector \(\begin{bmatrix}0&0&...&0\end{bmatrix}\) to \(\begin{bmatrix}\alpha_{0}&\alpha_{1}&...&\alpha_{N-1}\end{bmatrix}\), i.e., encode the classical RSS vector into the probability amplitudes of the \(n\)-qubits register \(\ket{\psi}\). This amplitude encoding can be done generally using quantum state preparation [37, 38], where rotational gates are applied to load the required amplitudes into the qubits. Note that an \(N\)-dimensional classical RSS vector can be encoded in \(n=log(N)\)-dimensional quantum register, which is an exponential saving is space. As a simple example, a classical 2D RSS vector \(\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\) can be encoded into a single qubit as \(\alpha\ket{0}+\beta\ket{1}\). This can be achieved by moving the state of a qubit in state \(\ket{0}\) to be in state \(\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\) by rotating the qubit with angle \(\theta\) around the Y-axis, where \(\alpha=\cos\frac{\theta}{2}\) and \(\beta=\sin\frac{\theta}{2}\). This can be achieved using the \(R_{y}(\theta)\) gate for rotation around the Y-axis of the Bloch sphere [39]. In a similar manner, the oracle \(O_{\phi}\) is not only responsible for initializing the fingerprint register with the fingerprint data, it is also responsible for entangling the fingerprint data with the index so that it becomes in state \(\frac{1}{\sqrt{M}}\sum_{j=0}^{\mathcal{H}-1}\ket{\phi_{j}}\ket{j}\) as shown in Equation 5. The goal is to enter all fingerprint locations to the quantum circuit so that the similarity calculations can be performed in parallel. To do this we can use quantum state preparation [37, 38] to prepare the registers with \(\frac{1}{\sqrt{M}}\begin{bmatrix}\alpha_{0,0}&..&\alpha_{0,N-1}&..&\alpha_{M -1,N-1}\end{bmatrix}^{T}\), where \(M\) is the number of fingerprint locations, and \(N\) is the number of BSs. Note also that a fingerprint with data from \(N\) BS's at \(M\) locations can be stored in a quantum registers of size \(n+m\), where \(n=log(N)\) and \(m=log(M)\), as compared to the classical fingerprint size of \(N\times M\). ### _Example_ In this section, we give a simple numerical example of the proposed quantum circuit that is used to get the cosine similarity between the UE's RSS sample vector (\(\psi\)) and a fingerprint at two different locations (\(\phi_{j}\), \(j\in\{0,1\}\), \(M\)=2). Each RSS vector has the RSS from two different BS's (\(N\)=2). All vectors are unit vectors. Figure 6 shows the quantum circuit used to obtain the cosine similarity between the online UE's RSS sample vector \(\psi=[0.899\quad 0.437]\) (encoded in a quantum register \(\ket{\psi}\)), and each fingerprint vector \(\phi_{j}\) at each index \(j\), where \(\phi_{0}=[0.800\quad 0.599]\) and \(\phi_{1}=[0.543\quad 0.839]\) (encoded in a quantum register \(\ket{\phi}\)). Oracle \(O_{\psi}\) is implemented using a rotational gate around the Y-axis (\(R_{y}\)) to set the UE's RSS sample register state to be \(\ket{\psi}=\begin{bmatrix}0.899\\ 0.437\end{bmatrix}\). This is done by applying a rotation with angle \(\theta_{\psi}=2\times\arctan(\frac{b}{a})\), where \(a\),\(b\) are the required probability amplitudes of the test sample \(\ket{\psi}\), i.e. \(a=0.899\) and \(b=0.437\), which results in \(\theta_{\psi}=0.905\). Oracle \(O_{\phi}\) is implemented with the same idea as Oracle \(O_{\psi}\), but here we are initializing the fingerprint register depending on the index value. So to initialize the fingerprint register with the first training sample \(\phi_{0}=[0.800,0.599]\), we need to apply an \(R_{y}\) gate with \(\theta_{\phi_{0}}=2\times\arctan(\frac{0.599}{0.800})=1.285\), and to initialize it with the second training sample \(\phi_{1}=[0.543,0.839]\), we need to apply another \(R_{y}\) gate with \(\theta_{\phi_{1}}=2\times\arctan(\frac{0.839}{0.543})=1.992\). One way to do that is by finding the difference between the two rotations \(\text{diff}=(\theta_{\phi_{0}}-\theta_{\phi_{1}})\) and then applying the rotation in two steps, first we rotate the register with \(\theta_{\phi_{0}}-\frac{\text{diff}}{2}=1.638\); then if the index register has value \(0\), we add this difference again by applying rotation with angle \(\frac{\text{diff}}{2}=-0.353\); and if it has value \(1\), we should apply the same rotation but in the opposite direction to reach the state \(\langle\phi_{1}|=[0.543,0.839]\). We use CNOT to flip the rotation direction controlled on the index register as shown in the initialization block in Figure 6. We ran this circuit with number of shots equals 1024 (\(K=1024\)), and we counted the cases where index register equals 0 (i.e. \(\text{count}(i=0)\)) and the cases where it equals 1 (i.e. \(\text{count}(i=1)\)), and for each case we counted the number of cases where ancilla qubit equals 0 (i.e. \(\text{count}(a=0\cap i=j)\)). We measured \(\text{count}(a=0\cap i=0)=502\), and \(\text{count}(a=0\cap i=1)=436\). Since \(\text{count}(a=0\cap i=0)>\text{count}(a=0\cap i=1)\), then we can estimate the UE's location as the location of \(\phi_{0}\) stored in the fingerprint. Fig. 6: Example quantum circuit for cosine similarity. ## V Evaluation In this section, we implement our quantum algorithm in a _real_ testbed and evaluate its performance side-by-side against its classical counterpart and the state-of-the-art quantum positioning algorithms. We start by describing our testbed. Then, we show the accuracy using a real IBM quantum machine in a small testbed followed by a larger scale experiment on the IBM quantum simulator. After that, we quantify the theoretical space and time complexity of the proposed algorithm. Finally, we present experiments to evaluate different practical aspects of the algorithm. We end the section with a discussion of the different aspects of the algorithm. ### _Testbed Setup_ We use a real cellular testbed in an \(0.2\)Km\({}^{2}\) urban area (Figure 7). The area is covered by \(21\) different cell-towers (\(N=21\)). In the offline phase, we use different Android devices to collect the fingerprint data at \(44\) different locations uniformly distributed over the area of interest. Each device has a data collector software that collects GPS ground-truth locations, the base stations received signal strengths (RSS), and timestamps. We also collect an independent dataset of another \(44\) samples to use as the online phase samples. In the online phase, the user's sample is sent to a quantum computer along with the fingerprint samples where the quantum circuit is run for \(K\) times. Then we retrieve the results and calculate the fingerprint index with the maximum similarity to the user's sample and return the location of this fingerprint sample as the estimated user location. ### _Accuracy Evaluation_ We evaluate the performance of our algorithm on a _real_ 5-qubits quantum IBM machine as well as the IBM quantum machine simulator. For the real quantum machine, we used the _ibmq_mainla_ machine with 5 qubits. Given the limited number of available qubits, we can only process a small testbed with two base stations (\(N=2\)) and four fingerprint locations (\(M=4\)): one qubit for the ancilla, one qubit for encoding the RSS from two base stations for the unknown testing location, three qubits for encoding the four fingerprint locations with their indices. To do that, we selected the two base stations that are most commonly heard in the testbed area as well as four fingerprint locations uniformly spread over the area. For simulation, we use the IBM Quantum Machine Simulator with the total testbed samples. Figure 8 shows a comparison between the positioning error distribution of our proposed quantum algorithm which runs in \(o(\log(MN))\) implemented on _ibmq_mainla_ real machine and simulator, the state-of-the-art quantum positioning algorithms [18, 19, 20, 21, 22] which runs in \(o(M\log(N))\), and the classical version of the cosine-similarity positioning algorithm which runs in \(o(MN)\). The figure confirms that the proposed quantum algorithm can achieve the same accuracy as the classical counterpart and state-of-the-art quantum algorithms. This comes with the exponential enhancement in space and time in both \(N\) and \(M\) compared to the classical version. Moreover, it comes with the exponential gain in \(M\) over the state-of-the-art quantum algorithms [18, 19, 20, 21, 22] as we quantify in Section V-C. The figure further _validates_ that the performance obtained from the simulator matches the performance of the real quantum machine, which we discuss next. To show the scalability of the algorithm, in the rest of this section we used the total testbed samples and implemented it over the IBM quantum machine simulator. Figure 9 shows the positioning error distribution for our proposed quantum algorithm, the state-of-the-art quantum positioning [18, 19, 20, 21, 22], and the classical algorithm. The figure confirms that our algorithm has the same accuracy as the state-of-the-art quantum algorithms and the classical version over the larger testbed. Figure 10 further compares the proposed quantum algorithm with the state-of-the-art quantum algorithms while increasing the number of fingerprint locations. The figure shows that, as expected, the higher the density of the fingerprint locations, the higher the accuracy will be. The figure also confirms that the proposed quantum algorithm can achieve the same accuracy as the \(o(M\log(N))\) algorithms while having an **exponential saving** in the number of shots (\(K\)) needed to run the circuit as we show in the next section. Figure 11, shows the median positioning error for the proposed quantum algorithm with the state-of-the-art algorithms at different numbers of base stations (\(N\)). The figure highlights that the higher the number of base stations used in the positioning, the better the accuracy will be. It shows also that our algorithm gives similar results as the state-of-the-art at different numbers of base stations but with the exponential time and space enhancement. ### _Complexity Analysis_ In general, the complexity of the fingerprint-based techniques depends on the number of fingerprint locations (\(M\)) and the number of BSs (\(N\)). The first stage of the proposed algorithm, the Initialization Stage, uses quantum state preparation techniques to initialize the quantum registers with the classical user sample and fingerprint data. Efficient state preparation techniques can be used for state preparation to load the data in logarithmic complexity, such as Quantum Random Access Memory (QRAM) [38], Fig. 7: Cellular outdoor testbed area. [40, 41], where a vector with size \(N\) can be loaded in \(o(\log(N))\)[40] in parallel into a qubit register and conditional rotations are performed to encode the vector data as amplitudes in the quantum registers. Therefore, loading \(M\) fingerprint vectors each with \(N\) values will require \(o(\log(MN))\) complexity. Moreover, quantum sensors [42, 43, 44, 45] are evolving over time and the proposed algorithm can leverage this development by taking the data as input directly from quantum sensors, which can make the quantum state preparation step complexity \(o(1)\). The second stage is the quantum similarity matching, which is done using the Swap Test where two \(H\) gates are used on the ancilla qubit, and \(\log(N)\) CSWAP gates are used to entangle the ancilla qubit, with the UE's RSS register and the fingerprint register. This leads to \(o(\log(N))\) complexity used in this stage. Finally, to find the fingerprint index with the maximum cosine similarity, we run the circuit for \(K\) times and observe the Fig. 11: Median positioning error for different numbers of base stations (N). Fig. 12: The complexity of \(o(M\log(N))\) algorithms [18, 19, 20, 21, 22] and our proposed \(o(\log(MN))\) quantum algorithms as a function of the fingerprint size (\(M\)) at different number of BSs (\(N\)). Fig. 10: Median positioning error for different numbers of fingerprint locations. Fig. 9: IBM Quantum Simulator: positioning error distributions comparison on a larger testbed for the different algorithms. circuit output to find the fingerprint index with the maximum count of measuring the ancilla qubit output as zero, i.e. finding index value \(j\) where \(count(a=0\cap i=j)\) is maximum. The index register length is \(\log(M)\), therefore, measuring the output of the register has a complexity of \(o(\log(M))\). Hence, the algorithm has an overall complexity of \(o(\log(MN))\), compared to the state-of-the-art quantum algorithms [18, 19, 20, 21, 22] that take \(o(M\log(N))\). Note that the proposed quantum algorithm complexity of \(o(\log(MN))\) is **more than exponentially better** than the other-state-of-the-art quantum algorithms complexity of \(o(M\log(N))\), as an exponential enhancement would reduce the complexity to \(o(\log(M)\log(N))\) only. Figure 12 shows the difference between the complexity of the state-of-the-art quantum algorithms (\(o(M\log(N))\)) [18, 19, 20, 21, 22] and our proposed quantum algorithm (\(o(\log(MN))\)) for different values of \(N\) and \(M\). The figure highlights that there is a significant better than-exponential gain of the proposed algorithm compared to the other state-of-the-art quantum algorithm. Figure 13 shows the number of shots \(K\) used for the state-of-the-art quantum algorithms [18, 19, 20, 21, 22] and our proposed algorithm at different number of fingerprint sizes (\(M\)). The figure highlights that the proposed algorithm can have an exponential saving in the required number of circuit runs. ### _Practical Considerations_ In this section, we evaluate some practical considerations of the proposed algorithm: the effect of machine noise on accuracy and the effect of the number of shots on the complexity. We use three quantum machines that have \(5\) physical qubits each: _ibmq_manila_, _ibmq_quito_, and _ibmq_lima_. Each quantum machine has different characteristics that can affect the overall accuracy [46, 47, 48]. To capture all these different characteristics in one metric, the quantum volume (QV) has been proposed [48, 49], which measures a quantum computer's performance taking into account gates' errors, measurement errors, quality of the circuit compiler, among others [32]. The higher the QV is, the less error-prone the machine is. Figure 14 shows that, as expected, the localization accuracy increases as the quantum volume increases (i.e. the quantum machine's noise is lower). Finally, Figure 15 compares the total number of operations required for our proposed quantum algorithm and its classical counterpart, considering number of shots \(K=2^{14}=16384\) for the quantum algorithm at different numbers of base stations (\(N\)). The black circles show the point where the quantum algorithm has the same number of operations as the classical version. The figure highlights that the proposed quantum algorithm can perform much better than the classical algorithm at high fingerprint data size and high number of base stations, taking into account the number of shots required in the quantum algorithm. ### _Discussion_ The cosine similarity-based **classical** positioning algorithm has a **quadratic** complexity (\(o(MN)\)) in both space and time, where \(M\) is the number of fingerprint locations and \(N\) is the number of BSs in the environment. In contrast, the proposed quantum positioning algorithm has a **sub-linear** complexity (\(o(\log(MN))\)) in space and time. Unlike the state-of-the-art quantum algorithms that need sub-quadratic space and time (\(o(M\log(N))\)), the proposed quantum algorithm sub-linear complexity offers an exponential enhancement in the number of fingerprint locations for both space and running time. The exponential improvement in the number of fingerprint locations (\(M\)) enables us to get more accurate positioning using larger fingerprint data. On the other hand, the exponential improvement in the number of BSs (\(N\)) enables us to build a fingerprint with a large-scale of heterogeneous BSs (e.g. cellular towers, WiFi APs, BLE), which has the potential of higher positioning accuracy. This can be used for different scenarios, e.g., where each device can be used as a reference point for positioning, as in intelligent transportation systems, connected and automated vehicles, and industrial internet of things (IIOT) applications. All of these are potential targets for 5G/6G high-accuracy low-latency positioning as defined by 3GPP Release-17 [6]. On the other hand, the storage space required for offline quantum fingerprint building is reduced exponentially, as the fingerprint size is reduced from \(o(MN)\) to \(o(\log(MN))\). Fig. 14: Effect of the quantum volume (QV) of different quantum machines on the positioning accuracy. Fig. 15: Number of operations required for our quantum algorithm at \(K=2^{14}\) vs the classical algorithm at different values of base stations \(N\). With the emergence of quantum co-processors (similar to GPUs) [50], the proposed quantum algorithm can be completely run on the user equipment, e.g., for privacy issues. For this, the fingerprint data can be downloaded from a server to the UE. In such cases, the proposed quantum algorithm saves both the storage space required on the UE as well as the required download bandwidth. We show experimental results for the algorithm implementation on a real quantum machine with \(5\) qubits. However, more advanced quantum machines are available with a higher number of qubits reaching up to \(433\) qubits as in the _ibm_seattle_ machine [51] (though not freely accessible). Given that the algorithm circuit needs \(1+\log(M)+2\log(N)\) qubits, this machine can accommodate a very large number of fingerprint samples and base stations. The quantum machines have different error characteristics due to different sources of noise such as the quantum decoherence, gates errors and readout error [46, 47, 48], and all need to be taken into consideration. However, The development of quantum computers is growing fast, allowing quantum algorithms to be more practical and feasible in the foreseeable future [52]. Furthermore, our results show that the number of shots \(K\) affects the overall positioning accuracy. Although the number of shots \(K\) is considered constant in the theoretical analysis, it should be taken into account in practical consideration since the required number of shots may be high. ## VI Conclusion In this paper, we have presented a cosine similarity-based quantum algorithm for enabling large-scale worldwide positioning. Unlike the classical techniques, which need \(o(MN)\) time and space, the proposed quantum algorithm requires \(o(\log(MN))\) time and space for a fingerprint with \(M\) locations and \(N\) BSs. We implemented the proposed algorithm on a real IBM quantum machine as well as a simulator and evaluated it in a real cellular outdoor testbed. We also compared the proposed algorithm with the state-of-the-art quantum algorithms for positioning, showed how the algorithm accuracy changes across different quantum machines with different noise profiles, quantified its complexity, and discussed its practicality. The proposed quantum algorithm can provide an exponential saving in both the number of fingerprint locations and the number of BSs, taking positioning systems a step toward a more accurate and ubiquitous positioning that can work on a worldwide scale and meet the requirements of the next generation 5G, 6G, and beyond. Currently, we are working on multiple research directions for positioning systems using quantum computing including exploring different quantum similarity metrics, using quantum computing techniques for floor detection, among others. ## Appendix A Derivations of Equation 10 and 11 ### _Derivation of Equation 10_ Before measuring the ancilla qubit and index register in circuit shown in Figure 5, the state of the quantum system is: \[\ket{\gamma_{4}}=\frac{1}{2\sqrt{M}}\sum_{j=0}^{M-1}\ket{\ket{ \psi}\ket{\phi_{j}}+\ket{\phi_{j}}\ket{\psi}}\] \[+\ket{1}\ket{\ket{\psi}\ket{\phi_{j}}-\ket{\phi_{j}}\ket{\psi}} \ket{\psi})\ket{j}\] which can be rewritten for simplicity as: \[\ket{\gamma_{4}}=\frac{1}{2\sqrt{M}}\sum_{j=0}^{M-1}\ket{\zeta_{j}}\ket{j}\] where \(\ket{\zeta_{j}}=\ket{0}\left[\ket{\psi}\ket{\phi_{j}}+\ket{\phi_{j}}\ket{\psi} \right]+\ket{1}\left[\ket{\psi}\ket{\phi_{j}}-\ket{\phi_{j}}\ket{\psi}\right]= \ket{0\psi\phi_{j}}+\ket{0\phi_{j}}\psi+\ket{1\psi\phi_{j}}-\ket{1\phi_{j}\psi}\). After measuring index register, the quantum system moves to the normalized state: \(\frac{1}{\left|\zeta_{j}\right|}\left|\zeta_{j}\right\rangle\), where \(\left|\zeta_{j}\right|\) is the euclidean norm of \(\left|\zeta_{j}\right\rangle\). The value of \(\left|\zeta_{j}\right|\) is calculated as follows: \[\left|\zeta_{j}\right|^{2}=\langle\zeta_{j}|\zeta_{j}\rangle\] \[=\langle 0\psi\phi_{j}|0\psi\phi_{j}\rangle+\langle 0\psi\phi_{j}|0 \phi_{j}\psi\rangle+\langle 0\psi\phi_{j}|1\psi\phi_{j}\rangle-\langle 0\psi\phi_{j}|1 \phi_{j}\psi\rangle\] \[+\langle 0\phi_{j}\psi|0\psi\phi_{j}\rangle+\langle 0\phi_{j}\psi|0 \phi_{j}\psi\rangle+\langle 0\phi_{j}\psi|1\psi\phi_{j}\rangle-\langle 0\phi_{j} \psi|1\phi_{j}\psi\rangle\] \[+\langle 1\psi\phi_{j}|0\phi_{j}\phi_{j}\rangle+\langle 1\psi\phi_{j}|0 \phi_{j}\psi\rangle+\langle 1\psi\phi_{j}|1\psi\phi_{j}\rangle-\langle 1\psi\phi_{j}|1 \phi_{j}\psi\rangle\] \[-\langle 1\phi_{j}\psi|0\psi\phi_{j}\rangle-\langle 1\phi_{j}\psi|0 \phi_{j}\psi\rangle-\langle 1\phi_{j}\psi|1\psi\phi_{j}\rangle+\langle 1\phi_{j} \psi|1\phi_{j}\psi\rangle\] And since the dot product of normalized vector by itself equals 1, and the dot product of orthogonal vectors \(\ket{0}\) and \(\ket{1}\) equals 0, we can say that: \[\left|\zeta_{j}\right|^{2}\] \[=1+\langle 0\psi\phi_{j}|0\phi_{j}\psi\rangle+0-0+\langle 0\phi_{j} \psi|0\psi\phi_{j}\rangle+1+0-0\] \[+0+0+1-\langle 1\psi\phi_{j}|1\phi_{j}\psi\rangle-0-0-\langle 1 \phi_{j}\psi|1\psi\phi_{j}\rangle+1\] And it can be proved mathematically that \(\langle 0\psi\phi_{j}|0\phi_{j}\psi\rangle=\langle 0\phi_{j}\psi|0\psi\phi_{j} \rangle=\langle 1\psi\phi_{j}|1\phi_{j}\psi\rangle=\langle 1\phi_{j}\psi|1\psi\phi_{j}\rangle\), hence we get \(\left\|\zeta_{j}\right\|=2\). Now we can write the quantum system state after measuring index register as in Equation 10: ### _Derivation of Equation 11_ To find the conditional probability \(p(a=0|i=j)\) we need to find the probability that the ancilla qubit equals zero. For simplicity, we can write the state obtained in Equation 10 as: \[\frac{1}{2}(\ket{0}\left[\ket{\psi}\ket{\phi_{j}}+\ket{\phi_{j}} \ket{\psi}\right]+\ket{1}\left[\ket{\psi}\ket{\phi_{j}}-\ket{\phi_{j}}\ket{ \psi}\right])\] \[=\frac{1}{2}(\left\|\eta_{0}\right\|\left|0\right\rangle\frac{ \left|\eta_{0}\right\rangle}{\left\|\eta_{0}\right\|}+\left\|\eta_{1}\right\| \left|1\right\rangle\frac{\left|\eta_{1}\right\rangle}{\left\|\eta_{1}\right\|})\] where \(\left|\eta_{0}\right\rangle=\left[\ket{\psi}\ket{\phi_{j}}+\ket{\phi_{j}}\ket{ \psi}\right]\) and \(\left|\eta_{1}\right\rangle\) \(=\left[\ket{\psi}\ket{\phi_{j}}-\ket{\phi_{j}}\ket{\psi}\right]\), and \(\left|\eta_{i}\right\|\) is the euclidean norm of \(\left|\eta_{i}\right\rangle\). The probability that the ancilla qubit is in state \(\left|0\right\rangle\) equals \(p(a=0|i=j)=(\frac{\left\|\eta_{0}\right\|}{2})^{2}\). Note that this is after measuring index register (conditional probability). The value of \(\left\|\eta_{0}\right\|\) is found as follows: \[\left\|\eta_{0}\right\|^{2}=\left\langle\eta_{0}|\eta_{0}\right\rangle\] \[=\left\langle\psi\phi_{j}|\psi\phi_{j}\right\rangle+\left\langle \psi\phi_{j}|\phi_{j}\psi\right\rangle+\left\langle\phi_{j}\psi|\psi\phi_{j} \right\rangle+\left\langle\phi_{j}\psi|\phi_{j}\psi\right\rangle\] \[=2+2\left\langle\psi\phi_{j}|\phi_{j}\psi\right\rangle=2+2\big{|} \left\langle\psi|\phi_{j}\right\rangle\big{|}^{2}\] So the probability of \(a=0\) given that \(i=j\) is (as shown in Equation 11): \[p(a=0|i=j)=(\frac{\left\|\eta_{0}\right\|}{2})^{2}=\frac{\left\| \eta_{0}\right\|^{2}}{4}\] \[=\frac{1}{4}(2+2\big{|}\left\langle\psi|\phi_{j}\right\rangle \big{|}^{2})=\frac{1}{2}+\frac{1}{2}\big{|}\left\langle\psi|\phi_{j}\right\rangle \big{|}^{2}\] ## Appendix B Proof that \(cos(\psi,\phi_{j})\propto\text{count}(a=0\cap i=j)\) Since the cosine similarity is directly proportional to \(p(a=0|i=j)\) (from Equation 12). And since \(p(a=0|i=j)=\frac{\text{count}(a=0\cap i=j)}{\text{count}(i=j)}\) (from Equation 13), then we can say that the cosine similarity is directly proportional to \(\text{count}(a=0\cap i=j)\) if \(\text{count}(i=j)\) is equal for all \(j\in\{0,..,M-1\}\). To prove that \(\text{count}(i=j)\) is equal for all \(j\in\{0,..,M-1\}\), we start from Equation 9, where the quantum system is in the state: \[\left|\gamma_{4}\right\rangle=\frac{1}{2\sqrt{M}}\sum_{j=0}^{M-1}\left|\zeta _{j}\right\rangle\left|j\right\rangle\] where \(\left|\zeta_{j}\right\rangle=\left|0\right\rangle\left[\left|\psi\right\rangle \left|\phi_{j}\right\rangle+\left|\phi_{j}\right\rangle\left|\psi\right\rangle \right]+\left|1\right\rangle\left[\left|\psi\right\rangle\left|\phi_{j}\right\rangle \left|\psi\right\rangle\right]=\left|0\psi\phi_{j}\right\rangle+\left|0\phi_{j} \psi\right\rangle+\left|1\psi\phi_{j}\right\rangle-\left|1\phi_{j}\psi\right\rangle\). As shown in Appendix A, the euclidean norm of \(\left|\zeta_{j}\right\rangle\) is \(\left\|\zeta_{j}\right\|=2\). Therefore, the system is in the normalized state: \[\left|\gamma_{4}\right\rangle=\frac{1}{\sqrt{M}}\sum_{j=0}^{M-1}\left(\frac{ 1}{2}\left|\zeta_{j}\right\rangle\right)\left|j\right\rangle\] which means that the probability of measuring the index register is \(p(i=j)=\frac{1}{M}\) for all values of \(j\in\{0,..,M-1\}\), i.e. \(\text{count}(i=j)\) is equal for all \(j\in\{0,..,M-1\}\), which proves that \(cos(\psi,\phi_{j})\propto\text{count}(a=0\cap i=j)\), i.e. \(\operatorname*{argmax}_{j}(cos(\psi,\phi_{j}))=\operatorname*{argmax}_{j}( \text{count}(a=0\cap i=j))\).
2305.06475
A Model for Translation of Text from Indian Languages to Bharti Braille Characters
People who are visually impaired face a lot of difficulties while studying. One of the major causes to this is lack of available text in Bharti Braille script. In this paper, we have suggested a scheme to convert text in major Indian languages into Bharti Braille. The system uses a hybrid approach where at first the text in Indian language is given to a rule based system and in case if there is any ambiguity then it is resolved by applying a LSTM based model. The developed model has also been tested and found to have produced near accurate results.
Nisheeth Joshi, Pragya Katyayan
2023-05-05T09:21:13Z
http://arxiv.org/abs/2305.06475v1
# A Model for Translation of Text from Indian Languages to Bharti Braille Characters ###### Abstract People who are visually impaired face a lot of difficulties while studying. One of the major causes to this is lack of available text in Bharti Braille script. In this paper, we have suggested a scheme to convert text in major Indian languages into Bharti Braille. The system uses a hybrid approach where at first the text in Indian language is given to a rule based system and in case if there is any ambiguity then it is resolved by applying a LSTM based model. The developed model has also been tested and found to have produced near accurate results. Bharti Braille, Indian Languages, Transliteration, LSTM, Deep Learning ## I Introduction Braille is a system of raised dots that can be felt with the fingertips and used to represent letters, numbers, and symbols. It was invented by Louis Braille, a French educator who was blind himself, in the early 19th century as a way for people who are blind or visually impaired to read and write. Braille consists of cells of six dots arranged in two columns of three dots each. Different combinations of dots represent different letters, numbers, and symbols. Braille is used worldwide as a standard writing system for people who are blind or visually impaired, and it is widely used for reading and writing in a variety of languages. The Braille system allows people who are blind or visually impaired to participate in activities such as reading, writing, and studying, which would otherwise be difficult or impossible for them, and it helps to break down barriers to information and education that are faced by this community. Bharti Braille is a form of Braille script used in India to write the languages of India in Braille script. It is based on the standard Braille script but includes additional characters to represent the unique sounds and characters found in Indian languages. The literary script is based on the standard Braille script but includes additional characters to represent the unique sounds and characters found in Indian languages. This includes characters for the retroflex sounds found in many Indian languages, as well as special characters to represent the unique conjunct consonants found in Indian languages. Text to Braille Conversion is the process of converting written text into a system of raised dots called Braille, which can be felt with the fingertips and used by people who are blind or visually impaired to read. The Braille system consists of cells of six dots arranged in two columns of three dots each. Different combinations of dots represent different letters, numbers, and symbols. Text to Braille Conversion is typically done using specialized software or hardware devices. The software or devices can either be simple transliteration tools that follow a set of rules for mapping letters to Braille cells, or more advanced systems that use machine learning algorithms to improve the accuracy of the Braille representation. Text to Braille Conversion systems are important because they provide a means for people who are blind or visually impaired to access written text and information. Braille is a tactile writing system that can be read by touch, and it allows people who are blind or visually impaired to participate in activities such as reading, writing, and studying, which would otherwise be difficult or impossible for them. Text to Braille Conversion systems make it possible to convert written text into Braille, so that it can be read by people who are blind or visually impaired, and they help to break down barriers to information and education that are faced by this community. In addition, Text to Braille Conversion systems can also be useful for proofreading Braille text, as well as for creating Braille materials for education, rehabilitation, and other purposes. ## II Literature review Hossain et al. [1] have identified rules and conventions for Bangla Braille translation based on rules. They proposed a DFA based computational model for MT, which gave acceptable translations. The results were tested by members of a visually impaired community. Al-Salman et al. [2] have built a Braille copier machine which produced Braille documents in their respective languages. The machine worked as both copier as well as printing system using optical recognition and techniques from image processing. Yamaguchi et al. [3] have highlighted the problem of accuracy while translating technical notations to Braille. To solve this problem, they have developed a assistive solution for people from STEM background who are not capable of printing. Damit et al. [4] have mediated a new way of interlinking keyboard inputs from translations to commonly used Braille characters. This enabled visually blessed people to interact with visually impaired people. Rupanagudi et al. [5] introduced a new technique of translating Kannada braille to Kannada language. They devised a new algorithm to segment Braille dots and identify characters. Choudhary et al. [6] have suggested a new approach for supporting communication amongst deaf-blind people. The technique included the use of smart gloves capable of translating Braille alphabets and can communicate the message via SMS. Due to this the user can convey simple messages using touch sensors. Guerra et al. [7] have developed a prototype using Java which can convert Braille text to digital text. Jariwala and Patel [8] have developed tool for translation of Gujarati, English and Hindi text to Braille and save it as a data file which can be directly printed via embosser. Saxena et al. [9] have provided a real-time solution (hardware and software) for helping blind people. They developed a Braille hand glove which helped in communication for sending and receiving messages in real time. Nam and Min [10] have developed a music braille converted capable of converting the musical notations such as octaves, key signature, tie repeat, slur, time signature etc. successfully to Braille. Park et al. [11] have suggested a method of automatic translation of scanned images of books to digital Braille books. They implemented character identification and recognized images in Books while automatically translating them to text. This method reduced the time and cost required for producing books in Braille. Alufaisan et al. [12] designed an application that identifies Braille numerals in Arabic and converts it to plain text using CNN-based Residual Networks. The system also gave speech assistance to the generated text. Apu et al. [13] proposed user and budget friendly braille device that can translate text and voice to braille for blind students. It works for different languages and converts based on 'text' or 'voice' command given by the user. Yoo and Baek [14] have proposed a device that can print braille documents for blind. They implemented a raspberry Pi camera to save documents as images stored in device. Characters were extracted from the images and converted to Braille which is then processed to output braille. Their proposed device was portable and could be created using 3D printing. Zhang et al. [15] have used n-gram language model to implement Chinese-braille inter translation system. This system integrates Chinese and Braille word segmentation with concatenation rules. They have also proposed an experimental plan to improve Braille word segmentation and concatenation rules with a word corpus of Chinese-Braille. ## III Challenges in Translation of Text into Bharti Braille There are several issues that can arise during the conversion of text to Bharti Braille: 1. **Complex script structure:** The Devanagari script used in India has a complex structure, with multiple elements, including vowels, consonants, and diacritical marks, which can make the process of converting text to Bharti Braille challenging. 2. **Lack of standardization:** There is no standard set of rules for converting text to Bharti Braille, and different organizations and institutions may have their own variations in the mapping of Devanagari script characters to Braille cells. 3. **Incomplete or inaccurate transliteration rules:** If the rules for mapping Devanagari script characters to Braille cells are incomplete or inaccurate, the resulting Braille text may be incorrect or difficult to read. 4. **Technical limitations:** Text to Braille Conversion systems may have technical limitations, such as memory constraints, processing speed, or compatibility with certain software or hardware, which can affect the accuracy and efficiency of the conversion process. 5. **User training and awareness:** The success of Text to Braille Conversion systems also depends on user training and awareness. Users need to be trained on how to use the software or hardware correctly, and they need to be aware of the limitations and limitations of the system in order to use it effectively. To overcome these issues, it is important to develop complete and accurate transliteration rules, to use software or hardware systems that are reliable and efficient, and to provide user training and awareness programs. ## IV Proposed System To implement a translation system for Indian languages to Bharti Braille, we have used a hybrid approach which is a mix of rule-based and deep learning approaches. As a first step to our system, we have extracted phonemes from the input text and then created a rule base for mapping of Indian language-characters into Bharti Braille characters. Thus, we created rules for three characters classes viz. Consonants, Vowels and Vowel Symbols (matrix/diacritics). The languages that we covered were: Hindi (Devnagari Script), Marathi (Devnagari Script), Nepali (Devnagari Script), Bengali (Bengali Script), Assamese ((Bengali Script), Gujarati (Gujarati Script), Punjabi (Guurmukhi Script), Odia (Odia Script), Tamil (Tamil Script), Telugu (Telugu Script), Kannada (Kannada Script), Malayalam (Malayalam Script), and Urdu (Pexo-Arabic Script). Table 1 shows text in the respective languages and their Bharti Braille encoding. The working of the system is shown in figure 1 where first we take input in an Indian language. Phonemes from the input text are extracted and based further the phonemes are divided into characters. These characters are mapped to Bharti Braille encodings using a rule base. If a particular character has more than two encodings, then the text is sent for disambiguation where a LSTM model tries to check the correct encoding based on the context. The model takes in a sequence of tokens, \(X\)=\(\{x1,x2,...,xT\}\), passes them through an embedding layer, \(e\), to get the token embeddings, \(e(X)\)=\(\{e(x1),e(x2),...,e(xT)\}\). These embeddings are processed - one per time-step - by the forward and backward LSTMs. The forward LSTM processes the sequence from left-to-right, while the backward LSTM processes the sequence right-to-left, i.e. the first input to the forward LSTM is \(x\) and the first input to the backward LSTM is \(xT\). The LSTMs also take in the hidden, \(h\), and cell, \(c\), states from the previous time-step. \[\small\begin{split}&\small\begin{split} h_{t}^{\rightarrow}=\\ & h_{t}^{\leftarrow}=\\ & LSTM^{\leftarrow}(e(x_{t}^{\leftarrow}),h_{t-1}^{\leftarrow}c_{t- 1}^{\leftarrow}\end{split} \tag{2}\] After the whole sequence has been processed, the hidden and cell states are then passed to the next layer of the LSTM. The initial hidden and cell states, \(h0\) and \(c0\), for each direction and layer are initialized to a tensor full of zeros. We then concatenate both the forward and backward hidden states from the final layer of the LSTM, \(H\)=\(\{h1,h2,...,hT\}\), where \(h1\)=\(\{h1\rightarrow\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;} \text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{ ;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;} \text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{ ;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{;} \text{;}\text{;}\text{;}\text{;}\text{;}\text{;}\text{ ## Acknowledgment This work is supported by the funding received from SERB, GoI through grant number CRG/2020/004246 for project entitled, "Development of English to Bharti Braille Machine Assisted Translation System".
2306.00554
ShaRP: Shape-Regularized Multidimensional Projections
Projections, or dimensionality reduction methods, are techniques of choice for the visual exploration of high-dimensional data. Many such techniques exist, each one of them having a distinct visual signature - i.e., a recognizable way to arrange points in the resulting scatterplot. Such signatures are implicit consequences of algorithm design, such as whether the method focuses on local vs global data pattern preservation; optimization techniques; and hyperparameter settings. We present a novel projection technique - ShaRP - that provides users explicit control over the visual signature of the created scatterplot, which can cater better to interactive visualization scenarios. ShaRP scales well with dimensionality and dataset size, generically handles any quantitative dataset, and provides this extended functionality of controlling projection shapes at a small, user-controllable cost in terms of quality metrics.
Alister Machado, Alexandru Telea, Michael Behrisch
2023-06-01T11:16:58Z
http://arxiv.org/abs/2306.00554v1
# ShaRP: Shape-Regularized Multidimensional Projections ###### Abstract Projections, or dimensionality reduction methods, are techniques of choice for the visual exploration of high-dimensional data. Many such techniques exist, each one of them having a distinct visual signature -- i.e., a recognizable way to arrange points in the resulting scatterplot. Such signatures are implicit consequences of algorithm design, such as whether the method focuses on local vs global data pattern preservation; optimization techniques; and hyperparameter settings. We present a novel projection technique -- ShaRP -- that provides users explicit control over the visual signature of the created scatterplot, which can cater better to interactive visualization scenarios. ShaRP scales well with dimensionality and dataset size, generically handles any quantitative dataset, and provides this extended functionality of controlling projection shapes at a small, user-controllable cost in terms of quality metrics. **CCS Concepts** \(\bullet\)**Human-centered computing**\(\rightarrow\) Visualization techniques; \(\star\)**Mathematics of computing**\(\rightarrow\) Dimensionality reduction; ## 1 Introduction Projection, also called Dimensionality Reduction (DR), methods are popular tools for exploring high-dimensional datasets. They transform the task of discovering data patterns in high-dimensional spaces into a perceptually-driven search and inspection task of _visual patterns_ in 2D or 3D through scatterplots. Prior research has shown that such scatterplots help uncovering topological aspects, such as groupings, outliers, and correlations in the data [BBK\({}^{*}\)18, PKF\({}^{*}\)16, WFC\({}^{*}\)18]. However, visual patterns in a projection depend not only on the underlying _data_, but also on how the DR technique is designed. For example, for the same dataset, t-SNE tends to create organic, round, structures; Auto-Encoders create starburst-like clusters; and UMAP creates very dense, round, clusters, to mention just a few [NA18, EMK\({}^{*}\)21]. We further call such aspects the _visual signature_ of a projection technique. We believe that users can benefit from having _direct_ control over the visual signatures of a projection technique. For instance, when performing interactive data labeling using rectangular selections or displaying image thumbnails over data clusters (see Figure 2), a projection whose clusters resemble rectangles would be more suitable than one creating various-shaped clusters (if all other aspects of the two projections, _e.g._, quality, are similar). However, controlling such visual signatures is typically hard with current projection methods. To fill that gap, we present _ShaRP_ (standing for **Shape** Regularized Neural **P**rojection), to the best of our knowledge the first algorithm that provides users with direct control over cluster shapes in their projection scatterplots. We next describe the technique, illustrate this new shape regularization ability, show that it comes at a user-controlled penalty to standard quality metrics, and point towards avenues for further exploration. ## 2 Background and Related Work We first introduce a few notations: A dataset \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1,\ldots,n}\) has \(m\) samples \(\mathbf{x}_{i}=[x_{i1},\ldots,x_{in}]^{T}\), where \(\mathbf{x}_{i}\) is a point in \(\mathbb{R}^{n}\) with components \(x_{ij}\), \(1\leq j\leq n\) and an optional label \(y_{i}\in\{1,\ldots,K\}\). We use capitals to denote the set of all elements for the corresponding small letter, _e.g._, \(\hat{P}=\{\bar{y}_{i}\}_{i=1,\ldots,m}\). We denote the Euclidean norm by \(\|\mathbf{x}\|_{2}=\sqrt{\mathbf{x}^{T}}\mathbf{x}\) and the expected value of a function of a random variable \(\mathbf{z}\) distributed according to \(p\) by \(\mathbb{E}_{\mathbf{z}\sim p}[f(\mathbf{z})]\). Further, we use \(\theta\) to denote probability distribution parameters, for example \(\theta=(\tilde{\mu}\in\mathbb{R}^{2},\tilde{\Theta}^{2}\in\mathbb{R}^{2})\) for a 2D Diagonal Gaussian distribution. **Dimensionality reduction:** Projection algorithms are formally functions \(P_{\eta}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{q}\) where \(q\ll n\) and \(\eta\) denote (hyper)parameters. In this work we focus on 2D scatterplots (\(q=2\)) and use the term "projections" to refer to both such 2D scatterplots and the DR algorithms that create them. Many projection algorithms are available nowadays. These are described from technical perspectives (how they differ design-wise) in several surveys [VP09, HG02, Yin07, BBH12, SVPM14, CG15, LMW\({}^{*}\)15], and from the perspective of _local_ quality metrics [NA18, EMK\({}^{*}\)21]. Well-known projection methods include Principal Component Analysis (PCA) -- a simple, easy to code, but qualitatively limited method especially for complex non-planar data structures embedded in high dimensions [F:R01]; Isomap -- a technique which works well if the data resides on a (single) high-dimensional manifold [TSL00]; t-SNE, which works well for arbi tary high-dimensional data distributions but has challenges in controlling (and predicting) the shapes of the emerging visual clusters [22, 23, 24, 25]; and UMAP, similar to t-SNE in terms of ease and of visual cluster control [16]. **Visual signatures:** Significant work aimed to develop ways to control or adapt the visual signatures of projections [26, 27, 28] and studying whether it is _possible_ to do so for existing projection methods, as follows. Cutura et al. [26] use space-filling curves to adapt the position of data points in image thumbnail scatterplots such that they are non-overlapping. This idea is effective but limited to image datasets, while our proposed technique is designed to be generic. The perplexity parameter in t-SNE is responsible for, among other things, the visual appearance of the projection. Its effect is intricately emmeshed within t-SNE, leading to cluster shapes, sizes, and distances that do not necessarily convey meaning [25]. Our technique minimizes the variability of this visual appearance. Since fully doing away with hyperparameters might be infeasible, techniques such as HyperNP [27] learn to simulate their effects on the resulting projection. Our technique instead works well for several datasets using a single setting of hyperparameters, reducing the need for such simulation. Makhzani _et al._[28] propose an approach similar to ours. They adapt an auto-encoder into an adversarial setting so as to increase the quality of the projection (_e.g._, better cluster separability) that would have been created by the vanilla auto-encoder. Our method instead aims mainly at putting cluster shapes under the user's control. ## 3 ShaRP: Shape-Regularized Neural Projection We now introduce ShaRP, our novel DR technique, which is based on deep neural networks. Such networks are, in general, able to approximate complex non-linear functions and have several desirable features that ShaRP inherits: **Scalable:** ShaRP scales linearly in the number of samples because it avoids precomputing pairwise distances or covariance matrices, like in PCA or t-SNE, and lends itself to hardware-acceleration through GPUs or TPUs tailored for fast deep learning. **Parametric:** ShaRP operates in a "learn once, project as needed" fashion. It learns to parameterize a projection function instead of only outputting the projected points, such as t-SNE or UMAP. This allows ShaRP to project data it was not trained on along with existing data (out-of-sample ability). **Generic:** ShaRP handles any dataset comprised of numeric features and can be applied to a wide range of datasets using the same or only slightly adapted hyperparameter settings. **Sound:** ShaRP scores comparably to state of the art techniques in relevant projection quality metrics. To these, ShaRP adds two flavors of **Shape Regularization**: * _Intra-projection_: ShaRP creates point clusters having shapes coming from the same _family_: ellipses, rectangles, triangles. * _Inter-projection_: Running ShaRP over different datasets produces a consistent visual signature where differences in the projections are driven mainly by data patterns. ShaRP is implemented in Python using Keras (Tensorflow back-end) [26], Tensorflow Probability [30] for sampling and calculating log-probabilities under different distributions and is publicly available at [https://github.com/amreis/sharp](https://github.com/amreis/sharp). ### Method description ShaRP belongs to the family of Representation Learning [1] techniques, _i.e._, it learns a _latent encoding_ for input data. A latent encoding is a vector \(\mathbf{r}\in\mathbb{R}^{q}\), where \(\mathbf{r}=f(\mathbf{x})\) is a low-dimensional representation of the input \(\mathbf{x}\in\mathbb{R}^{n}\) that enables a reconstruction of \(\mathbf{x}\) with minimal errors. As said earlier, we aim to create 2D projections, so \(q=2\). ShaRP builds atop of the recent DR method SSNP [1]. SSNP extends a vanilla auto-encoder with loss \(\mathcal{L}_{\text{AE}}\) with a classifier head (with an accompanying loss \(\mathcal{L}_{\text{class}}\)), yielding the total loss to be optimized as \[\mathcal{L}_{\text{SSNP}}(\mathbf{X},\mathbf{\hat{X}},\hat{Y},\hat{Y})= \mathcal{L}_{\text{AE}}(\mathbf{X},\mathbf{\hat{X}})+\rho\mathcal{L}_{\text{ class}}(\hat{Y},\hat{Y}). \tag{1}\] The projection \(\mathbf{r}_{i}\in\mathbb{R}^{2}\) of each input \(\mathbf{x}_{i}\) is generated by the bottleneck layer of the network. The classification loss \(\mathcal{L}_{\text{class}}\), together with target labels or _pseudolabels_ generated by a clustering algorithm, enables SSNP to separate data clusters better than plain auto-encoders (see Figure 1). Yet, as the figure shows, SSNP collapses some clusters into elongated shapes, which we argue is (a) unnatural, as it suggests some anisotropy in the sample distribution; (b) space-inefficient, as much white space is not used to depict data; and (c) suboptimal for visualization as we cannot _e.g._ easily select a cluster by rubberband tools or annotate it with a square-like icon. ShaRP overcomes these shortcomings of SSNP by an explicit user-controlled shape regularization mechanism, described next (see also Section 3.2 for examples). ShaRP replaces SSNP's Auto-Encoder (AE) with a Variational AE (VAE) [24]. The key AE-VAE difference is the latter's use of a _sampling_ process in the network's bottleneck layer. This, coupled with a necessary KL-Divergence regularization term \[\mathcal{L}_{\text{reg}}(\theta)=D_{\text{KL}}(q_{\theta}|\mathbf{|}p)\mathbb{ E}\mathbb{E}_{\mathbf{z}\sim q_{\theta}}[\log(q_{\theta}(\mathbf{z})/p(\mathbf{z}))]. \tag{2}\] has as an immediate effect on the regularization of the learned latent space: Using \(\mathcal{L}_{\text{reg}}\) pushes the learned probability distributions \(q_{\theta}\) toward a standard form \(p\) defined a priori (e.g., a standard Gaussian distribution) which prevents learning degenerate distributions. Also, crucially for our goals, this loss can be _exploited_ to model different shape regularization constraints (see next Section 3.2). The complete loss function for ShaRP then reads as Figure 1: Comparison of projections of the MNIST dataset learned using (a) Auto-encoders, (b) SSNP [1], and (c) ShaRP. SSNP and ShaRP were trained using the _ground truth_ labels as class information — encoded, here and next, by colors. Values in brackets are Distance Consistency scores (DSC [29]), a quality metric that measures separability of clusters, with 1 being a perfect score. \[\mathcal{L}_{\text{ShaRP}}(\mathbf{X},\mathbf{\hat{X}},\hat{Y},\hat{Y}, \Theta)= \mathcal{L}_{\text{recon}}(\mathbf{X},\mathbf{\hat{X}})+\rho \mathcal{L}_{\text{class}}(\hat{Y},\hat{Y})+\beta\mathcal{L}_{\text{reg}}( \Theta) \tag{3}\] \[= \mathcal{L}_{\text{SSNP}}(\mathbf{X},\mathbf{\hat{X}},\hat{Y}, \hat{Y})+\beta\mathcal{L}_{\text{reg}}(\Theta),\] where we make the connection to the SSNP loss explicit. By using a suitable _sampling_ process, the clusters emerging in the projection will be shape-regularized. For instance, using a 2D Gaussian sampling distribution yields _elliptical_ shapes (see Figure 1c) because the equidensity contours of a 2D Gaussian are ellipses. This is dependent on \(\mathcal{L}_{\text{reg}}\) preventing the degenerate learning of low (respectively, high) variances, which would give rise to point-like (resp., line-like) shapes in the projection. ### Controlling cluster shapes We use as regularization targets the following shapes. **Ellipses.** Consider a diagonal Multivariate Normal distribution, _i.e._, \(\mathbf{z}_{i}\sim\mathcal{N}(\vec{\mu},\text{diag}(\vec{\sigma}^{2}))\). The natural prior to use here is the standard Multivariate Normal distribution \(\mathcal{N}(\vec{\mathbf{0}},\mathbf{I})\) which simplifies sampling, propagating gradients, and calculating the KL-Divergence loss -- see [13] for more details. By using this prior, we encourage learned probability distributions to be as close as possible to a standard Gaussian. Hence, the learned projection will output data clusters that resemble circles or ellipses (see Figure 4). Using a Gaussian sampling distribution is standard for VAEs. For our projection goals, tweaking the sampling distribution and using suitable priors allows favoring different cluster shapes. We can use _any_ distribution, as long as we have (i) access to log-probabilities of samples under the learned distribution and the prior; (ii) a way to propagate gradients through the sampling procedure (using a reparametrization trick or otherwise). Access to the log-probabilities of samples under learned distributions and the prior removes the (constraining) need to analytically calculate the KL-Divergence since we can re-express it as a sample-based computation as \[D_{\text{KL}}(q_{\Theta}||p)\approx\frac{1}{m}\sum_{i=1}^{m}\left(\log q_{ \Theta}(\mathbf{z}_{i})-\log p(\mathbf{z}_{i})\right), \tag{4}\] where the approximation holds if \(\mathbf{z}_{i}\sim q_{\Theta}(\cdot)\). Our next examples of regularization shape targets use Equation 4 of computing the (approximate) KL-Divergence. **Rectangles.** To create rectangular clusters, we use a generalized Normal (\(\mathcal{GN}\)) probability distribution. It introduces an additional shape parameter to the Gaussian (here denoted \(\omega\)) and has a density function of the form \[p(x|\mu,\alpha,\omega)\propto\exp\left(-(|x-\mu|/\alpha)^{\omega}\right).\] Tuning \(\omega\) makes the tails of the distribution heavier or lighter. This is similar to the Minkowski p-norm where higher \(p\) values (analogous to \(\omega\)) make sets of equidistant points approach axis-aligned squares as \(p\to\infty\) instead of circles (\(p=2\)). Using this distribution for sampling, with a high \(\omega\) value, yields cluster shapes that resemble squares/rectangles instead of ellipses (see Figure 2). **Convex polygons.** If \(\mathbf{V}\in\mathbb{R}^{2\times v}\) is a matrix of a base convex polygon's \(v\) vertices in \(\mathbb{R}^{2}\) and \(\mathbf{w}\in[0,1]^{v}\) is a vector such that \(w_{i}\geq 0\ \forall i,\sum_{i=1}^{v}w_{i}=1\), then \(\mathbf{p}=\mathbf{V}\mathbf{w}\) is a point inside the base polygon with barycentric coordinates \(w_{i}\). To sample points inside this polygon, we use the Dirichlet probability distribution \[\mathbf{w}\sim\text{Dir}(\alpha_{1},\alpha_{2},\ldots,\alpha_{v})\Rightarrow \mathbf{w}\in[0,1]^{v},\quad\sum_{i=1}^{v}w_{i}=1\quad(\alpha_{i}>0,\ \forall i)\] which generates vectors with the same properties as \(\mathbf{w}\) above. This sampling scheme alone is not enough to learn a useful embedding since all data points will draw samples from the same region in space. Hence, we augment this scheme with rotation, scaling, and translation. Figure 3 shows this scheme for triangles, _i.e._\(v=3\), using as prior the "uniform" distribution on the triangle, which corresponds to \(\text{Dir}(1,1,1)\). Table 1 summarizes our proposed mechanisms for controlling cluster shapes by sampling distributions. For more technical details, see the supplemental material. ## 4 Evaluation We next discuss how ShaRP gives direct control over cluster shapes while learning to project data (Section 4.1), the quality of ShaRP projections (Section 4.2), and how tuning a single hyperparameter \begin{table} \begin{tabular}{l l|c} **Sampling** & **Prior** & **Shape** \\ \hline \(\mathbf{z}\sim\mathcal{N}(\mu,\text{diag}(\vec{\sigma}^{2}))\) & \(\mathcal{N}(0,I)\) & \(\bigcirc\) \\ \hline \(\mathbf{z}\sim\mathcal{GN}(\mu,\alpha,\omega)\) & \(\mathcal{GN}(0,1,\omega)\) & \(\square\) \\ \hline \(\mathbf{z}\sim\text{Dir}(\alpha_{1},\alpha_{2},\alpha_{3})\) & \(\begin{bmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{bmatrix}\begin{bmatrix}s_{x}&0\\ 0&s_{y}\end{bmatrix}\mathbf{v}\mathbf{z}+\begin{bmatrix}t_{x}\\ t_{y}\end{bmatrix}\) & \(\text{Dir}(1,1,1)\) & \(\bigtriangle\) \\ \end{tabular} \end{table} Table 1: Correspondences between sampling schemes and shapes. Figure 3: The results of our Triangular shaping sampling scheme over 3 different datasets. DSC values (in brackets) are close to the best value possible, indicating that we do not harm class separability. Figure 2: Shaping clusters as rectangles can be convenient for data labeling tasks, as illustrated by the right image where class image representatives are overlaid atop their respective clusters. We achieve this using a Generalized Normal distribution for sampling, here shown on the MNIST dataset for \(\omega=10\) (left). controls the shape regularization strength (Section 4.3). Finally, we discuss ShaRP's computational scalability (Section 4.4). **Datasets.** We use 5 datasets for evaluation (Table 2) which have different levels of classification difficulty, dimensionality, data type (images, motion data, text), and are often used in DR evaluations [1]. **Techniques.** We compare ShaRP with t-SNE, UMAP, and Isomap, due to their wide adoption in the DR arena. We also compare with Auto-Encoders since they are a key building block of our technique; with SSNP since we are extending it; and with NNP [1], a technique that learns to imitate projections, here trained to imitate t-SNE. For both ShaRP and SSNP, we use three different label sources: (1) from the ground truth of the dataset (GT); and pseudolabels created by the K-Means (2, KM) [11] and Agglomerative (3, AG) [12] clustering techniques. **Hyperparameter settings.** We train ShaRP with the Adam optimizer using default parameter settings. We add L2 regularization to the bottleneck layer of the network with a coefficient of 0.5. We use \(\rho=1\) and \(\beta=0.1\) and train using mini-batches of 256 data points. ### Generating shape-regularized projections Figure 4 shows examples of how ShaRP can regularize learned projections. Instead of producing scatterplots where cluster shapes, sizes, and intercluster spacing are widely different (as with t-SNE and UMAP), ShaRP generates a more similar representation of the high-dimensional data in each 2D projection (intra-projection regularization). Also, the visual signature obtained is consistent throughout datasets (inter-projection regularization). The learned projections do well with respect to quality metrics (see Table 3 and its discussion in Section 4.2). All images were generated using the same hyperparameter values, which shows the robustness of ShaRP to different datasets. ### Measuring the projection quality We evaluate ShaRP by a set of established projection quality metrics (trustworthiness, continuity, Shepard correlation, normalized stress, neighborhood hit, and distance consistency) following Espadoto et al. [1]. Precise metric definitions are listed in the supplemental material. We compute metrics over all datasets using a Gaussian sampling layer which produces ellipse-like clusters. Table 3 shows these mean and standard deviations of the metrics over all datasets for ShaRP and the other six evaluated techniques. We see that ShaRP avoids very high Stress values (present in t-SNE, _all_ studied datasets; UMAP and AEs, some datasets). We do, however, have higher Stress than SSNP, since we _force_ clusters into desired shapes, which can require projected (2D) distances to be quite different from data-space distances. Given that our Stress is still lower than t-SNE, UMAP, and AE, we believe this is a reasonable trade-off. For the other metrics, ShaRP performs comparably to t-SNE and UMAP. Overall, we claim that ShaRP offers its capability of shape regularization without negatively impacting quality. It is worth noting that, for their AG and KM versions, both SSNP and ShaRP can be held back by the clustering algorithm's ability to properly group the dataset into classes. To test how ShaRP's support of different regularization shapes affects projection quality, we asked ShaRP to produce clusters in five shapes - ellipses (using Gaussian sampling); rectangles (\(\omega=5\) and \(\omega=15\), see Section 3.2); and triangles (translated in projected space and respectively forced to \(t_{x}=t_{y}=0\)), for all 5 tested datasets. Table 4 shows the mean and standard deviation of quality metrics per dataset. We see little variation in these metrics. This points to the robustness of ShaRP and further supports our claim that controlling the visual signatures of projections can be done without (strongly) influencing quality metric values. ### Control of shape regularization intensity We adjust the amount of shape regularization through the \(\beta\) multiplier in the loss function (Equation 3). Figure 5 shows this: Larger \(\beta\) values force clusters to conform to the shape generated in the sampling layer -- ellipses, in this case. Exaggerated shape regularization (high \(\beta\)), however, makes ShaRP favor'shape over data' too much and creates projections which cannot properly depict data -- sampling from a distribution similar to the prior overshadows producing a sensible embedding. In our tests, we have found a value of \(\beta=0.1\) to give consistently good results. \begin{table} \begin{tabular}{l r r} \hline \hline **Dataset** & **Dimensionality (\(\omega\))** & **\# classes (\(\mathcal{L}\))** \\ \hline U(S) [1][2] & 256 & 10 \\ UMAP [1] & 561 & 6 \\ MNIST [1] & 784 & 10 \\ PathNormal [1] & 784 & 10 \\ Rourens [1] & 5000 & 6 \\ \hline \hline \end{tabular} \end{table} Table 2: Datasets used in our evaluation. Figure 4: Our ShaRP method produces cluster shapes regularized towards a user-chosen target — here, ellipses — and can handle diverse data distributions. We demonstrate this here for the cases where we use ground truth labels (GT) or K-Means-generated pseudolabels (KM). We compare our results to SSNP (GT, KM) and to t-SNE and UMAP. More comparisons are present in the supplemental material. ### Computational performance Figure 6 shows how ShaRP fares compared to other projection techniques _vs._ computational time. Tests were run on a PC with an AMD Ryzen 9 5900HK 3.3GHz 8-core processor and an NVIDIA RTX 3080 GPU. ShaRP is much faster than t-SNE (50-80% speedup) and Isomap (50-60%). It is also faster than AE, UMAP, and only slightly slower than SSNP, its predecessor. The used batch size (256 data points) is largely responsible for ShaRP's speed. Also, since ShaRP has out-of-sample ability, we can train it on a representative data subsample to next project an entire dataset with high quality. ## 5 Discussion and Future Work ShaRP introduces a novel level of pattern steerability for a projection algorithm, all while performing comparably to state-of-the-art methods in relevant (visual) quality metrics. However, ShaRP also has some limitations which frame our future work directions. Currently, we only support numerical features as these work directly with Auto-Encoders. One-hot encoding or Categorical Variational Auto-Encoders [16] can overcome this limitation with only slight adaptions to our network architecture. Also, the sampling schemes we devised support shaping clusters into ellipses and convex polygons -- with two different possibilities for shaping clusters into rectangles. A wider variety of shapes can be obtained by devising new sampling schemes. However, obtaining log-probabilities for samples of complex sampling schemes can be computationally intensive. We next aim to study more sampling schemes that naturally encourage further visual aspects of interest of projections, _e.g._, cluster separability. Shape-regularized projections also can help interaction and visualization tasks. For example, rectangular shaped clusters can help as a clutter-reduction mechanism whenever (annotation) overlays with text and images should be added to projection (cf Fig. 2). We aim to further study this aspect, including the ease of interactive hierarchical navigation of thumbnail-annotated squarified projections to support various analysis tasks of high-dimensional data.
2306.17028
Accurate PET Reconstruction from Reduced Set of Measurements based on GMM
In this paper, we provide a novel method for the estimation of unknown parameters of the Gaussian Mixture Model (GMM) in Positron Emission Tomography (PET). A vast majority of PET imaging methods are based on reconstruction model that is defined by values on some pixel/voxel grid. Instead, we propose a continuous parametric GMM model. Usually, Expectation-Maximization (EM) iterations are used to obtain the GMM model parameters from some set of point-wise measurements. The challenge of PET reconstruction is that the measurement is represented by the so called lines of response (LoR), instead of points. The goal is to estimate the unknown parameters of the Gaussian mixture directly from a relatively small set of LoR-s. Estimation of unknown parameters relies on two facts: the marginal distribution theorem of the multivariate normal distribution; and the properties of the marginal distribution of LoR-s. We propose an iterative algorithm that resembles the maximum-likelihood method to determine the unknown parameters. Results show that the estimated parameters follow the correct ones with a great accuracy. The result is promising, since the high-quality parametric reconstruction model can be obtained from lower dose measurements, and is directly suitable for further processing.
Tomislav Matulić, Damir Seršić
2023-06-29T15:23:00Z
http://arxiv.org/abs/2306.17028v1
# Accurate PET Reconstruction from Reduced Set of Measurements based on GMM ###### Abstract In this paper, we provide a novel method for the estimation of unknown parameters of the Gaussian Mixture Model (GMM) in Positron Emission Tomography (PET). A vast majority of PET imaging methods are based on reconstruction model that is defined by values on some pixel/voxel grid. Instead, we propose a continuous parametric GMM model. Usually, Expectation-Maximization (EM) iterations are used to obtain the GMM model parameters from some set of point-wise measurements. The challenge of PET reconstruction is that the measurement is represented by the so called lines of response (LoR), instead of points. The goal is to estimate the unknown parameters of the Gaussian mixture directly from a relatively small set of LoR-s. Estimation of unknown parameters relies on two facts: the marginal distribution theorem of the multivariate normal distribution; and the properties of the marginal distribution of LoR-s. We propose an iterative algorithm that resembles the maximum-likelihood method to determine the unknown parameters. Results show that the estimated parameters follow the correct ones with a great accuracy. The result is promising, since the high-quality parametric reconstruction model can be obtained from lower dose measurements, and is directly suitable for further processing. **Keywords:** Positron emission tomography, Gaussian mixture model, Method of moment, Iterative algorithm, Reduced measurements ## 1 Introduction Positron emission tomography is a medical imaging modality that measures metabolic activity of the observed tissue. It is based on electron-positron annihilation. The annihilation happens due to the radioactive \(\beta^{+}\) decay of the radioactive tracer that is injected into the measured tissue. The result of the annihilation is two high-energy photons (511 keV each) that travel along the same line but in opposite directions. If two photons hit detectors in a short-coincidence time-window, then the data is being stored and it is considered as a valid measurement event. Virtual path between the two detectors is called line of response (LoR). The main challenge in medical imaging is to reconstruct image from such measurement data. In PET imaging systems, a list of all coincidence events corresponds to a measurement data set.[31][3] Analytical reconstruction algorithms [23][4][15][21] (filtered back-projection, back-projection filtering) are rarely used nowadays. They are fast and computationally inexpensive, but often do not include information about the real PET imaging systems and, therefore, produce images of lesser quality. Iterative methods are the golden standard in PET image reconstruction.[33][37] The core of iterative algorithms is the system matrix, which represents the PET imaging system.[13][17] Maximum-likelihood expectation-maximization (MLEM) algorithm [29] is the most widely used iterative method. A major drawback of the MLEM algorithm is its slow convergence. To accelerate the convergence, the ordered subset expectation-maximization (OSEM) was introduced in [11]. It significantly reduces the time needed for image reconstruction (more subsets = faster convergence), but it does not guarantee the maximum likelihood solution. More recently, deep neural networks are exploited in modeling of the PET imaging systems [9][2][32] and for image reconstruction [7][35][8]. Pay attention that all mentioned algorithms result in a spatially-discrete reconstruction model, namely, a grid of pixels or voxels. The Gaussian mixture model [27] (GMM) is a well-investigated approach in a variety of classification and segmentation problems.[6][26] The application of the GMM can be found in many problems in biometrics, signal processing, and speech modeling. [27][28][20][36][18][19][30][25] Two different ways are often used for the estimation of parameters of GMM: the expectation-maximization method [10][22] and the method of moments [12][14]. Method of moments relies on tensor moments of higher order to estimate the unknown parameters of the GMM. The theory of such a method is well known and thoroughly investigated. The major drawbacks are: * high-order moments are computationally inefficient [24], since the \(k\)-th moment of an \(n\)-dimensional random variable is a tensor of size \(n^{k}\); * the method of moments leads to a multivariate polynomial system. Statistically meaningful solutions of a such system often do not exist or are not unique.[34] A minimum order of the moment needed for the estimation of unknown parameters increases with the number of components in the Gaussian mixture. Hence, Expectation-Maximization (EM) algorithm is studied more intensely than the method of moments. Although the EM algorithm converges slowly [1], its drawbacks are far less concerning. Notice that the obtained GMM model is spatially-continuous, and virtually of infinite resolution. It motivates our research. In this paper, we present a new method for estimation of the GMM parameters in 2D PET imaging. We can state this problem in another way - how to estimate the unknown GMM parameters from the lines of response that originate from some point that follows Gaussian distribution, but the line is fired under an arbitrary angle? All the methods mentioned in the previous paragraph deal with point-wise samples from the GMM distribution itself, while we deal with the lines that originate from those point-wise sources. To the best of our knowledge, the mentioned issue is unexplored or under-explored [30], [16], and this paper gives a novel insight into it. In this work, the problem and its solution is set in the projection domain. Each point in the image gives a sine function in the projection domain. Thus, the projection domain is often called sinogram: a collection of sines. Vice-versa, a single point in the projection domain corresponds to a line of response in the image. Notice that centers of each Gaussian component are points in the image domain, too. In our work, we exploit the following facts: 1. the mean vector (i. e. the center \(\boldsymbol{\mu}\)) corresponds to the sine function in the projection domain; 2. projection under an arbitrary angle of a bivariate normal distribution is a univariate normal distribution. This is a special case of the marginal distribution theorem of the multivariate normal distribution; 3. expressions for calculating higher moments of the Gaussians are well known, and have shown to be useful for estimation of the GMM parameters. In Section 3 we exploit the mentioned facts, and using some tricks arrange it to get a solvable system of equations. Then, a two-step iterative process that resembles the EM algorithm is exploited for determining all of the unknown parameters of the GMM. The first step updates membership probabilities between the lines of response and the GMM components. The second step reevaluates the parameters of each Gaussian component according to the updated memberships. Resulting GMM model has several advantages over the usual pixel or voxel based approach. Essentially, it is an infinite resolution continuous model obtained from a reduced set of measurements. Widespread digital processing methods are based on difference equations that approximate the underlying differential equations. In our case, we can apply them directly on the GMM model without approximations. This paper is divided into six sections. In Section 2 we explain a 2D PET imaging system. We present an estimation method of the mean vector and the covariance matrix for one component of the GMM in Section 3. In Section 4 we show the proposed iterative, EM-like algorithm for the estimation of the unknown parameters of the whole GMM model. The results are given in Section 5. Finally, in Section 6 we conclude the paper. ## 2 Two-Dimensional PET Imaging The mathematical background of PET imaging is based on the Radon transform. The expression for the Radon transform is \[p(s,\theta)=\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty}f(x, y)\,\delta(x\cos(\theta)+y\sin(\theta)-s)\,dxdy, \tag{1}\] where function \(f(x,y)\) represents metabolic activity of the object that is being scanned by the PET system, and \(p(s,\theta)\) models projection at the angle \(\theta\) - an angle between the projection line and the x-axis. Value \(p(s,\theta)\) corresponds to the line integral that is perpendicular to the projection line, and its distance from the origin is \(s\). Described setup is illustrated in Fig. 1. of some GMM. Each sample of the GMM is a position of electron-positron annihilation. The samples fire lines of response under an uniformly distributed random angle. Actual positions of annihilation (samples) are latent (hidden). In Fig. 2a, we can see a simulation of the PET measurement. We generated \(N=2000\) lines firing in both directions from some hidden point-wise samples that follow the GMM distribution. Contours are used to indicate the two Gaussian components, and only a subset of LoR-s is displayed for better visibility. A line of response can be represented as a pair \((s,\varphi)\), where \(s\in\mathbb{R}\) is the oriented distance from the origin (see \(s\)-axis in Fig. 1 or Fig. 3), and \(\varphi\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\) is the angle between the line and the x-axis. LoR-s correspond to points \((s,\varphi)\) in the projection domain. The integral line and projection line are perpendicular, i.e. the angles \(\theta=\varphi\pm\frac{\pi}{2}\). Measurement is illustrated as a set of points \((s,\varphi)\) in Fig. 2b, or as a set of lines of the response in Fig 2a. The mean vector of the Gaussian component traces a sine function in the projection domain, shown as a solid curve. Dashed curves stand for \(\pm 3\sigma\)-neighborhood around the mean. In the next section, we focus our attention on how to estimate the mean vector \(\mathbf{\mu}\) and covariance matrix \(\mathbf{\Sigma}\) of a single component in the mixture. Estimation of the mean vector is done by fitting _the best_ sinusoid on pairs \((s,\varphi)\). Covariance matrix \(\mathbf{\Sigma}\) is estimated from the moments of the probability density function. ## 3 Estimation of Parameters of the Gaussian Mixture Model The Gaussian mixture model with \(K\) components is given by \[g(\mathbf{x};\tau_{k},\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})=\sum_{k=1}^{K}\tau_{k}f_{G}( \mathbf{x};\mathbf{\mu}_{k},\mathbf{\Sigma}_{k}), \tag{2}\] where \(\mathbf{\mu}_{k}\) is the mean vector, \(\mathbf{\Sigma}_{k}\) is the covariance matrix and \(\tau_{k}\) is the weight, all associated to the \(k\)-th component. In general, function \(f_{G}(\mathbf{x};\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\) is a \(d\)-variate normal (Gaussian) Figure 1: Radon transform distribution: \[f_{G}(\mathbf{x};\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})=\frac{1}{\sqrt{(2\pi)^{d}|\Sigma_{k}|} }\exp(-\frac{1}{2}\left(\mathbf{x}-\mathbf{\mu}_{k}\right)^{\intercal}\mathbf{\Sigma}_{k}^ {-1}\left(\mathbf{x}-\mathbf{\mu}_{k}\right)). \tag{3}\] Weights \(\tau_{k}>0\) should suffice the condition \(\sum_{k=1}^{K}\tau_{k}=1\), since \(g\) defines the probability density. Now, we focus only on one component in the mixture. Since we deal with 2D PET imaging, we have bivariate (\(d=2\)) normal distribution. Also, we assume that a number of components \(K\) in the mixture is known in advance. Estimation is possible, but it is beyond the scope of this paper. ### Estimation of Mean Vector First, we take a look of the univariate normal distribution. Let \((x_{i})_{i=1}^{N}\) be the samples of some univariate normal distribution. Then, the least square problem for \[L_{uni}(c;x_{i})=\sum_{n=1}^{N}(c-x_{i})^{2} \tag{4}\] has the solution \[c_{LS}=\frac{1}{N}\sum_{n=1}^{N}x_{i}\quad=\hat{\mu}. \tag{5}\] Obviously, least square solution \(c_{LS}\) corresponds to the known mean value estimate of the univariate normal distribution \(\hat{\mu}\). Figure 2: 2D PET measurement The point \(\mathbf{\mu}_{k}\) (i.e. the mean vector) corresponds to a sinusoidal function in the projection domain: \[m(\varphi;A,B)=A\sin(\varphi)+B\cos(\varphi), \tag{6}\] where \(A\in\mathbb{R}\) and \(B\in\mathbb{R}\) are unknown parameters yet to be determined. As depicted in Fig. (a)a, we have: \[\mu_{y}^{(k)}=m(\varphi=0;A,B)=B,\] and Fig. (b)b gives: \[\mu_{x}^{(k)}=m(\varphi=-\frac{\pi}{2};A,B)=-A.\] Notice that both \(\varphi=-\frac{\pi}{2}\) and \(\varphi=\frac{\pi}{2}\) correspond to LoR-s parallel to \(y\)-axis, but differ in orientation of \(s\)-axis. In Fig. (b)b, we denoted by \(s\) direction when \(\varphi=-\frac{\pi}{2}\), and by \(s^{\prime}\) when \(\varphi=\frac{\pi}{2}\). We can rewrite Eq. 6 as \[m(\varphi;\mu_{x}^{(k)},\mu_{y}^{(k)})=-\mu_{x}^{(k)}\sin(\varphi)+\mu_{y}^{( k)}\cos(\varphi), \tag{7}\] where \(\mathbf{\mu}_{k}=\left(\begin{smallmatrix}\mu_{x}^{(k)}\\ \mu_{y}^{(k)}\end{smallmatrix}\right)\) is the mean vector of \(k\)-th Gaussian component. We know that the projection under an arbitrary angle of a bivariate normal distribution is a univariate normal distribution. Combining Eq. 7 and Eq. 4 results in a weighted least square problem: \[\begin{split} L(\mu_{x}^{(k)},\mu_{y}^{(k)};s_{i},\varphi_{i})= \sum_{i=1}^{N}p_{ij}(m(\varphi_{i};\mu_{x}^{(k)},\mu_{y}^{(k)})-s_{i})^{2}=\\ \sum_{i=1}^{N}p_{ik}(-\mu_{x}^{(k)}\sin(\varphi_{i})+\mu_{y}^{(k) }\cos(\varphi_{i})-s_{i})^{2},\end{split} \tag{8}\] Figure 3: Estimation of mean vector where \((s_{i},\varphi_{i})\) represent \(i\)-th LoR, \(N\) is the total number of LoR-s, and weights \(p_{ik}\) are membership probabilities between \(i\)-th LoR and \(k\)-th Gaussian component. The calculation of the membership probabilities is given in Section 4. To solve the least square problem, we set the gradient of the \(L\) function to zero. We get \(2\times 2\) linear system \(\mathbf{M}_{k}\mathbf{\mu}_{k}=\mathbf{b}_{k}\), where: \[\mathbf{M}_{k}=\begin{pmatrix}-\sum_{i=1}^{N}p_{ik}\sin^{2}(\varphi_ {i})&\sum_{i=1}^{N}p_{ik}\sin(\varphi_{i})\cos(\varphi_{i})\\ -\sum_{i=1}^{N}p_{ik}\sin(\varphi_{i})\cos(\varphi_{i})&\sum_{i =1}^{N}p_{ik}\cos^{2}(\varphi_{i})\end{pmatrix} \tag{9}\] \[\mathbf{b}_{k}=\begin{pmatrix}\sum_{i=1}^{N}p_{ik}s_{i}\sin(\varphi_ {i})\\ \sum_{i=1}^{N}p_{ik}s_{i}\cos(\varphi_{i})\end{pmatrix}.\] As in (5), the solution is estimated mean \(\mathbf{\hat{\mu}}_{k}=\left(\begin{smallmatrix}\hat{\mu}_{x}^{(k)}\\ \hat{\mu}_{y}^{(k)}\end{smallmatrix}\right):\) \[\mathbf{\hat{\mu}}_{k}=\mathbf{M}_{k}^{-1}\mathbf{b}_{k}. \tag{10}\] ### Estimation of Covariance Matrix Once we have the mean \(\mathbf{\hat{\mu}}_{k}\), we need to estimate the covariance matrix \(\mathbf{\Sigma}_{k}\). Still, we focus only on one component in the mixture, and our approach is based on the variance of the projected univariate normal distribution. We observe a centered (\(\mathbf{\mu}=\mathbf{0}\)) bivariate normal distribution whose principal axes are the \(x\) and \(y\), without loss of generality. For any bivariate normal distribution, we simply define a new coordinate system where the new origin is the mean vector of the distribution (translation), and axes that are the eigenvectors of the covariance matrix (rotation). A centered bivariate normal distribution whose principal axes are the \(x\) and \(y\) can be expressed as: \[f_{Gc}(x,y)=\frac{1}{2\pi\sigma_{1}\sigma_{2}}\exp(-(\frac{x^{2}}{2\sigma_{1} ^{2}}+\frac{y^{2}}{2\sigma_{2}^{2}})).\] We want to find the distribution under some angle \(\varphi\) as seen in Fig. 4a. We must calculate integral: \[f_{p_{0\varphi}}(s)=\int_{\gamma_{\varphi}}f_{Gc}(x,y)d\gamma_{\varphi}, \tag{11}\] where \(\gamma_{\varphi}(t)=(t,kt+l)\), \(k=\tan(\varphi)\) and \(l=\frac{s}{\cos(\varphi)}\). Mapping \(\gamma_{\varphi}\) parameterizes the set of all lines under the angle \(\varphi\). Fig. 4b explains the connection between y-intercept (\(l\)) and \(s\)-axis. After integration we get \[f_{p_{0\varphi}}(s)=\frac{1}{\sqrt{2\pi}\sigma_{p_{0}}(\varphi)}\exp(-\frac{s ^{2}}{2\sigma_{p_{0}}^{2}(\varphi)}), \tag{12}\] where \(\sigma_{p_{0}}^{2}(\varphi)=\sigma_{1}^{2}\sin^{2}(\varphi)+\sigma_{2}^{2}\cos^{2 }(\varphi)\). A detailed calculation can be found in A. Function \(\sigma_{p_{0}}(\varphi)\) can be interpreted as the distance from the origin to one of the tangent lines on the ellipse \(\frac{x^{2}}{\sigma_{1}^{2}}+\frac{y^{2}}{\sigma_{2}^{2}}=1\), under angle \(\varphi\), as shown in Fig. 4c. If we substitute \(\varphi\) with \(\varphi-\varphi_{0}\) in \(\sigma_{p_{0}}(\varphi)\), we get more general expression for the variance of some rotated Gaussian (i.e. ellipse). It corresponds to the rotation of the entire coordinate system by angle \(\varphi_{0}\). The expression is now: \[\sigma_{p}^{2}(\varphi)=\sigma_{1}^{2}\sin^{2}(\varphi-\varphi_{0})+\sigma_{2} ^{2}\cos^{2}(\varphi-\varphi_{0}),\] as shown in Fig. 4d. The probability density function of the projected and centered (\(\mathbf{\mu}=\mathbf{0}\)) bivariate normal distribution under angle \(\varphi_{0}\) is: \[f_{p_{\varphi}}(s;\sigma_{1},\sigma_{2},\varphi_{0})=\frac{1}{\sqrt{2\pi} \sigma_{p}(\varphi)}\exp(-\frac{s^{2}}{2\sigma_{p}^{2}(\varphi)}). \tag{13}\] For \(k\)-th component, additionally we need to translate the coordinate system by mean vector \(\mathbf{\mu}_{k}\). In the projection space it is: \[(s_{c}^{(k)},\varphi)=(s-m(\varphi;\mu_{x}^{(k)},\mu_{y}^{(k)}),\varphi). \tag{14}\] Figure 4: Projection of bivariate normal distribution and elliptical illustration of its variance A point \((s_{c}^{(k)},\varphi)\) is a LoR in the centered coordinate system. In Fig. 5, we can see points \((s,\varphi)\) and the mean before (Fig. 5a) and after translation (Fig. 5b). Consider \(N\) LoR-s \((s_{c_{i}}^{(k)},\varphi_{i})\). Samples \(\varphi_{i}\) follow a uniform distribution on the interval \([-\frac{\pi}{2},\frac{\pi}{2}]\), due to the nature of electron-positron annihilation. Interestingly, \(s_{c_{i}}^{(k)}\) follow the distribution whose probability density function is: \[f_{s_{c}^{(k)}}(s_{c};\sigma_{1}^{(k)},\sigma_{2}^{(k)},\varphi_{0}^{(k)})= \frac{1}{\pi}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{1}{\sqrt{2\pi} \sigma_{p}(\varphi)}e^{-\frac{s_{c}^{2}}{2\sigma_{p}^{2}(\varphi)}}d\varphi. \tag{15}\] This expression comes from the fact that for fixed angle \(\varphi\) we have a probability density function \(f_{p_{\varphi}}(s_{c};\sigma_{1}^{(k)},\sigma_{2}^{(k)},\varphi_{0}^{(k)})\). By averaging all contributions for all angles \(\varphi\), we get the expression (15), which is the marginal distribution of \(s_{c}^{(k)}\). Similarly to the problem from Eq. 4, let \((x_{i})_{i=1}^{N}\) be the samples that follow distribution \(f_{p_{\varphi}}\) from Eq. 13 for some fixed angle \(\varphi\). The least square problem: \[L_{sq}(d;x_{i})=\sum_{n=1}^{N}(d-x_{i}^{2})^{2} \tag{16}\] has the solution \[d_{LS}=\frac{1}{N}\sum_{n=1}^{N}x_{i}^{2}\approx E[X^{2}]=\sigma_{p}^{2}( \varphi), \tag{17}\] where \(\sigma_{p}^{2}(\varphi)\) is the variance of the distribution with the probability density function \(f_{p_{\varphi}}\). Therefore, we can state the weighted least square problem: \[\begin{split} L_{sq}(\varphi_{0}^{(k)},\sigma_{1}^{(k)},\sigma_{ 2}^{(k)})=\\ \sum_{i=1}^{N}p_{ik}\cdot[(\sigma_{1}^{(k)})^{2}\sin^{2}(\varphi_ {i}-\varphi_{0}^{(k)})+(\sigma_{2}^{(k)})^{2}\cos^{2}(\varphi_{i}-\varphi_{0} ^{(k)})-(s_{c_{i}}^{(k)})^{2}]^{2}.\end{split} \tag{18}\] Figure 5: Estimation of mean value and LoR-s in the original (left) and centered (right) coordinate systems Calculating \(\nabla L_{sq}(\varphi_{0}^{(k)},\sigma_{1}^{(k)},\sigma_{2}^{(k)})=\mathbf{0}\) leads to a nonlinear system of equations with no closed-form solution. To solve this system, we need to find the roots of a polynomial with an order greater than four, which can only be solved numerically. Therefore, in this paper, we propose a different approach. If we assume that \(\sigma_{1}^{(k)}\) and \(\sigma_{2}^{(k)}\) are known, and focus only on minimizing (18) with respect to rotation \(\varphi_{0}^{(k)}\), we get a simple polynomial system with a closed-form solution. Similarly, once we know \(\varphi_{0}^{(k)}\), and want to find \(\sigma_{1}^{(k)}\) and \(\sigma_{2}^{(k)}\) that solve the problem stated by the Eq. 18, we get a linear \(2\times 2\) system of equations. Let us put the moments of the probability density function \(f_{s_{c}}\) in relation with the variances \(\sigma_{1}^{(k)}\) and \(\sigma_{2}^{(k)}\). Notice that odd moments are zero since \(f_{s_{c}}\) is an even function. We calculate the second and the fourth moment. The second moment is \[\begin{split} E[S_{c}^{2}]=\frac{1}{\pi}\int\limits_{-\infty}^{+ \infty}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}s_{c}^{2}f_{s_{c}}(s_{c}; \sigma_{1}^{(k)},\sigma_{2}^{(k)},\varphi_{0}^{(k)})d\varphi ds_{c}=\frac{1}{ \pi}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\sigma_{p}^{2}(\varphi)=\\ \frac{1}{\pi}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}[(\sigma _{1}^{(k)})^{2}\cos^{2}(\varphi-\varphi_{0}^{(k)})+(\sigma_{2}^{(k)})^{2}\sin ^{2}(\varphi-\varphi_{0}^{(k)})]d\varphi=\frac{(\sigma_{1}^{(k)})^{2}}{2}+ \frac{(\sigma_{2}^{(k)})^{2}}{2},\end{split} \tag{19}\] while the fourth moment is \[\begin{split} E[S_{c}^{4}]=\frac{1}{\pi}\int\limits_{-\infty}^{+ \infty}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}s_{c}^{4}f_{s_{c}}(s_{c}; \sigma_{1}^{(k)},\sigma_{2}^{(k)},\varphi_{0}^{(k)})d\varphi ds_{c}=\frac{1}{ \pi}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}3\sigma_{p}^{4}(\varphi)=\\ \frac{3}{\pi}\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}[(\sigma _{1}^{(k)})^{2}\cos^{2}(\varphi-\varphi_{0}^{(k)})+(\sigma_{2}^{(k)})^{2}\sin ^{2}(\varphi-\varphi_{0}^{(k)})]^{2}d\varphi=\\ \frac{9(\sigma_{1}^{(k)})^{4}}{8}+\frac{3(\sigma_{1}^{(k)})^{2}( \sigma_{2}^{(k)})^{2}}{4}+\frac{9(\sigma_{2}^{(k)})^{4}}{8}.\end{split} \tag{20}\] But, second and fourth moments can be estimated from: \[\begin{split} M_{2w}^{(k)}=\frac{\sum_{i=1}^{N}p_{ik}(s_{c_{i}}^{ (k)})^{2}}{\sum_{n=1}^{N}p_{ik}}\\ M_{4w}^{(k)}=\frac{\sum_{i=1}^{N}p_{ik}(s_{c_{i}}^{(k)})^{4}}{ \sum_{n=1}^{N}p_{ik}}\end{split}, \tag{21}\] where \(s_{c_{i}}^{(k)}\) are samples as described in Eq. 14 and \(p_{ik}\) is the probability that \(i\)-th LoR belongs to the \(k\)-th component of the mixture. From Eq. 19, 20, and 21 we can estimate \((\sigma_{1}^{(k)})^{2}\) and \((\sigma_{2}^{(k)})^{2}\). By taking the square root, we get \(\sigma_{1}^{(k)}\) and \(\sigma_{2}^{(k)}\). The solution of system: \[\begin{gathered}\frac{(\hat{\sigma}_{1}^{(k)})^{2}}{2}+\frac{( \hat{\sigma}_{2}^{(k)})^{2}}{2}=M_{2w}^{(k)}\\ \frac{9(\hat{\sigma}_{1}^{(k)})^{4}}{8}+\frac{3(\hat{\sigma}_{1} ^{(k)})^{2}(\hat{\sigma}_{2}^{(k)})^{2}}{4}+\frac{9(\hat{\sigma}_{2}^{(k)})^{ 4}}{8}=M_{4w}^{(k)}\end{gathered} \tag{22}\] gives estimates \((\hat{\sigma}_{1}^{(k)})^{2}\) and \((\hat{\sigma}_{2}^{(k)})^{2}\). By expressing \((\hat{\sigma}_{2}^{(k)})^{2}\) from the first equation and inserting it into the second one, we get a quadratic equation in variable \((\hat{\sigma}_{1}^{(k)})^{2}\): \[3(\hat{\sigma}_{1}^{(k)})^{4}-6M_{2w}(\hat{\sigma}_{1}^{(k)})^{2}+9M_{2w}^{(k) }-2M_{4w}^{(k)}=0. \tag{23}\] Notice that the system in Eq. 22 is symmetric, so we can choose \((\hat{\sigma}_{1}^{(k)})^{2}\geq(\hat{\sigma}_{2}^{(k)})^{2}\). The solution of the system (22) is given by Eq. 23, and the final solution is: \[\begin{gathered}(\hat{\sigma}_{1}^{(k)})^{2}=M_{2w}^{(k)}+\sqrt{ 2}\cdot\sqrt{\frac{M_{4w}^{(k)}}{3}-(M_{2w}^{(k)})^{2}},\\ (\hat{\sigma}_{2}^{(k)})^{2}=M_{2w}^{(k)}-\sqrt{2}\cdot\sqrt{\frac {M_{4w}^{(k)}}{3}-(M_{2w}^{(k)})^{2}}.\end{gathered} \tag{24}\] Now, we can calculate \(\varphi_{0}^{(k)}\) by solving the weighed least square problem in (18). By using trigonometric power-reduction formulae we can rewrite Eq. 18 as: \[\begin{gathered} L_{sq}(\varphi_{0}^{(k)},\sigma_{1}^{(k)}, \sigma_{2}^{(k)}))=\\ \sum_{i=1}^{N}p_{ik}\cdot\left(\frac{(\sigma_{2}^{(k)})^{2}-( \sigma_{2}^{(k)})^{2}}{2}\cos(2\varphi_{0}^{(k)}-2\varphi_{i})+\frac{(\sigma_ {1}^{(k)})^{2}+(\sigma_{2}^{(k)})^{2}}{2}-(s_{c_{i}}^{(k)})^{2}\right)^{2}. \end{gathered} \tag{25}\] We set the partial derivative with respect to \(\varphi_{0}^{(k)}\) to zero, i.e. \(\frac{\partial L_{sq}}{\partial\varphi_{0}^{(k)}}(\varphi_{0}^{(k)})=0\): \[\sum_{i=1}^{N}(M_{ik}\cos(\alpha_{0}^{(k)}-\alpha_{i})+N_{ik})\sin(\alpha_{0}^ {(k)}-\alpha_{i})=0, \tag{26}\] where \[\begin{gathered} M_{ik}=p_{ik}((\sigma_{2}^{(k)})^{2}-(\sigma_ {2}^{(k)})^{2}),\\ N_{ik}=p_{ik}((\sigma_{1}^{(k)})^{2}+(\sigma_{2}^{(k)})^{2}-2(s_{c_{i }}^{(k)})^{2}),\\ \alpha_{0}^{(k)}=2\varphi_{0}^{(k)},\text{and}\\ \alpha_{i}=2\varphi_{i}.\end{gathered} \tag{27}\] The equation can be written in a more useful form by using angle difference identities: \[\begin{gathered} A_{s2}\sin^{2}(\alpha_{0}^{(k)})+A_{c2}\cos^{2}( \alpha_{0}^{(k)})+A_{sc}\sin(\alpha_{0}^{(k)})\cos(\alpha_{0}^{(k)})+\\ A_{s}\sin(\alpha_{0}^{(k)})+A_{c}\cos(\alpha_{0}^{(k)})=0,\end{gathered} \tag{28}\] where \[A_{s2}=\sum_{i=1}^{N}M_{ik}\sin(\alpha_{i})\cos(\alpha_{i}),\] \[A_{c2}=-\sum_{i=1}^{N}M_{ik}\sin(\alpha_{i})\cos(\alpha_{i})=-A_{s2},\] \[A_{sc}=\sum_{i=1}^{N}M_{ik}\cos^{2}(\alpha_{i})-\sum_{i=1}^{N}M_{ik}\sin^{2}( \alpha_{i}), \tag{29}\] \[A_{s}=\sum_{i=1}^{N}N_{ik}\cos(\alpha_{i}),\] \[A_{c}=-\sum_{i=1}^{N}N_{ik}\sin(\alpha_{i}).\] Introducing substitution \(x=\cos(\alpha_{0}^{(k)})\) and \(y=\sin(\alpha_{0}^{(k)})\), we get a system of two polynomials equations: \[\begin{cases}x^{2}+y^{2}=1,\\ A_{s2}y^{2}-A_{s2}x^{2}+A_{sc}xy+A_{s}y+A_{c}x=0.\end{cases} \tag{30}\] Now, we substitute \(y=\pm\sqrt{1-x^{2}}\) in the second equation and calculate: \[\begin{split}\pm(A_{sc}x+A_{s})\sqrt{1-x^{2}}=2A_{s2}x^{2}-A_{c} x+A_{s2}\Big{/}^{2}\Rightarrow\\ (4A_{s2}^{2}+A_{sc}^{2})x^{4}+(2A_{sc}A_{s}-4A_{s2}A_{c})x^{3}+(A_{c}^{2}+A_ {s}^{2}-A_{sc}^{2}-4A_{s2}^{2})x^{2}+\\ (2A_{s2}A_{c}-2A_{sc}A_{s})x+A_{s2}^{2}-A_{s}^{2}=0\end{split} \tag{31}\] Therefore, we get a quartic equation that has an algebraic (close-form) solution as stated by Abel-Ruffini theorem. More details on how to find roots of a polynomial of the forth order analytically can be found in [5]. There are eight pairs of solutions. Four of them do not satisfy the second equation in (30), leaving us with only four solutions. Two of the remaining candidates are complex conjugates. Thus, only two real solutions remain - global minimum and global maximum. We pick the solution \(\hat{\varphi}_{0}^{(k)}=\frac{\hat{\alpha}_{0}^{(k)}}{2}=\frac{1}{2}\arctan( \frac{y}{x})\) with the minimum value of \(L_{sq}(\hat{\varphi}_{0}^{(k)},\hat{\sigma}_{1}^{(k)},\hat{\sigma}_{2}^{(k)})\), which is the solution of the least square problem stated in Eq. 25. Pay attention that now we have all data needed for construction of the covariance matrix, as shown in Eq. 33. The last step is to reevaluate \(\sigma_{1}^{(k)}\) and \(\sigma_{2}^{(k)}\) from the Eq. 18 when angle \(\varphi_{0}^{(k)}\) is known. This step is optional, but useful, especially when the components significantly overlap. To solve the least square problem from Eq. 18, we set partial derivatives with respect to \((\sigma_{1}^{(k)})^{2}\) and \((\sigma_{2}^{(k)})^{2}\) to zero ( \(\frac{\partial L_{sq}}{\partial(\sigma_{1}^{(k)})^{2}}=\frac{\partial L_{sq}}{ \partial(\sigma_{2}^{(k)})^{2}}=0\) ). We get a linear \(2\times 2\) system: \(\underbrace{\begin{pmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{pmatrix}}_{\mathbf{M}_{k}}\mathbf{\sigma}_{k}^{2}=\mathbf{b }_{k}\) where \[\begin{split} M_{11}=\sum_{i=1}^{N}p_{ik}\sin^{4}(\varphi_{0}^{(k)}- \varphi_{i}),\\ M_{12}=M_{21}=\sum_{i=1}^{N}p_{ik}\sin^{2}(\varphi_{0}^{(k)}- \varphi_{i})\cos^{2}(\varphi_{0}^{(k)}-\varphi_{i}),\\ M_{22}=\sum_{i=1}^{N}p_{ik}\cos^{4}(\varphi_{0}^{(k)}-\varphi_{i }),\\ \mathbf{b}_{k}=\left(\begin{array}{c}\sum_{i=1}^{N}p_{ik}(s_{c _{i}}^{(k)})^{2}\sin^{2}(\varphi_{0}^{(k)}-\varphi_{i})\\ \sum_{i=1}^{N}p_{ik}(s_{c_{i}}^{(k)})^{2}\cos^{2}(\varphi_{0}^{(k)}- \varphi_{i})^{2}\end{array}\right),\\ \mathbf{\sigma}_{k}^{2}=\begin{pmatrix}(\sigma_{1}^{(k)})^{2}\\ (\sigma_{2}^{(k)})^{2}\end{pmatrix}.\end{split} \tag{32}\] The solution of the above system is \(\hat{\mathbf{\sigma}}_{k}^{2}=\mathbf{M}_{k}^{-1}\mathbf{b}_{k}\). Finally, from known estimates \(\hat{\sigma_{1}}^{(k)}\), \(\hat{\sigma}_{2}^{(k)}\) and \(\hat{\varphi}_{0}^{(k)}\) we can construct covariance matrix \(\mathbf{\Sigma}_{k}\). The covariance matrix can be decomposed as: \[\mathbf{\Sigma}_{k}=\mathbf{U}_{k}\mathbf{D}_{k}\mathbf{U}_{k}^{T} \tag{33}\] where \(\mathbf{D}_{k}=diag((\hat{\sigma}_{1}^{(k)})^{2},(\hat{\sigma}_{2}^{(k)})^{2})\) is a diagonal matrix and \(\mathbf{U}_{k}=[\mathbf{u}_{1}\quad\mathbf{u}_{2}]\) is a unitary (rotation) matrix. Corresponding eigenvectors \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) are unit vectors in direction of the principal axes. Eigenvectors can be expressed in terms of angle \(\varphi_{0}^{(k)}\) as \[\begin{split}\mathbf{u}_{1}=\begin{pmatrix}\cos(\hat{\varphi}_{0}^{ (k)})\\ \sin(\hat{\varphi}_{0}^{(k)})\end{pmatrix},\\ \mathbf{u}_{2}=\begin{pmatrix}\cos(\hat{\varphi}_{0}^{(k)}+\frac{ \pi}{2})\\ \sin(\hat{\varphi}_{0}^{(k)}+\frac{\pi}{2})\end{pmatrix}.\end{split} \tag{34}\] In Fig. 6, we can see eigenvectors and the corresponding square root of eigenvalues \(\hat{\sigma_{1}}^{(k)}\), \(\hat{\sigma}_{2}^{(k)}\). ## 4 Proposed Algorithm At the beginning, we arbitrarily initialize mean vectors, and randomly assign LoR-s to a subset of Gaussian components. We remind the reader that the number of components \(K\) is known in advance. The proposed algorithm consists of two steps. Both steps are iterative, and the first step provides for a rough estimate of the unknown parameters. In each iteration of the second step we recalculate the mean vectors, the covariance matrices, as well as the membership probabilities. ### First step - initialization Initially, we assign lines of response randomly to Gaussian components. Each component should have a roughly equal number of associated LoR-s \(\approx\frac{N}{K}\). In this step, we do the hard (or modal) classification of LoR-s. That means that each LoR belongs to only one of the components, i.e. the membership probabilities are binary, and the number of LoR-s of \(k\)-th component is \[\hat{L}_{k}=\sum_{i=1}^{N}p_{ik}. \tag{35}\] We calculate the mean vectors for each Gaussian component as described by Eq. 8 and 10. Then, we reassign the LoR-s by pairing them to the least distanced mean vectors. Notice that the membership probabilities are still binary. We repeat the iterative procedure (mean, reassignment), until the mean vectors changes are sufficiently small. From known probabilities \(p_{ik}\) (zeros and ones) and mean vectors for each component in the mixture, we calculate the initial estimate of the covariance matrices. Thanks to the analytical results presented in the previous Section, the calculation is straightforward. We estimate \(\sigma_{1}^{(k)}\) and \(\sigma_{2}^{(k)}\) from moments for each component separately as described in Eq. 21, 22 and 24. Afterward, we calculate \(\hat{\varphi}_{0}^{(k)}\) as discussed in the previous Section. Then we reevaluate \(\hat{\sigma}_{1}^{(k)}\) and \(\hat{\sigma}_{2}^{(k)}\) as stated in Eq. 32 and update \(\hat{\varphi}_{0}^{(k)}\) with new \(\hat{\sigma}_{1}^{(k)}\) and \(\hat{\sigma}_{2}^{(k)}\). Finally, we calculate each covariance matrix via eigenvalue decomposition (Eq. 33). ### Second step The second step is similar to the first step. The main difference is the way we calculate probabilities \(p_{ik}\). Instead of hard classification, in this step we utilize soft (proportional) classification. Let us focus on only one LoR with a total of \(K\) components in the mixture. Figure 6: Interpretation of eigenvectors and eigenvalues of covariance matrix We want to determine \(p_{ik}\) - the probability that \(i\)-th line belongs to \(k\)-th component. For each component, we calculate the line integral: \[\tilde{p}_{ik}=\int_{\gamma_{i}}\tau_{k}f_{G}(\mathbf{x};\mathbf{\mu}_{k},\mathbf{\Sigma}_{k })d\gamma_{i}, \tag{36}\] where \(f_{G}\) denoted a bivariate normal distribution and \(\tau_{k}=\frac{L_{k}}{N}\) is an estimated weight of the \(k\)-th component. The mean vector \(\mathbf{\mu}_{k}\) and covariance matrix \(\mathbf{\Sigma}_{k}\) are known from the previous step. Mapping \(\gamma_{i}\) is a parametrization of the line of response we are currently observing. The solution to this integral is a point from the univariate normal distribution (see A and Eq. 13). Keep in mind that, since \(i\)-th response have occurred, the overall probability must be 1. Therefore, we define \[p_{ik}=\frac{\tilde{p}_{ik}}{\sum_{k=1}^{K}\tilde{p}_{ik}}. \tag{37}\] We keep the same ratio as in \(\tilde{p}_{ik}\), but force the probabilities to add up to one for each LoR. Additionally, the estimation of the number of LoR-s from Eq. 35 still holds. After we updated all membership probabilities \(p_{ik}\), we calculate a new mean vector \(\mathbf{\mu}_{k}\) as already described (Eq. 10). With newly estimated mean vectors, we determine new covariance matrices \(\mathbf{\Sigma}_{k}\) in the same way as presented in the last subsection. Then we repeat the procedure, i.e. we calculate \(p_{ik}\) from \(\mathbf{\mu}_{k}\) and \(\mathbf{\Sigma}_{k}\). This can be described in 8 steps: 1. Calculate \(p_{ik}\) (Eq. 36 and Eq. 37) for all lines \(i\) and all components \(k\) that best fit the current model. 2. Estimate the mean vector for each component according to newly obtained \(p_{ik}\). 3. Calculate \(\hat{\sigma}_{1}^{(k)}\) and \(\hat{\sigma}_{2}^{(k)}\) from Eq. 24. 4. Determine \(\hat{\varphi}_{0}^{(k)}\) from Eq. 30 and 31. 5. Reevaluate \(\hat{\sigma}_{1}^{(k)}\) and \(\hat{\sigma}_{2}^{(k)}\) from Eq. 32. 6. Update \(\hat{\varphi}_{0}^{(k)}\) from Eq. 30 and 31 with new \(\hat{\sigma}_{1}^{(k)}\) and \(\hat{\sigma}_{2}^{(k)}\). 7. Use eigenvalue decomposition to calculate the covariance matrix (Eq. 33). 8. Do 3)-7) for every component in the mixture (\(k=1,2,3,..,K\)). 9. Repeat 1)-8) until the change in estimated weight of all components is sufficiently small. We obtained all mixture parameters: \(\mathbf{\mu}_{k}\) and \(\mathbf{\Sigma}_{k}\) and \(\tau_{k}\). Such a set of parameters represents continuous model of the reconstructed image. Pay attention that the proposed algorithm is numerically efficient, in spite of the number of equations in the previous section.1 ## 5 Results Our test mixture consists of three Gaussian components, in which two components highly overlap. The first component has mean vector \(\boldsymbol{\mu}_{1}=\begin{pmatrix}0\\ 0\end{pmatrix}\) and covariance matrix \(\boldsymbol{\Sigma}_{1}=\begin{pmatrix}0.0625&0\\ 0&0.0625\end{pmatrix}\). The second component highly overlaps with the first component and its parameters are \(\boldsymbol{\mu}_{2}=\begin{pmatrix}-0.4\\ -0.4\end{pmatrix}\) and \(\boldsymbol{\Sigma}_{2}=\begin{pmatrix}0.04&0.03\\ 0.03&0.09\end{pmatrix}\). The third component has the mean \(\boldsymbol{\mu}_{3}=\begin{pmatrix}1.25\\ -1\end{pmatrix}\) and covariance \(\boldsymbol{\Sigma}_{3}=\begin{pmatrix}0.04&0.006\\ 0.006&0.01\end{pmatrix}\). During the simulation, we created \(N_{1}=3500\) lines that originate from the first component, \(N_{2}=2500\) lines that originate from the second component, and \(N_{3}=1000\) from the last component. Hence, the corresponding weights in the Gaussian mixture model are \(\tau_{1}=0.5\), \(\tau_{2}=\frac{5}{14}\approx 0.36\), and \(\tau_{3}=\frac{1}{7}\approx 0.14\). The PDF of our GMM is (as depicted in Fig 7): \[\begin{split} g(x,y)=\tau_{1}\cdot f_{G}(x,y;\boldsymbol{\mu}_{1 },\boldsymbol{\Sigma}_{1})+\tau_{2}\cdot f_{G}(x,y;\boldsymbol{\mu}_{2}, \boldsymbol{\Sigma}_{2})\\ +\tau_{3}\cdot f_{G}(x,y;\boldsymbol{\mu}_{3},\boldsymbol{\Sigma}_{3}), \end{split} \tag{38}\] where \(f_{G}\) is bivariate normal distribution as in Eq 3. We have repeated the simulation \(n=100\) times with the same parameters and applied the proposed algorithm to determine the unknown parameters of the Gaussian mixture model. To check our algorithm, we measure four errors. The first one measures an error between Figure 7: The probability density function of the GMM (Ground truth) the ideal and estimated mean vector \[\mathbf{\mu}_{er_{i}}=||\hat{\mathbf{\mu}}_{i}-\mathbf{\mu}_{i}||_{2}. \tag{39}\] The second one evaluates the error between the ideal and estimated covariance matrix \[\mathbf{\Sigma}_{er_{i}}=||\hat{\mathbf{\Sigma}}_{i}-\mathbf{\Sigma}_{i}||_{F}. \tag{40}\] Also, we measure the error between the ideal and estimated weights in the GMM \[\tau_{er_{i}}=|\hat{\tau}_{i}-\tau_{i}|. \tag{41}\] Unlike the mentioned measures that deal with each parameter separately, the last one is the Kullback-Leibler (KL) divergence. It measures an overall error, since it compares the two PDF-s: \[D_{KL}(\hat{g}|g)=\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty }\hat{g}(x,y)\log\left(\frac{\hat{g}(x,y)}{g(x,y)}\right)dxdy, \tag{42}\] where \(g(x,y)\) is the ideal PDF (Eq. 38) and \(\hat{g}(x,y)\) is the estimate of the PDF obtained via the proposed algorithm. In Figure 8, we can see mean, covariance, and components weight error in each simulation. Pay attention that the non-overlapping component has smaller errors. Moreover, all errors are very small, and the averages are presented in Table 1. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Component & Avg. Mean & Avg. Cov. & Avg. Weight \\ \hline First & 0.035 & 0.014 & 0.019 \\ Second & 0.029 & 0.021 & 0.018 \\ Third & 0.011 & 0.004 & 0.002 \\ \hline \end{tabular} \end{table} Table 1: Average error of parameters of GMM in 100 simulations Figure 8: Errors of individual parameters of GMM in each simulation The KL divergence of 100 simulations is depicted in Fig. 9. The calculation of the KL divergence is done numerically. The KL divergence is always smaller than 0.023, while the mean value of all simulations is 0.013. From all that has been said, we conclude that the estimated PDF of GMM highly resembles the true PDF. A typical reconstruction of the observed setup can be seen in Fig. 10. As expected, the difference between the true PDF (Fig. 7) and the estimated one is not noticeable. ## 6 Conclusion We presented a novel method for estimation of unknown parameters of the Gaussian mixture model in Positron Emission Tomography. In contrast to competitive reconstruction methods based on pixel or voxel grid, the obtained GMM is continuous, and virtually of infinite resolution. It can be directly used for further analytical processing. There are several well-known GMM estimation methods based on samples. In PET imaging, the samples are not known. The challenge was to estimate the GMM parameters from projections, namely from the lines of response that fires from unknown radioactive decay points under some random angles. We solved the problem analytically, and proposed a systematic approach for obtaining the unknown parameters, under the assumption that the number of Gaussian components is known. First, we estimate the mean vectors for each component in the mixture. For each component, we get a \(2\times 2\) linear system of equations. Then, the covariance matrices of each component are estimated in four steps. In the first step, we propose to use higher moments of each Gaussian component to estimate variances in the principal axes. Then, we obtain the direction of the larger principal axis. In the third step, we additionally tune the variance estimates. Finally, we use the eigenvalue decomposition to get the desired covariance matrix of each component. Weights of Gaussian components are given by the membership probabilities. Described steps are integrated into an iterative ML-like algorithm. The results presented in this paper show that recovery of the unknown parameters is Figure 9: KL divergence in each simulation possible even when two components significantly overlap. A relatively small number of LoR-s is needed for an accurate reconstruction, thus eventually leading to lower radiation doses in the PET imaging. ## Funding This research was supported by the Croatian Science Foundation [IP-2019-04-6703]. ## Appendix A Line integration of centered and not rotated bivariate normal distribution In this appendix we present a detailed procedure on how to calculate an integral stated in Eq. 11: \[f_{p_{0_{\varphi}}}(s)=\int_{\gamma_{\varphi}}f_{Gc}(x,y)d\gamma_{\varphi}.\] We remind the reader that \(|\varphi|\leq\frac{\pi}{2}\). Since we are dealing with a line integral, we calculate \(||\gamma^{\prime}_{\varphi}(t)||_{2}\): \[||\gamma^{\prime}_{\varphi}(t)||_{2}=||(1,k)||_{2}=\sqrt{1+k^{2}}=\frac{1}{ \cos(\varphi)}.\] We calculate \(f_{p_{0_{\varphi}}}(s)=\int_{\gamma_{\varphi}}f_{Gc}(x,y)d\gamma_{\varphi}\): Figure 10: Estimated PDF of the GMM \[\int\limits_{\gamma_{\varphi}}f_{Gc}(x,y)d\gamma=\int\limits_{-\infty}^{+ \infty}\frac{1}{2\pi\sigma_{1}\sigma_{2}}\exp(-(\frac{t^{2}}{\sigma_{1}^{2}}+ \frac{(kt+l)^{2}}{\sigma_{2}^{2}}))\cdot||\gamma^{\prime}(t)||dt=\] \[\frac{\exp(-\frac{t^{2}}{2\sigma_{2}^{2}})}{2\pi\sigma_{1}\sigma_{2}\cos( \varphi)}\int\limits_{-\infty}^{+\infty}\exp(-(t^{2}(\frac{1}{2\sigma_{1}^{2}} +\frac{k^{2}}{2\sigma_{2}^{2}})+\frac{kl}{\sigma_{2}^{2}}t))dt=\] \[\frac{\exp(-\frac{t^{2}}{2\sigma_{2}^{2}})\exp(\frac{k^{2}t^{2}}{4\sigma_{2}^{ 2}}\frac{1}{2\sigma_{1}^{2}}+\frac{k^{2}}{2\sigma_{2}^{2}})}{2\pi\sigma_{1} \sigma_{2}\cos(\varphi)}\int\limits_{-\infty}^{+\infty}\exp(-(t\sqrt{\frac{1 }{2\sigma_{1}^{2}}+\frac{k^{2}}{2\sigma_{2}^{2}}})+\frac{1}{\sqrt{\frac{1}{2 \sigma_{1}^{2}}+\frac{k^{2}}{2\sigma_{2}^{2}}}}\frac{kl}{2\sigma_{2}^{2}})^{2 })dt=\] \[\frac{1}{2\pi\sigma_{1}\sigma_{2}\cos(\varphi)}\frac{\sqrt{\pi}}{\sqrt{\frac{1 }{2\sigma_{1}^{2}}+\frac{k^{2}}{2\sigma_{2}^{2}}}}\exp(\frac{-l^{2}}{2\sigma_ {2}^{2}}(1-\frac{2\sigma_{1}^{2}\sigma_{2}^{2}}{\sigma_{2}^{2}+k^{2}\sigma_{1} ^{2}}\frac{1}{2\sigma 2^{2}}))=\] \[\frac{1}{\sqrt{2\pi}\sigma_{1}\sigma_{2}\cos(\varphi)}\frac{\sigma_{1}\sigma_{ 2}}{\sqrt{\sigma_{2}^{2}+k^{2}\sigma_{1}^{2}}}e^{\frac{-l^{2}}{2(\sigma_{2}^{ 2}+k^{2}\sigma_{1}^{2})}}=\] \[\frac{1}{\sqrt{2\pi}}\frac{1}{\cos(\varphi)\sqrt{\sigma_{2}^{2}+\tan^{2}( \varphi)\sigma_{1}^{2}}}\exp(\frac{-s^{2}}{2\cos^{2}(\varphi)(\sigma_{2}^{2}+ \tan^{2}(\varphi)\sigma_{1}^{2})})=\] \[\frac{1}{\sqrt{2\pi}\sqrt{\sigma_{2}^{2}\cos^{2}(\varphi)+\sigma_{1}^{2}\sin^ {2}(\varphi)}}\exp(\frac{-s^{2}}{2(\sigma_{2}^{2}\cos^{2}(\varphi)+\sigma_{1}^{ 2}\sin^{2}(\varphi))})=\] \[\frac{1}{\sqrt{2\pi}\sigma_{p_{0}}(\varphi)}\exp(-\frac{s^{2}}{2\sigma_{p_{0} }(\varphi)^{2}}).\] Here, we used known result: \[\int\limits_{-\infty}^{+\infty}\exp(-(at+b)^{2})dt=\frac{\sqrt{\pi}}{|a|},\ \ a \neq 0.\]
2305.00004
Accurate ignition detection of solid fuel particles using machine learning
In the present work, accurate determination of single-particle ignition is focused on using high-speed optical diagnostics combined with machine learning approaches. Ignition of individual particles in a laminar flow reactor are visualized by simultaneous 10 kHz OH-LIF and DBI measurements. Two coal particle sizes of 90-125{\mu}m and 160-200{\mu}m are investigated in conventional air and oxy-fuel conditions with increasing oxygen concentrations. Ignition delay times are first evaluated with threshold methods, revealing obvious deviations compared to the ground truth detected by the human eye. Then, residual networks (ResNet) and feature pyramidal networks (FPN) are trained on the ground truth and applied to predict the ignition time.~Both networks are capable of detecting ignition with significantly higher accuracy and precision. Besides, influences of input data and depth of networks on the prediction performance of a trained model are examined.~The current study shows that the hierarchical feature extraction of the convolutions networks clearly facilitates data evaluation for high-speed optical measurements and could be transferred to other solid fuel experiments with similar boundary conditions.
Tao Li, Zhangke Liang, Andreas Dreizler, Benjamin Böhm
2023-04-20T21:10:14Z
http://arxiv.org/abs/2305.00004v1
# Accurate Ignition Detection of Solid Fuel Particles Using Machine Learning ###### Abstract In the present work, accurate determination of single-particle ignition is focused on using high-speed optical diagnostics combined with machine learning approaches. Ignition of individual particles in a laminar flow reactor are visualized by simultaneous 10 kHz OH-LIF and DBI measurements. Two coal particle sizes of 90 - 125 \(\mathrm{\SIUnitSymbolMicro m}\) and 160 - 200 \(\mathrm{\SIUnitSymbolMicro m}\) are investigated in conventional air and oxy-fuel conditions with increasing oxygen concentrations. Ignition delay times are first evaluated with threshold methods, revealing obvious deviations compared to the ground truth detected by the human eye. Then, residual networks (ResNet) and feature pyramidal networks (FPN) are trained on the ground truth and applied to predict the ignition time. Both networks are capable of detecting ignition with significantly higher accuracy and precision. Besides, influences of input data and depth of networks on the prediction performance of a trained model are examined. The current study shows that the hierarchical feature extraction of the convolutions networks clearly facilitates data evaluation for high-speed optical measurements and could be transferred to other solid fuel experiments with similar boundary conditions. ## Introduction Particle ignition is an essential stage for the flame stability of a pulverized fuel stream, which has been a research topic since years [1, 2, 3, 4]. In general, particle ignition can be classified into two modes: homogeneous and heterogeneous ignition. In homogeneous ignition, volatile matters, including hydrogen (H\({}_{2}\)), hydrocarbons (C\({}_{x}\)H\({}_{y}\)), and tars, are released from particles upon increasing temperatures. These fuel gases, mixing with oxidizer, ignite above the flammable temperature and mixture fraction limits, resulting in a gas-phase flame in the vicinity of the particle. In heterogeneous ignition, oxidizers approach the particle surface followed by direct surface reactions. The ignition mode can be influenced by coal rank, particle size, heating rate, and gas composition. On the one hand, particle surface temperature is a widely accepted parameter for determining heterogeneous ignition. A well-calibrated two- or three-color pyrometer can provide conclusive information about particle temperature and heterogeneous combustion [5, 6, 7]. On the other hand, indicators and corresponding measurement techniques are rather diverse for homogeneous combustion. Here, particle temperature is still an essential parameter in experiments, see for example [8, 9, 10, 11, 12, 13, 14]. However, the pyrometric temperature closely relates to hot soot and char particles. Recent advancements in high-speed imaging enables tracking the particle combustion history with sufficient temporal and spatial resolutions [15, 16, 17, 10]. These investigations define the gas-phase ignition on the first visible broad-band emission signal recorded by high-speed cameras in the kHz range. Methylidyne (CH) chemiluminescence was also used as an ignition indicator, which can be imaged in a time-integrated [18] or single-shot [7] manner. For single-shot measurements, intensified cameras are required due to the weak CH* chemiluminescence signals. In addition, soot particles were imaged by an intensified camera to derive the homogeneous ignition time [19]. Since the CH* correlates with the heat release in the gas phase, it is a better indicator than the black-body radiation of soot particles. Unfortunately, CH* emission (at about 430 \(\mathrm{nm}\)) spectrally overlaps with the broad-band black-body emission. Besides CH radicals, the hydroxyl (OH) radical has been used as an important flame marker, which abundantly exists in the reaction zone and the burned gas. The first planar laser-induced fluorescence of OH radicals (OH-LIF) on the single-particle level was reported in [20]. The homogeneous ignition process of individual particles was visualized at 10 kHz with a spatial resolution of approximately 100 um. The technique enabled the evaluation of single-particle volatile flame structures with different particle sizes and atmospheres [21]. Further experiments, supplemented by simultaneous flame luminosity and diffused backlight-illumination (DBI), indicated the distinctive appearance of OH-LIF and luminosity signals at the onset of ignition, suggesting OH-LIF as a favorable diagnostic approach for homogeneous ignition detection [22]. Accurate ignition detection depends on combustion conditions, experimental techniques, as well as the definition of the ignition event [23]. The onset of ignition is usually defined based on a certain intensity threshold [7, 18, 22] above the background level or the evaluation of signal topology [19, 24]. Parameter modification in the image analysis is unavoidable if a different particle or particle size is investigated, or another detection system is used. This hurdle can be tackled by deep learning for a better feature extraction. Deep learning approaches are capable of learning image features and making predictions after training on a data set. Recently, convolution neural networks (CNN) have been applied in experimental studies to predict velocity fields [25] and 3D flame structures [26]. In the present work, high-speed multi-parameter laser diagnostics are assisted by deep learning approaches, providing accurate ignition delay time based on object classification and detection architectures. Considering its importance and challenges, the current study emphasizes on the accurate ignition detection in solid fuel combustion, which was not fully addressed in the previous experiment. Homogeneous ignition of high-volatile solid fuels at realistic heating rates is particularly targeted. For this purpose, simultaneous OH-LIF and DBI measurements were applied for single particles burning in a laminar flow reactor (LFR), producing a comprehensive database. Homogeneous ignition is evaluated for two particle sizes under air and oxy-fuel atmospheres. Two main deep learning architectures, namely residual network (ResNet) and feature pyramidal network (FPN) are implemented. The influence of training data, network depth, and pre-training on the capability for ignition prediction is carefully examined. ### Experiments Multi-parameter optical diagnostics were employed to investigate the fundamental processes of single coal particles during ignition and volatile combustion in laminar flows. The experimental details were reported in the previous work [24] and will be briefly introduced here. An in-house laminar flow reactor was designed to generate desired gas atmospheres simulating temperatures and species concentrations in realistic conditions. Premixed lean methane (CH\({}_{4}\)) flat flames were stabilized on the surface of a ceramic matrix. By operating inlet gas mixtures of CH\({}_{2}\)/O\({}_{2}\)/N\({}_{2}\) or CH\({}_{2}\)/O\({}_{2}\)/CO\({}_{2}\), conventional (AIR) and oxy-fuel (OXY) atmospheres were generated in the exhaust gas with homogeneous temperature and velocity fields. These conditions were denoted as AIR10/20/30/40 and OXY20/30/40, with the number indicating the volumetric oxygen concentrations of the exhaust gas. Micrometer-sized bituminous coal particles were seeded individually into the burner by carrier gases having the same molecular composition and velocity as the flat flame inlet. Figure 1 illustrates the multi-parameter optical measurement and a sketch of the burner. Gas-phase volatile flames were visualized by two-dimensional high-speed OH-LIF at 10 kHz. OH-LIF signals were excited at 283.01 nm by a dye laser system and collected by a CMOS camera (HSS6, LaVision) coupled with a lens-coupled intensifier (HS-IRO, LaVision). A narrow bandpass filter and a short gate time were applied to eliminate interfering thermal radiation and chemiluminescence. Particle ignition time can be determined by tracking the temporal variation of OH-LIF signal topology [24]. Besides, particles positions and velocities are simultaneously detected by 10 kHz DBI measurements. A high-power LED illuminated coal particles at 550 nm and projected particle shadows were imaged onto another CMOS camera (HSS6, LaVision). This camera was equipped with a long-distance microscope to enhance the spatial resolution estimated as approximately 20 um at the burner center position [27]. The DBI tech nique allowed for an accurate determination of particle size, shape, and velocities [24, 27]. As illustrated in Fig.1, the difference in resolutions led to a smaller field of view for DBI (i.e. \(11(\text{height})\times 5(\text{width})\,\text{mm}^{2}\)) than for OH-LIF (i.e. \(19\times 19\,\text{mm}^{2}\)). Since the homogeneous ignition of single particles occurs within a few millimeters above the burner, the ignition process can be fully captured by both imaging techniques. More details regarding experimental methodology are referred to [24]. #### Data evaluation Colombian high-volatile bituminous coal particles sieved to two size distributions of 90 -125 \(\upmu\)m and 160 -200 \(\upmu\)m were investigated, which are referred as particles A and B. With an approximately \(10^{5}\) K/s particle heating rate [24, 27], homogeneous ignition of released volatiles dominates the particle ignition mode. The ignition process was captured in the so-called single-particle event, in which an individual particle moves through the probe volume without interacting with other particles. In total, 1006 events and 512 events were detected for particles A and B, respectively. Due to the larger diameter, particle B was more difficult to seed through the 0.8 mm injection tube, resulting in lower probabilities of particle detection. An example of a single-particle event is presented in Fig. 2, temporally conditioned on the ignition instance at \(t_{\text{ign}}\) (every tenth images shown). Figure 2(a) shows the temporal evolution of OH-LIF intensities normalized on the homogeneous background signal. By applying a constant Figure 1: A schematic experimental layout including optical diagnostics and the laminar flow reactor. Figure 2: A time-resolved sequence of particle ignition with \(t_{\text{ign}}\) given by the ground truth (manual labeling). (a) OH-LIF raw images. (b) binary OH-LIF images. intensity threshold \(I_{\text{th}}\) of 1.2 [24], corresponding binary images are shown in Fig.2(b). In the previously proposed signal and structure (SAS) analysis [24], the onset of ignition is defined if the normalized intensity and the connected area exceed respective thresholds. The center points of particles (red solid dots) are shot-by-shot determined from the DBI data. Since all cameras are spatially mapped based on an accurate target calibration, DBI center points reveal as good references to locate particles in OH-LIF images. To assess the accuracy of different methods for ignition detection, a ground truth of ignition delay time needs to be defined for each particle event. For this purpose, the ignition frame of all 1518 events is manually labeled the same person, who is instructed to select the first OH-LIF image, which contains recognizable flame structures against the background. The performance for predicting single-particle ignition can be evaluated using the ignition time difference (ITD): \[\text{ITD}_{\text{detector}}=t_{\text{i,detector}}-t_{\text{i,gt}}. \tag{1}\] Here, \(t_{\text{i,detector}}\) and \(t_{\text{i,gt}}\) indicate the ignition delay time determined by a detector and ground truth, respectively. If ITD \(>\) 0, ignition time is over-predicted, otherwise ignition time is under-predicted. A statistical analysis was performed for each particle size to compare the accuracy and precision of each detector or data processing method. Two different deep learning networks are implemented, which belong to object classification (i.e, residual networks, ResNet) and object detection (feature pyramidal networks, FPN) in the context of computer vision. Image classification works on an entire image or an image segment and classifies the image into a category, whereas object detection specifies multiple objects within an image into categories (e.g. classification) with their locations represented by bounding boxes (e.g. bounding box regression). For brevity, the architecture of deep learning networks are not elaborated here, and more details of related work are referred to [28] for ResNet, [29] for R-CNN, [30] for Fast R-CNN, [31] for Faster R-CNN, and [32] for FPN. #### 2.0.2 Results and Discussions _Ignition detected by SAS_ Figure 3(a) shows the ignition delay time determined by ground truth (\(t_{\text{i,gt}}\)) for particles A and B under seven investigated atmospheres. Here, symbols and error bars indicate the mean value (\(\mu\)) and \(\pm\) one standard deviation (\(\sigma\)), respectively. The ignition delay time is referenced to the flat flame position (\(y\) \(\approx\) 1.5 mm) where particles start to heat up by the hot exhaust gas. Basically, the overall trend over particle diameters and atmospheres is very similar to \(t_{\text{i,SAS}}\) in [24]. Effects of particle size, oxygen concentration, and CO\({}_{2}\) replacement can be observed, and more details can be found in [24]. Differences between \(t_{\text{i,gt}}\) and \(t_{\text{i,SAS}}\) are observed. The over-prediction of ignition times by the SAS analysis can be quantified by calculating the ignition time difference ITD for each single-particle event, as shown in Figure 3(b). Overall good agreement is archived by the SAS analysis for the small particles A, whereas some discrepancies can be identified for the large particle B. Specifically, the SAS analysis over-predicts the ignition delay time of particles B by 2 - 4 ms in all atmospheres, which is presumably caused by the improper selection of intensity and area thresholds. This challenge motivated the application of machine learning for the more generalized feature detection of ignited particles. _Ignition detected by ResNets_ In machine learning approaches, only 35% of the entire events are used to train ResNet models [28], which corresponds to 462 particle events. The remaining 65% (1056 events) are testing data, allowing for sufficient statistical comparisons with ground truth and the performance evaluation of different architectures. However, if the 35% data were further divided into training data and validation data, the number of training data would be insufficient to obtain converge networks. Therefore, the \(k\)-folds cross-validation approach is consistently used to train object classification networks. With \(k\)-folds cross-validation, the entire training set is split into \(k\) sets. One by one, a set is selected for validation, and the \(k-1\) other sets are combined into the corresponding training set. 5-folds cross-validation is used, and 10 epochs are trained for each fold. A predicted probability higher than 50% is classified as an ignited particle in testing data. Several ResNet-18 models are trained with an increasing number of particle events, which are equally selected from different operating conditions, in order to examine the influence of training data. Figure 4 shows the ignition time difference between ground truth \(t_{\text{t,gt}}\) and \(t_{\text{i,RN}}\) predicted by ResNet-18 which is trained with increasing particle events \(N_{\text{ev}}\) = 14, 56, 140, and 462. They correspond to 1, 4, 10, and 33 (equivalent) particle events selected from each atmosphere and particle size. Note that new data are always added into the training by retaining the existing data, e.g., the original 14 events are included in the 56 events and so on. For a reasonable comparison, 1056 particle events are consistently used for the prediction purpose. Generally, ResNet-18 achieves a higher precision with less scattered results than the SAS methods. With more data fed into the training process, mean values of \(t_{\text{i,RN}}\) - \(t_{\text{i,gt}}\) (ITD) approach 0 ms indicating an continuously improving accuracy in predicting the ground truth. With a small amount of data, e.g. \(N_{\text{ev}}\leq 140\), ITD distributions shift in either negative or positive side. It implies that the networks are still fragile to the newly added training data, as weight and bias parameters are not sufficiently trained with a limited number of events. With \(N_{\text{ev}}\) increased to 462, the predictive accuracy becomes equally high for particle A and B. Furthermore, error bars in Fig. 4 constantly narrow with increasing data involved in the training process. It is obvious that the number of images used for training improves the precision of ignition prediction. For ignition detection, 35% of data for training is proposed as a reasonable compromise between Figure 4: Ignition time difference \(t_{\text{i,RN}}\) - \(t_{\text{i,gt}}\) by using ResNet-18 with the amount of particle events (a) \(N_{\text{ev}}\) = 14, (b) \(N_{\text{ev}}\) = 56, (c) \(N_{\text{ev}}\) = 140, and (d) \(N_{\text{ev}}\) = 462. Figure 3: Comparison of ignition delay times by the SAS method \(t_{\text{i,SAS}}\) and the manual label \(t_{\text{i,gt}}\) for two particle sizes A and B in seven atmospheres. the performance of trained networks and the expanse of manual labeling whenever a new data set is under evaluation. #### 3.2.2 Ignition detected by FPN For FPN [32], training loss converges, and 5 epochs are trained to reduce the computational cost. A particle will be identified as \(ignited\) for the final classification if its predicted probability is higher than 50%. Figure 5 compares predictive results of FPN by implementing ResNets with an increasing depth in the bottom-up pathway. All models are trained with 462 particle events. The ignition time difference \(t_{i,\text{FPN}}\) - \(t_{i,\text{gt}}\) is represented by its mean and standard deviation. An overall good agreement between FPN prediction and ground truth is observed, which is independent on atmosphere but on particle size. Compared to the SAS method in Fig.3, evident improvements are achieved in predictive accuracy and precision. However, compared to ResNet-18 in Fig 4(d), the ITD scatters more in FPN by using the same training data. In addition, it can be noted that increasing backbone convolutions layers narrows the error bars and improves the prediction precision. But no further improvements can be noticed comparing backbone ResNet-101 with ResNet-50. #### 3.2.3 Conclusions The current work presents an experimental investigation of single particle ignition using high-speed optical diagnostics in a laminar flow reactor. Homogeneous ignition is visualized by 10 kHz OH-LIF measurements with the simultaneously detected particle location in DBI measurements. Accurate detection of ignition delay times is focused on with both conventional threshold methods and advanced machine learning approaches. The prediction performance of different approaches for ignition detection are conclusively compared in Fig. 6. The mean \(\mu\) and standard deviation \(\sigma\) of ITD are evaluated by including all atmospheres and further used to generate normal distributions \(\mathcal{N}(\mu,\,\sigma)\) approximating the overall prediction performance for particles A and B. The previously introduced SAS method provides satisfactory predictions for small particles but substantially over-estimates large particles' ignition. It is because the accuracy and precision inherently relate with the threshold selection. Owing to the changing characteristics of OH-LIF signals with particles and atmospheres, fixed thresholds for area and intensities might induce errors and restrict the detection quality. Sensitivity analyses (not shown) clearly indicates the distinctive signals, especially the intensity levels at the onset of ignition, which makes Figure 5: Ignition time difference \(t_{i,\text{FPN}}\) - \(t_{i,\text{gt}}\) using different ResNet models in the bottom-up pathway of FPN networks. conventional threshold methods incapable for thus a task. Although optimum thresholds can be statistically obtained by minimizing the difference between prediction and ground truth, this method is not really viable when dealing with a new set of data, owing to the in-prior knowledge required for optimal thresholding. To avoid the difficulties in optimizing algorithm parameters, convolutions networks with hierarchical feature extraction are implemented. Figure 6(b) and (c) show the results provided by the best ResNet model (i.e. pre-trained ResNet-18 fine-tuned with 462 events) and the best FPN model (i.e. backbone ResNet-50 trained with 462 events), respectively. Evidently, enhanced quality of ignition detection is achieved, which is superior to the conventional processing approach investigated in this study. This can be explained by the feature recognition over different scales, which is inherently included in the hierarchy of convolutional layers in their architectures. As a result, ResNet-18 achieves the most accurate and precise ignition delay time compared with ground truth. More complex networks such as FPN promote no further improvements but slight under-estimation of ignition delay times. However, training ResNet-18 involves an additional processing step of RoI extraction from the particle center. Despite FPN models also require pre-estimated particle positions but is able to work on an entire image. Although these models are heavier and need a longer training time, they could be further developed to detect multi-particle ignition in the future. Regarding the simple features of ignited coal particles, it can be concluded that both object classification and detection approaches of machine learning are valuable for solid fuel combustion analysis. Residual and feature pyramidal networks are appropriate architectures for such evaluation tasks and have the potential to be transferred to other experimental investigations. ## Acknowledgements This work was founded by the Hessian Ministry of Higher Education, Research, Science and the Arts - cluster project Clean Circles.
2304.06984
On equilibria of tetrahedra
The monostatic property of polyhedra (i.e. the property of having just one stable or unstable static equilibrium point) has been in a focus of research ever since Conway and Guy \cite{Conway} published the proof of the existence of the first such object. In the same article they also proved that a homogeneous tetrahedron has at least two stable equilibrium points. By using polar duality, the same idea has been used \cite{balancing} to prove that a homogeneous tetrahedron has at least two unstable equilibria. Conway \cite{Dawson} also claimed that among inhomogeneous tetrahedra one can find monostable ones. Here we not only give a formal proof of this statement and show that monostatic tetrahedra have exactly 4 equilibria, but also demonstrate a startling new aspect of this problem: being monostatic implies certain \emph{visible} features of the shape and vice versa. Our results also imply that mono-monostatic tetrahedra (having just one stable and just one unstable equilibrium point) do not exist. In contrast, we show that for any other legal number of faces, edges, and vertices there is a mono-monostatic polyhedron with that face vector.
Gergő Almádi, Robert J. MacG. Dawson, Gábor Domokos, Krisztina Regős
2023-04-14T08:11:54Z
http://arxiv.org/abs/2304.06984v1
# On equilibria of tetrahedra ###### Abstract. The monostatic property of polyhedra (i.e. the property of having just one stable or unstable static equilibrium point) has been in a focus of research ever since Conway and Guy [2] published the proof of the existence of the first such object. In the same article they also proved that a homogeneous tetrahedron has at least two stable equilibrium points. By using polar duality, the same idea has been used [7] to prove that a homogeneous tetrahedron has at least two unstable equilibria. Conway [3] also claimed that among inhomogeneous tetrahedra one can find monostable ones. Here we not only give a formal proof of this statement and show that monostatic tetrahedra have exactly 4 equilibria, but also demonstrate a startling new aspect of this problem: being monostatic implies certain _visible_ features of the shape and vice versa. Our results also imply that mono-monostatic tetrahedra (having just one stable and just one unstable equilibrium point) do not exist. In contrast, we show that for any other legal number of faces, edges, and vertices there is a mono-monostatic polyhedron with that face vector. Key words and phrases:polyhedron, static equilibrium, monostatic polyhedron 2010 Mathematics Subject Classification: 52B10, 77C20, 52A38 GD and GA: Support of the NKFIH Hungarian Research Fund grant 134199 and of grant BME FIKP-VIZ by EMMI is kindly acknowledged. KR This research has been supported by the program UNKP-22-3 by ITM and NKFIH. The gift representing the Albrecht Science Fellowship is gratefully appreciated. Here we focus on the weighted case. We will exhibit conditions on a (non-regular) tetrahedron \(\mathcal{T}\) equivalent to the existence of a weighting \((\mathcal{T},O)\) making it monostable; and we will offer necessary and sufficient conditions for a weighted tetrahedron to illustrate Conway's claim. (While this is not hard to construct, there does not appear to be a published example.) Although never stated explicitly, one underlying question about monostability has always been whether it could be a _visible_ property of any given shape. In general, this does not seem to be the case: the extreme (0.1%) shape tolerance of the Gombic shape signal that it would be hard to select a mono-monostatic shape based alone on visual inspection. Indeed, in case of smooth shapes it was shown [11] that mono-monostatic ones have minimal flatness and thinness, thus they appear in the vicinity of the sphere. Here we show a startling feature of tetrahedra, where both monostability and mono-instability are reflected in the shape in a unique manner. Not only are these extrinsic features beautiful, they also appear to be at the heart of the phenomenon: building on these shape characteristics we will show that any monostable tetrahedron is _bi-unstable_ - that is, it has equilibrium on exactly two vertices. Neither Gomboc-like mono-monostatic weighted tetrahedra, nor monostable tetrahedra with three or four unstable equilibria, exist. However, we will show that if a tetrahedron has a weighting making it monostable on one face, then it must have other weightings making it monostable on each of its faces. We will also show that while, in general, the physics of tipping bodies in three or more dimensions is highly complicated, the tipping of a monostable tetrahedron can be completely described knowing its shape and center of mass. The _polar dual_ of a polyhedron \(\mathcal{P}\) is the set \(\mathcal{P}^{*}=\{x:x\cdot p\leq 1\text{ for all }p\in\mathcal{P}\}\). If \(\mathcal{P}\) is convex, \(\mathcal{P}^{**}=\mathcal{P}\). The polar dual of a tetrahedron is also a tetrahedron; and there is a natural pairing between the vertices of one and the faces of the other. The following result was proved in [7]. **Proposition 1**.: _Let \((\mathcal{P},O)\) be a weighted convex polyhedron, with \(O\) at the origin. Then \(\mathcal{P}\) has an equilibrium on a face if and only if \(\mathcal{P}^{*}\) has an equilibrium on the corresponding vertex._ Thus, any mono-unstable weighted tetrahedron is bistable. We will exhibit explicit geometric conditions for a tetrahedron to have such a weighting; and we can see that (in contrast with the monostable case) if \((\mathcal{T},O)\) is mono-unstable on a vertex \(A\), \(\mathcal{T}\) cannot be weighted to be mono-unstable on any other vertex. Finally, we will show that the face vector \((f,e,v)=(4,6,4)\) that characterizes tetrahedra is unique among those of polyhedra, in that any other legal face vector does have a representative polyhedron that may be weighted to make it mono-monostatic. ### Definitions of equilibrium **Definition 1**.: [7], [11] Let \(\mathcal{P}\) be a convex polyhedron, and let \(\operatorname{int}\mathcal{P}\) and \(\operatorname{bd}\mathcal{P}\) denote its interior and boundary, respectively. We select a point \(O\in\operatorname{int}\mathcal{P}\), which we shall think of as the center of mass. (We are not assuming \(\mathcal{P}\) to have uniform density, so this implies no restriction on \(O\) other than its being an interior point.) We say that \((\mathcal{P},O)\) is _in equilibrium on_ a face, edge, or vertex \(A\) if there exists \(Q\in\operatorname{relint}A\) such that the plane perpendicular to \([O,Q]\) at \(Q\) supports \(\mathcal{P}\). (Recall that "relative interior" is defined in such a way that a singleton's relative interior is itself, though its _interior_ is empty: thus \(\mathcal{P}\) may be in equilibrium on a vertex.) We call the equilibrium _stable_ if \(Q\) is on the relative interior of a face, _unstable_ if \(Q\) is a vertex, and _hyperbolic_ (saddle) otherwise and we denote their numbers by \(S,U,H\), respectively. As noted above, Maxwell showed that \(S-H+U=2\). These equilibria correspond intuitively to positions in which a physical model of \((\mathcal{P},O)\) balances on a horizontal surface. They also correspond to "pits", "peaks", and "passes" in the radial function of \(\mathcal{P}\) with respect to \(O\). **Definition 2**.: [7] We call a \(\mathcal{P}\) convex polyhedron _monostable_ if it has a unique stable equilibrium (there is exactly one face upon which it will rest) and _mono-unstable_ if it has a unique unstable equilibrium (there is exactly one vertex upon which it can balance precariously.) ## 2. Results on monostability Henceforth we assume that \(\mathcal{P}\) is a tetrahedron \(\mathcal{T}=\boxtimes ABCD\) with face \(\mathcal{A}\) opposite vertex \(A\) (etc.) We say that a tetrahedron has an _obtuse path_\(A-B-C-D\) if it has three edges, WLOG \(\overline{AB}\),\(\overline{BC}\), and \(\overline{CD}\), with obtuse dihedral angles and no common vertex. Such tetrahedra exist, for instance tetrahedron \(\mathcal{T}_{0}\) with vertices \((A,B,C,D)=((0,0,0),(0,0,10000),(153600,44400,0),(112200,7800,6400)\). (We remark that \(\mathcal{T}_{0}\) is monostable on face \(\mathcal{D}\) with the center of mass located at \(O=(104200,4300,100)\).) We note that three obtuse edges can never surround a face, and that no tetrahedron can have four obtuse dihedrals. **Theorem 1**.: _Let \(\mathcal{T}\) be a tetrahedron; then the following are equivalent:_ 1. \(\mathcal{T}\) _has an obtuse path;_ 2. _there exists_ \(O\) _such that_ \((\mathcal{T},O)\) _is monostable;_ 3. _for every face_ \(F\)_, there exists_ \(O_{F}\) _such that_ \((\mathcal{T},O_{F})\) _is monostable on_ \(F\)_._ Proof.: (3)\(\Rightarrow\)(2) trivially. (2) \(\Rightarrow\)(1): A tetrahedron has the property (unique among polyhedra) that we can walk from vertex to vertex along a path of edges if and only if we can skip from face to face across the same edges. There must be enough obtuse dihedrals to let the tetrahedron roll from any face to the resting face; and no face can have three obtuse dihedrals. The obtuse dihedrals thus form a path of length 3. Finally, (1)\(\Rightarrow\)(3). If our obtuse path is \(A-B-C-D\), the obtuse edges connect the faces in the order \(\mathcal{C}-\mathcal{D}-\mathcal{A}-\mathcal{B}\) (Figure 1). It suffices to show that for appropriate \(O_{\mathcal{A}}\) the pair \((\mathcal{T},O_{\mathcal{A}})\) is monostable on \(\mathcal{A}\), and similarly for \(\mathcal{B}\). Construct the plane perpendicular to \(\mathcal{C}=\Delta ABD\) along the edge \(\overline{AB}\) shared with \(\mathcal{D}\). This cuts \(\overline{CD}\) at a point \(E\) (Figure 0(a)). The tetrahedron \(\boxtimes ABCE\) has obtuse edges \(\overline{BC}\) and \(\overline{EC}\). If \(O\in\operatorname{int}\boxtimes ABCE\), then \((\mathcal{T},O)\) has no stable equilibrium on \(\mathcal{C}\). Next, construct the plane perpendicular to \(\mathcal{D}=\Delta ABC\) along \(\overline{BC}\). This cuts \(\overline{AE}\) at \(F\), and \(\overline{CE}\) is an obtuse edge of the tetrahedron \(\boxtimes BCEF\) (Figure 0(b)). If \(O\in\operatorname{int}\boxtimes BCEF\), then \((\mathcal{T},O)\) has no stable equilibrium on \(\mathcal{C}\) or \(\mathcal{D}\). If we repeat this with a plane perpendicular to \(\mathcal{A}\) along \(\overline{CE}\), it meets \(\overline{BF}\) at \(G\); and if \(O_{\mathcal{B}}\in\operatorname{int}\boxtimes CEFG\), then \((\mathcal{T},O_{\mathcal{B}})\) has no stable equilibrium on \(\mathcal{A}\), \(\mathcal{C}\), or \(\mathcal{D}\), so is monostable on \(\mathcal{B}\) (Figure 0(c)). Similarly, the plane through the same edge but perpendicular to \(\mathcal{B}\) meets \(\overline{BF}\) at \(H\); and if \(O_{\mathcal{A}}\in\operatorname{int}\boxtimes BCEH\), then \((\mathcal{T},O_{\mathcal{A}})\) has no stable equilibrium on \(\mathcal{B}\), \(\mathcal{C}\), or \(\mathcal{D}\) and is monostable on \(\mathcal{A}\). **Remark**.: We've shown that, suitably weighted, a tetrahedron with an obtuse path is stable only on one face. We haven't shown how it gets there. (As the bartender says at closing time, "You don't have to go home, ladies and gentlemen, but you can't stay here.") In fact, without some way to dissipate energy, the tetrahedron will never settle onto any face, but will bounce forever. 1 Footnote 1: Fans of opera, or at least of operatic trivia, will recall the story of Eva Turner, in the role of Tosca, throwing herself from the battlements onto an over-resilient trampoline placed there by the stage hands, and making several unplanned curtain calls! When tipping over an edge, a polyhedron has only one degree of freedom, and how it dissipates its energy does not affect where it ends up, so long as it does so effectively. However, in some cases, the body may tip not onto an edge (from where it must continue to the next face) but onto a vertex (figure 2 a). Should this happen, the body temporarily has not two but three degrees of freedom: the center of mass \(O\) moves on a sphere about \(A\), and the body can also rotate about the axis \(OA\) (figure 2 b). We thus need to take into account torque, moment of inertia, and the position of \(O\) relative to the edge upon which the tetrahedron lands: the problem would seem intractable. Fortunately, these difficulties never arise if \((\mathcal{T},O)\) is a monostably-weighted tetrahedron! In this case, as shown above, if a face has two obtuse dihedrals, the center of mass is positioned so that \(\mathcal{T}\) will tip _onto_ that face across at least one of those edges. Each non-equilibrium face thus has a unique exit; and providing that we assume landings to be inelastic, the exact path to stable equilibrium may be computed knowing only \((\mathcal{T},O)\) and the starting face. ## 3. Some spherical geometry If we consider the intersection of \(\mathcal{T}\) with a small sphere \(\mathcal{S}_{A}\) centered at some vertex \(A\), we see that the geometry of polyhedral vertices is just that of the sphere! For a weighted tetrahedron \((\mathcal{T},O)\), let \(P,Q,R,\Omega\) be the respective intersections of \(\mathcal{S}_{A}\) with \(\overline{AB},\overline{AC},\overline{AD}\), and \(\overline{AO}\) (see Figure 3.) Then (for instance) the face angle \(\angle BAC\) corresponds to the arc \(\overline{PQ}\) on \(\mathcal{S}_{A}\), while the dihedral angle between faces \(\Delta ABC\) and \(\Delta\,ACD\) corresponds to the spherical angle \(\angle PQR\). In each case the radian measures are equal. Figure 1. Loading regions for a tetrahedron with an obtuse path Figure 2. A tetrahedron that rolls without slipping can have 1 or 3 degrees of freedom. We call a spherical segment _short_ if its measure is less than \(\pi/2\), otherwise _long_; angles, as usual, are "acute" or "obtuse." The following result characterizes unstable equilibria. **Lemma 1**.: _For any vertex \(A\) of a weighted tetrahedron \((\mathcal{T},O)\)_ 1. \((\mathcal{T},O)\) _has (unstable) equilibrium on_ \(A\) _if and only if the angles_ \(\angle BAO\)_,_ \(\angle CAO\)_, and_ \(\angle DAO\) _are all acute. Equivalently, all of the spherical arcs_ \(\overline{P\Omega}\)_,_ \(\overline{Q\Omega}\)_, or_ \(\overline{R\Omega}\) _are short._ 2. \((\mathcal{T},O)\) _has an equilibrium on_ \(A\) _for every_ \(O\in\operatorname{int}\mathcal{T}\) _if none of the face angles_ \(\angle BAC\)_,_ \(\angle CAD\)_,_ \(\angle DAB\) _are obtuse. Equivalently, none of the arcs_ \(\overline{PQ}\)_,_ \(\overline{QR}\)_, or_ \(\overline{RP}\) _is long._ Proof.: Let \(\Pi\) be the plane normal to \(\overline{OA}\) at \(A\): then \(\mathcal{T}\) has an equilibrium on \(A\) if and only if \(B\),\(C\), \(D\) all lie on the same side of \(\Pi\) as \(O\). This is true for every \(O\)_interior to_\(\mathcal{T}\) if and only if \(\angle BAC\), \(\angle CAD\), \(\angle DAB\) are all acute or right. We can also characterize stable equilibria in this way, though we have local configurations at three vertices to consider. The following result is obvious: **Lemma 2**.: 1. \((\mathcal{T},O)\) _has stable equilibrium on_ \(\Delta ABC\) _if and only if the dihedral angles between (on the one hand)_ \(\triangle ABC\) _and (on the other hand)_ \(\triangle ABO\)_,_ \(\triangle AOC\)_, and_ \(\triangle OBC\) _are all acute._ 2. \((\mathcal{T},O)\) _has stable equilibrium on_ \(\triangle ABC\) _for every_ \(O\in\operatorname{int}\mathcal{T}\) _if and only if none of the dihedral angles between (on the one hand)_ \(\triangle ABC\) _and (on the other hand)_ \(\triangle ABD\)_,_ \(\triangle ADC\)_, or_ \(\triangle DBC\) _are obtuse._ 3. _If (for instance) the dihedral angle between_ \(\triangle ABC\) _and_ \(\triangle ABO\) _is obtuse, then (on_ \(\mathcal{S}_{A}\)_) the angle_ \(\angle QP\Omega\) _is obtuse, as is the corresponding angle on the sphere_ \(\mathcal{S}_{B}\)_._ Now that we've seen the significance of spherical geometry to this problem, let's establish a few facts from the folklore. **Lemma 3**.: 1. \(A\) _spherical triangle with only acute angles has only short edges;_ 2. \(A\) _spherical triangle with exactly one acute angle has exactly one short edge, which is opposite the acute angle;_ 3. \(A\) _spherical triangle with three long edges has three obtuse angles;_ Figure 3. The geometry of a vertex is the geometry of a sphere. 4. _A spherical triangle with exactly one long edge has exactly one obtuse angle, opposite the long edge._ 5. _A spherical triangle with only short edges has at most one obtuse angle._ Proof.: 1. If \(\Delta ABC\) has only acute angles, \(\cos A\), \(\cos B\), and \(\cos C\) are all positive. Then if (for instance) edge \(BC\) has radian length \(a\), one of the spherical cosine laws gives us that \[\cos a=\frac{\cos A+\cos B\cos C}{\sin B\sin C}>0,\] whence \(a<\pi/2\). The proofs for \(b\) and \(c\) are similar. 2. If \(A\) is acute, let \(A^{\prime}\) be the antipodal point: the columnar triangle \(\Delta A^{\prime}BC\) satisfies the conditions of (1). 3. If \(\Delta ABC\) has only long edges, \(\cos a\), \(\cos b\), and \(\cos c\) are all negative; the other spherical cosine law gives \[\cos A=\frac{\cos a-\cos b\cos c}{\sin b\sin c}<0\] and \(A\) is obtuse: the proofs for \(B\) and \(C\) are similar. 4. again follows from (3) by consideration of the columnar triangle. 5. In this case, \(\cos a\), \(\cos b\), and \(\cos c\) are all positive. If \(A\) is obtuse, \(\cos a<\cos b\cos c\), whence \(a\), opposite \(A\), must be the strictly longest edge. We can, however, construct spherical triangles with exactly one obtuse angle and zero, or two, long edges. We can also construct a spherical triangle with three obtuse angles and only two long edges. These results are summarized in Table 1. ## 4. Results on instability We begin by ruling out the possibility of a "weighted tetrahedral Gomboc", and in fact prove more. **Theorem 2**.: _No tetrahedron has both a monostable weighting and a mono-unstable weighting, even with different centers of mass._ \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline & \multicolumn{6}{|c|}{Obtuse angles} \\ & \multicolumn{6}{|c|}{(dihedrals)} \\ \hline Long & & 0 & 1 & 2 & 3 \\ \cline{2-6} edges & 0 & \(\surd\) & \(\surd\) & x & x \\ \cline{2-6} (obtuse & 1 & x & \(\surd\) & x & x \\ \cline{2-6} face & 2 & x & \(\surd\) & \(\surd\) & \(\surd\) \\ \cline{2-6} angles) & 3 & x & x & x & \(\surd\) \\ \hline \end{tabular} \end{table} Table 1. Possible combinations of long/obtuse elements in spherical triangles and polyhedral vertices Proof.: As observed above, every monostable weighted tetrahedron \((\mathcal{T},O)\) has two vertices \(B,C\) that each have two obtuse dihedrals. Table 1 shows that each of these two must have two obtuse face angles; but a tetrahedron cannot have more than four obtuse face angles in total, so the other two vertices \(A,D\) have only acute face angles. By Lemma 1, \((\mathcal{T},O^{\prime})\) has equilibria on those vertices for any \(O^{\prime}\in\operatorname{int}\mathcal{T}\). For a specific weighting, we can say more: **Theorem 3**.: _If a weighted tetrahedron \((\mathcal{T},O)\) is monostable, it has unstable equilibrium on exactly two vertices._ Proof.: By Theorem 1, we may assume that \((\mathcal{T},O)\) has an obtuse path \(A-B-C-D\) and equilibrium on either \(\triangle DBC\) or \(\triangle\,ACD\). As the dihedrals on \(\overline{BC}\) and \(\overline{CD}\) are obtuse, the dihedral on \(\overline{BD}\) must be acute. However, by hypothesis, the other two dihedrals at \(B\) are obtuse. The local geometry at \(B\) thus corresponds to a spherical triangle \(\triangle\delta\gamma\alpha\) with acute angle at \(\delta\), obtuse angles at \(\gamma\) and \(\alpha\). Then (Lemma 3) the edges \(\overline{\delta\gamma}\) and \(\overline{\delta\alpha}\) are long, and \(\overline{\alpha\gamma}\) is short. Let \(E\) be polar to \(\bigcirc\delta\alpha\); it lies (Figure 4) on the great circle polar to \(\delta\), which meets \(\overline{\delta\gamma}\) at \(F\) and \(\overline{\delta\alpha}\) at \(G\). But by assumption \((\mathcal{T},O)\) has no equilibrium on \(\triangle DBA\), so \(\angle\delta\alpha\Omega\) is obtuse; thus \(\Omega\) lies on the far side of \(\overline{\alpha E}\) and _a fortiori_\(\overline{GE}\) from \(D\). Thus \(\overline{\delta\Omega}\) is long, \(\angle DBO\) is obtuse, and \((\mathcal{T},O)\) has no equilibrium on \(B\). A similar argument (using the lack of equilibrium on \(\triangle\,ABC\)) shows that \((\mathcal{T},O)\) has no equilibrium on \(C\). Using polar duality, we also get **Corollary 3.1**.: If a weighted tetrahedron \((\mathcal{T},O)\) is mono-unstable, it has stable equilibrium on exactly two faces. We can now prove a result analogous to Theorem 1 for mono-unstable tetrahedra, which does not appear to follow from that result via polar duality. Define an _obtuse cycle_ to be a cycle of edges on a tetrahedron \(A-B-C-D-A\) such that the face angles \(\angle ABC\), \(\angle BCD\), and \(\angle CDA\) are all obtuse. Figure 4. The configuration at a vertex with two obtuse dihedral angles. **Theorem 4**.: _A tetrahedron \(\mathcal{T}\) has an obtuse cycle if and only if for some \(O\) the pair \((\mathcal{T},O)\) is mono-unstable._ Proof.: Suppose that \(A-B-C-D-A\) is an obtuse cycle, and \(P\in\overline{BC}\); then \(\angle\)\(ABP=\angle\)\(ABC\) is obtuse. By the same argument, the angle \(\angle\)\(DCP\) is obtuse. Moreover, as \(\angle\)\(ADC\) is obtuse, so is \(\angle\)\(ADP\) for \(P\in\operatorname{relint}\overline{BC}\) close enough to \(C\). \(P\) is on the boundary of \(\mathcal{T}\), but if we let \(O\) be an interior point close enough to \(P\), angle \(\angle\)\(ABO\), \(\angle\)\(DCO\), and \(\angle\)\(ADO\) will still be obtuse, and \((\mathcal{T},O)\) will have no equilibrium on \(B\), \(C\), or \(D\). We now show that monoinstability requires the existence of an obtuse cycle. As a triangle has at most one obtuse angle, a tetrahedron has at most four obtuse face angles; and to be mono-unstable it must have one (or more) at each of the three vertices without equilibrium. The obtuse face angles can thus be partitioned among the vertices in only three ways: \(\{0,1,1,1\}\), \(\{0,1,1,2\}\), and \(\{1,1,1,1\}\). We will represent a vertex with \(m\) obtuse face angles and \(n\) obtuse dihedrals by the ordered pair \([m,n]\). As every obtuse dihedral has two ends, the sum of \(n\) over the vertices is even; and we can only use the pairs \([m,n]\) found in Table 1. The only possibilities for a mono-unstable tetrahedron are: **I:**: \(\{[0,1],[1,1],[1,1],[1,1]\}\); **II:**: \(\{[0,0],[1,1],[1,1],[2,2]\}\); **III:**: \(\{[0,1],[1,1],[1,1],[2,1]\}\); **IV:**: \(\{[0,1],[1,1],[1,1],[2,3]\}\); **V:**: \(\{[1,1],[1,1],[1,1]\}\). **I,** which has only three obtuse face angles, is realizable, for instance by a tetrahedron with vertices \[\{(-10,0,0),(0,2,0),(0,-2,0),(1,0,1)\}\] (Figure 5 a). Let A be the \([0,1]\) vertex, and \(\overline{AC}\) its obtuse dihedral, Then \(\overline{BD}\) is also an obtuse dihedral, face angles \(\angle\)\(ABC\), \(\angle\)\(BCD\), and \(\angle\)\(CDA\) are obtuse, and \(A-B-C-D-A\) is an obtuse cycle. **II cannot occur. Let D be the \([2,2]\) vertex, with obtuse angles \(\angle\)\(ADB\) and \(\angle\)\(ADC\). Then the dihedrals on \(\overline{DC}\) and \(\overline{DB}\) are obtuse, B and C are the vertices of type (1,1), and the angles \(\angle\)\(ABC\) and \(\angle\)\(ACB\) are both obtuse, which is impossible. Figure 5. Tetrahedra with obtuse cycles **III** can occur. Let A be the \([0,1]\) vertex, D the \([2,1]\) vertex. The tetrahedron has two obtuse dihedrals without a common endpoint. If they were \(\overline{AD}\) and \(\overline{BC}\), then one of the angles \(\angle ADB,\angle ACD\) would be obtuse (without loss of generality \(\angle ADB\).) But the angles opposite \(\overline{BC}\), that is, \(\angle ABD\) and \(\angle ACD\), are also obtuse; so \(\Delta ABD\) would have two obtuse angles, which is impossible. However, if (without loss of generality) the obtuse dihedrals are \(\overline{AC}\) and \(\overline{BD}\), we can construct examples, for instance \[(A,B,C,D)=((-10,0,0),(2,0,0),(3,2,0),(0,4,1))\] (Figure 5 b ). Here \(A-B-C-D-A\) is the obtuse cycle. **IV** cannot occur. Let A be the \([2,3]\) vertex; then the dihedrals on \(\overline{AB}\), \(\overline{AC}\), and \(\overline{AD}\) are all obtuse. Without loss of generality let C and D be the vertices of type \([1,1]\); as before, the angles \(\angle CDB\) and \(\angle BCD\) are both obtuse. **V** would require the tetrahedron to have two disjoint obtuse dihedrals, WLOG \(\overline{AC},\overline{BD}\), opposite the four obtuse angles; but then the skew quadrilateral \(\rhd\)\(ABCD\) would have angles summing to more than \(2\pi\), which is impossible. Examples with every combination of 2-4 stable equilibria and 2-4 unstable equilibria are given in [7]. Indeed, there exists a single tetrahedron which exhibits all nine combinations for appropriate choices of centre (Figure 6)2. Footnote 2: Vertices are \((0,0,0)\), \((100000,0,0)\), \((50000,41429,0)\), and \((13549,13544,11223)\). Centers are \(M_{22}=(15884,5116,835)\), \(M_{23}=(46670,11911,3061)\), \(M_{24}=(28497,5544,2041)\), \(M_{32}=(11400,7243,2597)\), \(M_{33}=(33447,17389,3061)\), \(M_{34}=(23866,8138,3339)\), \(M_{42}=(21845,14097,7142)\), \(M_{43}=(42514,9100,6122)\), and \(M_{44}=(24407,10239,1391)\). **Remark.** If a weighted tetrahedron \((\mathcal{T},O)\) is mono-unstable with an equilibrium on vertex \(A\), then \(A\) has no obtuse face angles. It follows that \((\mathcal{T},O^{\prime})\) has equilibrium on \(A\) for all \(O^{\prime}\in\mathrm{int}\mathcal{T}\); and thus (in contrast to the situation in Theorem 1.3) \(\mathcal{T}\) cannot be weighted to be mono-unstable on any other vertex. Results like this show that (despite our use of polar duality, and the symmetry of Table 2) there is no simple duality between stable and unstable equilibria. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & \multicolumn{4}{c|}{Unstable} \\ & \multicolumn{4}{c|}{equilibria} \\ \hline \multirow{3}{*}{Stable equilibria} & & 1 & 2 & 3 & 4 \\ \cline{2-6} & 1 & x & \(\surd\) & x & x \\ \cline{2-6} & 2 & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \cline{2-6} & 3 & x & \(\surd\) & \(\surd\) & \(\surd\) \\ \cline{2-6} & 4 & x & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline \end{tabular} \end{table} Table 2. Possible combinations of equilibria ## 5. Other polyhedra We have seen that no tetrahedron can be mono-monostatic, even when weighted. What about other classes of polyhedron? A vector \((f,e,v)\in\mathbb{N}^{3}\) is the face vector of some nondegenerate polyhedron if and only if * \(f\geq\frac{v}{2}+2\), * \(v\geq\frac{7}{2}+2\), * and \(e=f+v-2\). We'll call such a vector 'legal.' We note that equality is obtained in the first expression only when all vertices have degree \(3\), and the second only when all faces are triangular. **Theorem 5**.: _Every legal vector except for \((4,6,4)\) is the face vector of a mono-monostatic weighted polyhedron._ Proof.: Let \(\mathcal{P}\) be a weighted polyhedron with at least one nontriangular face. We claim that some vertex \(V\) of \(\mathcal{P}\) is included in one, two, or three nontriangular faces. For suppose otherwise: intersecting the halfspaces bounded by supporting planes on these faces and containing \(P\), we get a convex polyhedron \(\mathcal{Q}\) with at least \(4\) edges at every vertex and at least \(4\) edges on every face. Then \(e\geq 2v\), \(e\geq 2f\), and so for the Euler characteristic we have \(\chi(\mathcal{Q})\leq 0\), an impossibility. Let \(V\) be such a vertex, let \(F\) be a nontriangular face including \(V\), and let \(\delta>0\). We will construct a new polyhedron \(\mathcal{P}^{\prime}\), which shares every vertex of \(\mathcal{P}\) except that \(V\) is replaced by a new vertex \(V^{\prime}\). Let \(G\) be the intersection of the affine hulls of the other nontriangular faces (if any) of \(\mathcal{P}\) at \(V\): it's an affine subspace of dimension at least \(1\). Let \(H\) be the open halfspace bounded by \(\operatorname{aff}F\) that contains \(\operatorname{int}\mathcal{P}\). Then take \(V^{\prime}\in G\cap H\cap B_{\delta}(V)\): clearly, at least for small \(\delta\), \(\mathcal{P}^{\prime}\) has the same number of vertices as \(\mathcal{P}\), and one more face. Moreover, by taking \(\delta\) small enough, we can change the orientations of Figure 6. A tetrahedron that can have \(2\)–\(4\) stable equilibria and \(2\)–\(4\) unstable equilibria, depending on the choice of center. edges and faces by an angle less than any desired \(\epsilon>0\). We'll refer to this below as "face bending." Let \((\mathcal{P},O)\) be a weighted polyhedron; we assume \(O\) to be in general position with respect to all edges and face diagonals. For small enough \(\delta\), the following are true: * \(O\in\operatorname{int}\mathcal{P}^{\prime}\) * \((\mathcal{P}^{\prime},O)\) has an equilibrium on a vertex \(X\) if and only if \((\mathcal{P},O)\) has an equilibrium on the corresponding vertex; * \((\mathcal{P},O)\) has equilibrium on any face other than \(F\) if and only if \((\mathcal{P}^{\prime},O)\) has equilibrium on the corresponding face; * \((\mathcal{P},O)\) has equilibrium on \(F\) if and only if \((\mathcal{P}^{\prime},O)\) has equilibrium on \(F^{\prime}\) or \(T\) (this requires the foot of the perpendicular from \(O\) to \(F\) not to lie on the face diagonal that becomes an edge of \(F^{\prime}\)); * \((\mathcal{P}^{\prime},O)\) cannot have equilibrium on both \(F^{\prime}\) and \(T\). It follows that if there exists a mono-monostatic polyhedron that has \(v\) vertices, and \(f\) faces not all triangles, then there exists one with \(v\) vertices and \(f+1\) faces. As shown in [7], by polar duality there also exists one with \(f\) vertices and \(v\) faces. We conclude the proof using induction. First, we note that there exists a mono-monostatic polyhedron with face vector \((5,8,5)\) (combinatorially equivalent to a square pyramid) - see Figure 7. Assume, as an inductive hypothesis, that the claim holds for any legal vector with \(v\leq 2n\). Then in particular it holds for \((f,e,v)=(2n+1,3n+2,n+3)\) and \((f,e,v)=(2n+2,3n+3,n+3)\). By polar duality, it also holds for \((f,e,v)=(n+3,3n+2,2n+1)\) and \((f,e,v)=(n+3,3n+3,2n+2)\), both of which minimize \(f\) for the given \(v\). Face bending then shows that the claim holds for all legal vectors with \(v\leq 2n+2\), hence by induction for all legal vectors. Except in a few cases, the vector \((f,e,v)\) does not determine the combinatorial class of a polyhedron. We conjecture that, in fact, every combinatorial class of polyhedra, except the tetrahedra, contains elements that admit a mono-monostatic weighting. ## Declaration of competing interests The authors have no relevant financial or non-financial interests to disclose.
2305.17578
Non-volatile heterogeneous III-V/Si photonics via optical charge-trap memory
We demonstrate, for the first time, non-volatile charge-trap flash memory (CTM) co-located with heterogeneous III-V/Si photonics. The wafer-bonded III-V/Si CTM cell facilitates non-volatile optical functionality for a variety of devices such as Mach-Zehnder Interferometers (MZIs), asymmetric MZI lattice filters, and ring resonator filters. The MZI CTM exhibits full write/erase operation (100 cycles with 500 states) with wavelength shifts of $\Delta\lambda_{non-volatile} = 1.16 nm$ ($\Delta n_{eff,non-volatile} ~ 2.5 \times 10^{-4}$) and a dynamic power consumption $<$ 20 pW (limited by measurement). Multi-bit write operation (2 bits) is also demonstrated and verified over a time duration of 24 hours and most likely beyond. The cascaded 2nd order ring resonator CTM filter exhibited an improved ER of ~ 7.11 dB compared to the MZI and wavelength shifts of $\Delta\lambda_{non-volatile} = 0.041 nm$ ($\Delta n_{eff, non-volatile} = 1.5 \times 10^{-4}$) with similar pW-level dynamic power consumption as the MZI CTM. The ability to co-locate photonic computing elements and non-volatile memory provides an attractive path towards eliminating the von-Neumann bottleneck.
Stanley Cheung, Di Liang, Yuan Yuan, Yiwei Peng, Yingtao Hu, Geza Kurczveil, Raymond G. Beausoleil
2023-05-27T20:37:02Z
http://arxiv.org/abs/2305.17578v1
Non-volatile heterogeneous III-V/Si photonics via optical charge-trap memory ###### Abstract We demonstrate, for the first time, non-volatile charge-trap flash memory (CTM) co-located with heterogeneous III-V/Si photonics. The wafer-bonded III-V/Si CTM cell facilitates non-volatile optical functionality for a variety of devices such as Mach-Zehnder Interferometers (MZIs), asymmetric MZI lattice filters, and ring resonator filters. The MZI CTM exhibits full write/erase operation (100 cycles with 500 states) with wavelength shifts of \(\Delta\lambda_{\text{non-volatile}}\) = 1.16 nm (\(\Delta\text{n}_{\text{eff,non-volatile}}\) \(\sim\) 2.5 \(\times\) 10\({}^{-4}\)) and a dynamic power consumption \(<\) 20 pW (limited by measurement). Multi-bit write operation (2 bits) is also demonstrated and verified over a time duration of 24 hours and most likely beyond. The cascaded 2\({}^{\text{nd}}\) order ring resonator CTM filter exhibited an improved ER of \(\sim\) 7.11 dB compared to the MZI and wavelength shifts of \(\Delta\lambda_{\text{non-volatile}}\) = 0.041 nm (\(\Delta\text{n}_{\text{eff,non-volatile}}\) = 1.5 \(\times\) 10\({}^{-4}\)) with similar pW-level dynamic power consumption as the MZI CTM. The ability to co-locate photonic computing elements and non-volatile memory provides an attractive path towards eliminating the von-Neumann bottleneck ## Introduction Increasing workloads in data centers and high-performance computing are dominated by data and graph analytic applications aided by artificial intelligence (AI) [1, 2]. Nowadays, these data-intensive applications are handled by graphical processing unit (GPU)-based artificial neural networks (ANNs) using traditional von Neumann computing architecture [2]. The majority of energy consumption in an electrical ANN comes from matrix vector multiplication (MVM) and data movement operations. The well-known von Neumann bottleneck also fundamentally limits the speed and energy-efficiency of the data transferred between memory and the processor [3, 4]. Optical neural networks (ONNs) are expected to achieve orders of magnitude enhancement in energy efficiency and throughput due to their unique capability of all-optical matrix-multiplication at the speed of light without being subject to capacitive delays or loss due to heat dissipation [5, 6, 7, 8, 9, 10, 11, 12, 13]. However, three main challenges prevent ONNs from achieving competitive performance compared to digital ANNs. The first challenge is the lack of a device platform that can monolithically integrate optical neurons with lasers, photodetectors, electrical neural circuits, optical non-linear activation (NLA) functions, synaptic interconnects, and memory on a common silicon platform. The 2\({}^{\text{nd}}\) challenge is the lack of a clear solution to address power hungry optical weighting via thermal heaters which can consume 10s of mW each [14, 15, 16]. The 3\({}^{\text{nd}}\) challenge is the lack of optical memory which can potentially reduce communication latency between ONNs and memory. In light of this, there have been efforts to address integrated non-volatile photonics through the use of chalcogenide phase-change memory (PCM) [17, 18, 19, 20, 21, 22, 23, 24, 25, 26], micro-electro-mechanical systems (MEMS) [27], ferroelectrics (BaTiO\({}_{3}\)[28], LiNbO\({}_{3}\)[29], PZT), floating-gate memory (FGM) [30, 31, 32, 33, 34, 35, 36, 37], and memristors [38, 39, 40, 41, 42]. Memristors encounter several performance and manufacturability challenges which have prevented industry wide adoption; this includes performance variability, latency, density, and technological feasibility [43]. In the meantime, there is a need for a near term and reliable non-volatile silicon photonics solution such as charge-trap memory (CTM) or FGM [31]. Song et al., demonstrated FGM based non-volatile optical switching in 2016 via ring resonators. To the best of our knowledge, there has been only 1 demonstration of electrically driven CTM ring resonator albeit with large write voltage of V = 50 V and no electrical reset operation [36, 44]. There have also been a few simulation studies outlining various configurations of CTM with photonics going back to 2006 [30] and more recently 2021 [32, 35, 45]. The difference between CTMs from FGMs is such that the charge trapping layer is an insulator instead of a conductor (poly-Si) [46, 47, 48]. There are 2 main disadvantages for using FGM: 1) stored electrons have a tendency to leak because of interfacial proximity to the tunnel oxide/floating gate region, and 2) high data write loads in FGMs can cause stress on the tunnel oxide layer, thus creating oxide defects which act as a leakage path from the poly-Si floating gate to the channel or source/drain regions [46, 49, 50]. CTM devices are immune to such failures since the floating gate consists of an insulator [46, 47, 48, 50]. We address the aforementioned challenges by demonstrating a heterogeneous III-V/Si photonic platform capable of non-volatile optical functionality via the CTM effect. In addition, this platform is suitable for seamless integration of quantum dot (QD) comb lasers [51, 52, 53, 54], III-V/Si MOSCAP ring modulators [55, 56, 57], Si-Ge avalanche photodetectors (APDs) [58, 59, 60, 61], QD APDs [62, 63], in-situ III-V/Si light monitors [64, 65], III-V/Si MOSCAP optical filters [66, 67], and non-volatile phase shifters [38, 39, 40, 41, 68], which are all essential towards realizing a fully integrated optical chip. We believe the co-integration of silicon photonics and non-volatile CTM memory provides a possible near term path towards eliminating the von-Neumann bottleneck as well as playing a role in energy efficient, non-volatile large scale integrated photonics such as: neuromorphic/brain inspired optical networks [69, 70, 71, 72, 73, 74, 6, 6, 67, 68, 69, 70, 71, 72, 74], optical switching fabrics for tele/data-communications [75, 76], optical phase arrays [77, 78], quantum networks, and future optical computing architectures. In this work, we demonstrate, for the first time, heterogeneous III-V/Si MZI and ring resonators with co-integrated CTM memory cells operating at O-band wavelengths. The MZI CTM exhibit full write/erase operation (100 cycles with 500 states) with wavelength shifts of \(\Delta\lambda_{\text{non-volatile}}=1.16\) nm (\(\Delta\text{n}_{\text{eff,non-volatile}}\sim 2.5\times 10^{-4}\)) and a dynamic power consumption \(<20\) pW (limited by measurement). The extinction ratio (ER) is \(\sim 1.78\) dB, mainly limited by imperfect directional couplers. Multi-bit write operation (2 bits) is demonstrated and verified over a time duration of 24 hours and most likely beyond. The cascaded 2nd order ring resonator CTM filter exhibited an improved ER of \(\sim 7.11\) dB compared to the MZI and wavelength shifts of \(\Delta\lambda_{\text{non-volatile}}=0.041\) nm (\(\Delta\text{n}_{\text{eff,non-volatile}}=1.5\times 10^{-4}\)) with similar dynamic power consumption as the MZI CTM. ## Principle of Operation CTM flash memory cells have typically been based on the SONOS (silicon-oxide-nitride-oxide-silicon) configuration where the tunneling, charge trap, and blocking layers are defined by SiO\({}_{2}\), Si\({}_{3}\)N\({}_{4}\), and SiO\({}_{2}\) respectively [79]. High-k dielectric materials (HfO\({}_{2}\), ZrO\({}_{2}\), TiO\({}_{2}\)) have become popular for the charge trap region because of increased potential barrier height (\(\phi_{0}^{TiO_{2}}=3.15,\phi_{0}^{MoO_{2}}=1.65>\phi_{0}^{Si_{3}N_{4}}=1.03\)) which leads to improved charge retention times (\(>10\) years) and improved programming speeds (\(\sim\) us) via effective oxide thickness (EOT) reduction [46, 50]. Our optical CTM cell is based on an n-GaAs/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/Si heterogeneous III-V/Si structure, where the Al\({}_{2}\)O\({}_{3}\) and HfO\({}_{2}\) serve as the tunneling/blocking oxide and charge trap respectively as shown in Fig. 1a - b. In addition to high potential barriers, HfO\({}_{2}\) was chosen because of reported deep energy level traps (E\({}_{t}=1.5\) eV) [46, 80] and high electron density traps ranging from \(10^{19}\) - \(10^{21}\) cm\({}^{-3}\)[32, 49, 81]. An Al\({}_{2}\)O\({}_{3}\) layer is inserted in between the HfO\({}_{2}\), because it was experimentally determined to be easier to wafer-bond Al\({}_{2}\)O\({}_{3}\) to Al\({}_{2}\)O\({}_{3}\) rather than HfO\({}_{2}\). The choice of n-GaAs over p-GaAs was two-fold: 1) lower optical absorption loss from dopants, and 2) easier III-V/Si laser integration. Also, GaAs exhibits \(\sim 4\times\) smaller electron effective mass and \(\sim 6\times\) larger electron mobility (\(\text{m}_{\text{e}}^{*}=0.063\text{m}_{0}\), \(\mu_{\text{e}}\) = 8500 cm\({}^{2}\)/V-s) than crystalline Si (\(\text{m}_{\text{e}}^{*}=0.28\text{m}_{0}\), \(\mu_{\text{e}}=1400\) cm\({}^{2}\)/V-s) [63, 67, 82]. Therefore, the plasma dispersion effect on index change in n-type GaAs is more efficient with lower free carrier absorption (FCA) loss. The single-mode waveguide structure is defined by a width, height, and etch depth of 500, 300, and 170 nm respectively as indicated in _Fig. 3a_ - b. The wafer-bonded III-V region is primarily 150 nm-thick n-GaAs doped at 3\(\times\)10\({}^{18}\) cm\({}^{3}\). _Fig. 3c_ shows the simulated transverse electric (TE) of the optical CTM cell. Assuming dielectric thicknesses of 1.7/4.0/2.0/4.0/0.5 nm (_Fig. 1e - f_) and refractive indices of 1.75/1.90/1.75/1.90/1.75 for Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\) respectively, the calculated optical confinement factors are \(\Gamma_{\mathrm{Si}}\) = 64.49 %, \(\Gamma_{\mathrm{HfO2}}\) = 1.637 %, and \(\Gamma_{\mathrm{Al2O3}}\) = 0.82 % with an overall effective index of n\({}_{\mathrm{eff}}\) = 3.0971 and group index of n\({}_{\mathrm{g}}\) = 3.7914. For comparison, a pure silicon waveguide with oxide cladding has an effective index of n\({}_{\mathrm{eff}}\) = 2.9774 and group index of n\({}_{\mathrm{g}}\) = 3.9765. In order to understand the non-volatile operation of the optical CTM cell, Fig. 2a - b illustrate the energy-band diagrams of the flat-band and biased regimes respectively. The bandgap energy (E\({}_{\mathrm{g}}\)), electron affinity (\(\chi\)), valence/conduction band offsets (VBO/CBO), refractive index (n), trap state density (N\({}_{\mathrm{TC}}\)) and trap state energy (\(\varphi_{\mathrm{d}}\)), are listed in supplementary note 1. During the write process (Fig. 2b), a positive bias is applied to the p-Si region which injects electrons from the highly doped n-GaAs into the high-k HfO\({}_{2}\) where carriers are trapped due to the presence of charge traps. These charge traps can exist in the bulk and interface regions, but for simplicity, we model the bulk trap case. Once the HfO\({}_{2}\) region is fully charged, holes will accumulate at the p-Si/Al\({}_{2}\)O\({}_{3}\) interface, thus altering the effective index of the optical mode due to the plasma dispersion effect [83, 84]. During the erase process, a reverse bias is applied to sweep out the trapped electrons, thus returning the optical CTM cell back to the initial electrical and optical state. A two-dimensional solver (SILVACO ATLAS [85]) was used to perform energy-band diagram and charge concentration calculations to theoretically predict optical effective index changes as a function of trapped charge density. The solver numerically calculates the Poisson and charge continuity equations and the Figure 1: (a) 3-D schematic of the heterogeneous III-V/Si CTM cell with (b) blown up material layer view and respective dimensions. (c) Simulated TE optical mode with CTM interface, (d) SEM cross section of CTM cell with (e) high resolution transmission electron microscope (HRTEM) image of charge-trap memory dielectric stack, and (f) electron dispersive spectroscopy (EDS) line scan for atomic mapping effects of defect traps and self-consistently solves quantum mechanical tunneling. The main contributions to carrier injection are: Fowler-Nordheim tunneling, direct tunneling, and hot carrier injection. Figure 2: Schematic of energy-band diagram for n-GaAs/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/SiO\({}_{2}\)/p-Si CTM cell (a) at flat-band condition, and (b) with positive bias on the p-Si side. Simulated (c) C-V curves for various fixed charge concentration Q\({}_{\text{TC}}\), (d) electron and hole concentrations for various Q\({}_{\text{TC}}\) at retention state (0 \(\xrightarrow{}\) 9 \(\xrightarrow{}\) 0 V), (e) energy-band diagrams for 5 states: initial, write, retain, erase, and return, (f) non-volatile effective index change \(\Delta\text{n}_{\text{eff,non-volatile}}\) vs. trapped charge concentration Q\({}_{\text{TC}}\) for different CTM dielectric stacks. There have been reports of both acceptor and donor traps in HfO\({}_{2}\), however, the majority of research considers electron traps [46, 86, 87] and this is what we will consider in simulations. Non-volatile electrical behavior of CTM devices can be observed by the hysteresis of the capacitance-voltage (C-V) curve [46, 88]. Fig. 2c shows the simulated C-V hysteresis for our heterogeneous III-V/Si CTM structure with several values of trapped charge density (Q\({}_{\mathrm{TC}}\)) ranging from \(0-10^{20}\) cm\({}^{3}\). The upper limit of this range is consistent with reported HfO\({}_{2}\) density traps ranging from \(10^{19}-10^{21}\) cm\({}^{-3}\)[32, 49, 81]. The blue line represents the write state from the initial state (0 \(\xrightarrow{}\)9 V), and the other colors represent a combination of retention (9 \(\xrightarrow{}\)0 V), erase (0 \(\xrightarrow{}\)-5 V), and return (-5 \(\xrightarrow{}\)0 V) states for different Q\({}_{\mathrm{TC}}\). For example, if we assume a Q\({}_{\mathrm{TC}}=\)\(3\times 10^{19}\), we would traverse the blue line from (0 \(\xrightarrow{}\)9 V) for write operation, then the green line from (9 \(\xrightarrow{}\)0 V) for retention, then green line from (0 \(\xrightarrow{}\)-5 V) for erase, and finally (-5 V \(\xrightarrow{}\)0 V) for the return state. Next, the energy band diagrams for all 5 states are calculated (Fig. 2e shows the case for Q\({}_{\mathrm{TC}}=3\times 10^{19}\) cm\({}^{-3}\)) along with the electron and hole concentrations in the retention state (Fig. 2b). These values of electron and hole concentrations can be used to calculate a spatial change in index [83, 84]: \(\Delta\)n(x,y) (at 1310 nm) = - 6.2 \(\times\)\(10^{-22}\)\(\Delta\)N(x,y) - \(6\times 10^{-18}\)\(\Delta\)P(x,y)\({}^{0.8}\), where x and y are the 2D lateral and vertical dimensions as detailed in supplementary section. The resulting spatial indices are then used in an optical finite-difference-eigenmode (FDE) solver to calculate non-volatile effective index changes \(\Delta\)n\({}_{\mathrm{eff,non-volatile}}\) vs. Q\({}_{\mathrm{TC}}\) as shown in Fig. 2f. Two types of CTM structures are evaluated in Fig. 2f; the structure used experimentally which entails 2 HfO\({}_{2}\) layers vs. 1 HfO\({}_{2}\) layer with an equivalent thickness. Because an equivalent thickness is used, it is expected that effective index changes will be similar, however, literature suggests temperature annealing (\(>800\)\({}^{\circ}\)C) is a solution for charge trapping enhancement at the HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{5}\) interface [49, 89]. The corresponding effective index changes on the order of \(10^{-4}\) can significantly affect amplitude changes when implemented in resonant systems such as ring resonators. Consider the case of a heterogeneous III-V/Si CTM add-drop ring resonator with the following parameters: group index n\({}_{\mathrm{g,~{}III-V/Si}}=3.7912\), radius R = 10.0 \(\upmu\)m, power coupling coefficient \(\kappa^{2}\) = 0.01, and internal loss of \(\alpha_{i}\) = 3 dB/cm. By using the appropriate ring resonator transfer function equations [90] and assuming a trapped charge density of Q\({}_{\mathrm{TC}}=9\times 10^{19}\) cm\({}^{-3}\) (\(|\Delta\)n\({}_{\mathrm{eff,non-volatile}}|=2.47\times 10^{-4}\)) in the retain state, this change in index is enough to theoretically create a non-volatile wavelength shift of \(\Delta\)\(\Delta\)\({}_{\mathrm{non-volatile}}\) = 0.085 nm resulting in an ER \(>\) 14 dB. Alternatively, the hybrid III-V/Si CTM cell can be implemented in a traveling wave interference device such as a Mach-Zehnder interferometer (MZI). Consider the heterogeneous III-V/Si CTM MZI with the following parameters: effective index n\({}_{\mathrm{eff,~{}III-V/Si}}=3.0971\), n\({}_{\mathrm{eff,~{}Si}}\)= 2.9774, base length L\({}_{\mathrm{III-V/Si}}\)\(=325\)\(\upmu\)m, optical path length Figure 3: (a) 2-D cross section of the heterogeneous III-V/Si CTM cell, (b) 2D-FDE simulated optical mode, (c) microscope image of fabricated III-V/Si CTM MZI, (d) design dimensions from mask layout difference \(\Delta\)OPL = 5.206 um, power coupling coefficient \(\kappa^{2}\) = 0.50, and internal loss of \(\alpha_{i}\) = 5 dB/cm. These design parameters result in a free-spectral-range (FSR) of \(\sim\) 25.6 nm. Using the transfer matrix method for a MZI [91] and assuming the same trapped charge density of Q\({}_{\rm TC}\) = 9 \(\times\) 10\({}^{19}\) cm\({}^{-3}\) (\(|\Delta\)\({}_{\rm eff,non-volatile}|\) = 2.47 \(\times\) 10\({}^{-4}\)), this will result in a non-volatile phase shift of \(\sim\) 0.38\(\pi\) (\(\Delta\)\({}_{\rm non-volatile}\) = 1.567 nm) with an extinction ratio ER \(\sim\) 46 dB. The simulated electric field for the write process (V = 9 V) in the Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\) regions are 1.0 \(\times\) 10\({}^{7}\), 3.75 \(\times\) 10\({}^{6}\), 1.0 \(\times\) 10\({}^{7}\), 3.75 \(\times\) 10\({}^{6}\), 1.0 \(\times\) 10\({}^{7}\) V/cm respectively. These are all well below experimentally reported breakdown electric field strengthens (E\({}_{\rm BD}\), H\({}_{\rm HO2}\) = 4.80 \(\times\) 10\({}^{6}\), E\({}_{\rm BD,~{}Al203}\) = 5 - 30 \(\times\) 10\({}^{6}\) V/cm). Experimentally, we did not see breakdown when operating these devices as shown in the results section and supplementary section. ### Charge Trap Memory Experimental Demonstration Initial phase tuning measurements were performed on a 350 um long CTM Mach-Zehnder Interferometer (MZI) as shown in Fig. 3d. Measured spectral response indicated an FSR \(\sim\) 16.58 nm with 1.62 nm of tuning at a 9V bias while maintaining an extinction ratio (ER) of \(\sim\) 10 dB. From transfer matrix modeling of the CTM MZI, the directional couplers were inferred to have a power transfer ratio of \(\sim\) 30 %, thus yielding the low ER. In order to investigate non-volatile CTM functionality, we applied a voltage cycle of [0, 9, 0, -5, 0] V and recorded the output optical spectra as shown in Fig. 4a. We start with the initial virgin state (blue curve) at 0 V and then proceed to bias up to 9 V which places the CTM MZI in a volatile state (orange curve) with a wavelength shift of \(\Delta\lambda\) = 1.62 nm. By turning off the voltage, a non-volatile state is reached as shown by the green curve with a non-volatile wavelength shift of \(\Delta\)\({}_{\rm non-volatile}\) = 1.16 nm. This translates to a non-volatile effective index change of \(\Delta\)\({}_{\rm eff,~{}non-volatile}\)\(\sim\) 2.5 \(\times\) 10\({}^{-4}\) which matches quite well with simulated results in Fig. 2f (for Q\({}_{\rm TC}\) = 9 \(\times\) 10\({}^{19}\) cm\({}^{-3}\)). Simulations also show that perfect 50 % power couplers can yield an ER \(>\) 25 dB and 50 dB for waveguide losses of 53 and 3 dB/cm, respectively. A new fabrication run is currently underway with improved directional couplers and optical loss for the CTM MZIs. A reverse bias of -5 V perfectly resets this non-volatile state back to its original state (red curve). We performed 100 voltage cycle tests (500 voltage states) to measure endurance/ repeatability performance and recorded optical spectra, resonant wavelength shifts, and I - V data, Fig. 4c,e and Fig. 4d,f, respectively. From Fig. 4c, the mean wavelength resonances for the non-volatile and reset states are determined to be 1290.608 \(\pm\) 0.016 nm and 1289.654 \(\pm\) 0.039 nm respectively. This translates to write/erase wavelength accuracies of \(\pm\) 2.807/6.842 GHz which may be attributed to environmental temperature fluctuations. In parallel, the current-voltage relationship was tracked during optical measurements and is shown in Fig. 4d. We believe measured current values are below the sensitivity of our measurement unit (Keithley 2400, 10 pA) which would indicate write/erase dynamic powers of \(<\) 20 pW. TEM images (supplementary note 4) of the initial (0 V), set (9 V), and reset state (- 5 V) indicate fully intact dielectric stacks with no visible signs of dielectric breakdown. EDS line scans also indicate minimal atomic/interfacial changes. It has been demonstrated that CTM cells exhibit charge retention times over 10 years [92, 93, 94]. Figure 4: Measured optical spectra for (a) initial state, volatile state, non-volatile state, reset state, final state, (b) 500 states via endurance testing: 100 voltage cycles with each cycle = [0, 9, 0, -5, 0]V. (c) Applied voltage vs. tracked resonance dip for 100 cycles. (d) Measured CTM voltage and current for 100 cycles. (e) – (f) Close-up snapshot of (c) – (d) respectively. Preliminary study on the reliability of this CTM optical non-volatile state was conducted by performing a 24 hour time test for a particular "write" state as shown in Fig. 5a. The red curve is the minimum resonance wavelength for the initial optical response and the gray dots represent the resonant minima for the optical "write" state. The results indicate non-volatile states up to 24 hours and most likely beyond. Not shown here, but this particular "write" state held beyond 2 weeks before stopping experiments. Variance in the "write" wavelengths is observed and may possibly be attributed to room temperature variations due to the absence of a temperature controlled chuck. Recently, Zhang et al. have demonstrated electrical multi-bit (2 bits) information storage based on an Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\) gate stack with a record memory window exceeding 20 V [95]. In Fig. 5b, we demonstrate the optical analogue of multi-bit electrical storage for various applied voltage biases. We are able to achieve 2 bit retention with wavelength discretization errors due to the optical spectrum analyzer (OSA) resolution set to 0.2 nm. Taking into account the discretization errors, the mean and variance of write states 1 to 4 are \(\bar{\lambda}_{\text{write}}\) = 1289.656, 1290.125, 1290.608, 1290.908 nm with a variance of \(\sigma^{2}_{write}\)= 1.6, 6.2, 0.25, 29.8 pm, respectively. The write states scale approximately linear for the first 3 states up until 11 V where there is an observed saturation in the wavelength red shift. Further improvement on bit resolution could be achieved by 4 solutions: 1) increasing \(\Delta\)n\({}_{\text{eff, non-volatile}}\) via engineering the number of trap defects to accommodate increased charge retention [96, 97, 98, 46, 99, 86], 2) enabling larger optical mode overlap with the CTM cell, 3) making device longer, and 4) improved ER such that small wavelength shifts can still offer reasonable amplitude differences. Simultaneous C-V curves and optical transmission spectra were also performed to track the evolution of optical non-volatility. A hysteresis curve in Fig. 5c illustrates clear optical transmission non-volatility (at \(\lambda\) = 1289.57 nm) in the write to retention state (0 - 9 - 0 V). By applying an erase operation and observing Figure 5: (a) Measured optical “write” state tracked over a 24 hour period indicating non-volatile memory. (b) Demonstration of multi-bit optical states for 0, 5, 9, 11 V, (c) simultaneous C-V curve and non-volatile optical transmission difference at \(\lambda=1289.57\) nm, (d) simultaneous C-V curve and resonant wavelength minima. the final state (0 - 5 - 0 V), we can see near perfect reset of the device with a slight difference in amplitude of \(\sim\) 0.125 dB. Fig. 5d shows measured C-V curves while tracking resonant wavelength minima. Near perfect reset states are also achieved. The measured C-V curves indicate significant smearing compared to theoretical curves and can be attributed to interface states at the p-Si/SiO\({}_{2}\) and n-GaAs/HfO\({}_{2}\) interface. The interfacial trap density (D\({}_{\mathrm{ii}}\)) can be extracted by employing the high-low frequency (_Castagne-Vapaille_) method [99, 100] via the following equation: D\({}_{\mathrm{ii}}\) = (1/qA)[(C\({}_{\mathrm{LF}}-\) C\({}_{\mathrm{ox}}\))-1 - (C\({}_{\mathrm{HF}}-\) C\({}_{\mathrm{Ox}}\))-1], where C\({}_{\mathrm{LF}}\) and C\({}_{\mathrm{HF}}\) are the low and high frequencies measured at 20 kHz and 2 MHz respectively, A is the area, and q is unit charge. The calculated D\({}_{\mathrm{it}}\) at near flat-band voltage is determined to be \(\sim\) 2.24 \(\times\) 10\({}^{10}\) cm\({}^{\mathrm{{}^{2}}}\)eV-1, thus the existence of net positive charge trapped in the donor states [101] at the p-Si/SiO\({}_{2}\) can potentially reduce \(\Delta\)n\({}_{\mathrm{eff,non-volatile}}\). Figure 6: (a) Microscope image of fabricated 2 order cascaded ring filters with CTM cells, (b) design parameters, (c) Out3 optical spectrum for write operation (0 - 9 - 0V) on ring bank 2, (d) close-up of spectrum indicating a 0.041 nm shift and a 7.11 dB extinction, (e) Output 3 optical spectrum for erase operation (0 - 9 - 0V) on ring bank 2, (f) close-up of spectrum indicating a near perfect reset. We have also integrated CTM cells into ring resonator-based designs which have improved ER over relatively small wavelength shifts. One example consists of two cascaded double ring structures as shown in Fig. 6a-b. The power coupling coefficients are chosen such that \(\kappa_{1}{}^{2}=0.25\)k\(\kappa_{0}{}^{4}\) results in a maximally flat filter condition if k = 2 and \(\kappa_{0}{}^{2}=0.35\)[102, 103]. Each cascade double ring structure can be divided into "Ring bank 1" and "Ring bank 2", with the first ring bank designed for 130 GHz spacing assuming a group index of n\({}_{\rm g}\) = 3.78. Measurements yielded 126 GHz most likely due to fabrication imperfections and unknown experimental group indices. "Ring bank 2" is shifted by a \(\Delta\)L = 26.25 \(\upmu\)m to offset resonances from "Ring bank 1" as shown in Fig. 6c-f. Fig. 6c-d shows the measured optical response before and after a non-volatile write operation of \(0\)\(\rightarrow\)\(\rightarrow\)\(9\)\(\rightarrow\)\(0\) V. An ER of 7.11 dB was achieved with only a shift of \(\Delta\)\(\lambda_{\rm non-volatile}\) = 0.041 nm. Transfer matrix calculations indicate an effective index change of \(\Delta\)\(\Delta\)\({}_{\rm eff}\)\(\sim 1.5\times 10^{-4}\) nm (similar to MZI CTM result), which is quite comparable to plasma dispersion based phase shifters[83]. Duration tests were performed on this write state up to 24 hours yielding a mean wavelength shift of 0.027 nm with a variance of \(6.7\times 10^{-5}\) nm (supplementary note 3). Next, we proceeded to apply an erase voltage of \(0\)\(\rightarrow\) - \(9\)\(\rightarrow\)\(0\) V and Fig. 6e-f illustrates the near perfect reset in wavelength. Duration tests up to 24 hours were also performed for the erase state and yielded a mean wavelength shift of \(-4.30\times 10^{-4}\) nm with a variance of \(5.03\times 10^{-5}\) nm (supplementary note 3). The current levels of both write and erase states were monitored to be fluctuating in the tens of pA indicating we are limited by the sensitivity of our measurement unit. ### Heterogeneous III-V/Si photonic in-memory computing platform Our vision of co-located in-memory optical computing fabricated on a common heterogeneously integrated substrate is illustrated in Fig. 7. Fig. 7a shows a general artificial neural network (ANN) architecture composed of one input layer, N hidden layers, and an output layer. This ANN can be realized by our III-V/Si ONN where N layers can be achieved by time re-cycling the chip. This architecture is comprised of previously demonstrated HPE integrated photonic building blocks (III-V/Si QD lasers[104] and amplifiers[105], MOSCAP MZIs and microrings[67, 106], lossless light monitors[64, 65], QD APDs[62, 107], SiGe APDs[58, 59, 60], programmable nonlinear activation functions[108, 109], etc.). The non-volatile III-V/Si CTM cell in this work adds a final critical missing pieces that can be placed in MZIs or ring resonators without adding any design and fabrication complexity. The architecture in Fig. 7c uses MZIs, but can also be represented by a mesh of ring resonators[110]. The weights of an entire network can be first trained by using the III-V/Si MZIs in a _low-voltage, volatile_ push-pull operation (up to \(\sim\) 30 Gbps in traveling wave electrode configuration[111]) while keeping track of the weight amplitudes with the III-V/Si lossless light monitor[64, 65]. Fabrication imperfection and low-speed phase tuning due to environmental change can be compensated by the same MOSCAP as well, an athermal tuning process with negligible power consumption[63, 67] unique to this platform. Fig. 7b shows a total tuning wavelength of 2.36 nm can be achieved from - 3 V to 2 V and with open 4 Gbps eye diagrams. In push-pull configuration, the wavelength tuning should be higher with equivalent drive voltage. The light monitors utilize internal trap mediated photo-carriers and induce no optical loss. These detectors can be part of a feedback circuitry with the non-volatile III-V/Si MZI CTM cell for true in-memory optical computing. Once network training is complete, one can appropriately adjust one arm of the III-V/Si MZI using the higher voltage charge trap effect for non-volatile inference. Current optical non-volatile neuromorphic systems use PCM materials that require: 1) high power (4 mW) optical pulses (200 ns)[112] or 2) graphene thermal heaters in pulse operation (SET: 3 V, 100 \(\upmu\)s, RESET: 5 V, 400 ns) to change from crystalline to amorphous and vice versa. Furthermore, these set/reset operations require a temporal separation in the seconds time range to ensure thermal relaxation[25]. Reconfiguring PCMs with optical pulses places additional scalability issues because number of light source and their controller equals to number of MZI in order to arbitrarily broadcast a network of control pulses into the system. In addition, ultrafast pulsed lasers can be quite power hungry and reduce MAC/J figure of merit. For the electrical based heating approach, the \(\upmu\)s SET time and long enough thermal relaxation duration (2 s) between pulses can affect training throughput and suffer from thermal crosstalk. Our ability to train at current 10s of Gbps in low-power, non-volatile operation combined with reliable multi-bit non-volatile inferencing allows for increased throughput and energy efficiency which are lacking in PCM based approaches and modern day electrical MVM architectures. The last but not the least, potentially integrated single- and multi-wavelength QD lasers offer convenience to architect the fan-out distribution through spatial or/and wavelength division multiplexing. The high-gain low-noise QD optical amplifiers boost up signals before or after reaching the inter-layer or final optical neurons. Thus, a fully-integrated, highly-scalable and programmable, energy-efficient photonic neuromorphic photonic chip including the entire optical computing functionality prior to inputting training outcome into a decision-making ASIC chip can be envisioned in the near future on this platform. Figure 7: (a) A general ANN architecture, (b) volatile phase tuning and 4Gbps speed measurements for network training, (c) schematic of a fully-integrated ONN MVM mesh on a heterogeneous III-V/Si platform. ## Conclusion The von-Neumann bottleneck in conventional computing architecture inherently results in the need to transfer massive data between processor and memory with an intrinsic limit on bandwidth \(\times\) distance plus increasing power consumption on the interconnect. As a major step towards breaking this bottleneck (especially for photonic neuromorphic computing), the work described here enables volatile operation for low-power, high-speed, on-chip training and non-volatile optical memory functionality for inference. This is all done on a heterogeneous III-V/Si platform capable of integrating all the necessary components needed for next generation applications such as: neuromorphic/brain inspired optical networks [69, 70, 71, 5, 6, 15, 72, 73, 74], optical switching fabrics for tele/data-communications [75, 76], optical phase arrays [77, 78], quantum networks, and future optical computing architectures. In particular, this work demonstrates for the first time, co-location of CTM memory cells with III-V/Si MZI and ring resonators which are key components in both optical communication and computing applications. The CTM memory cell is fully compatible with our heterogeneous III-V/Si platform and can offer benefits from an energy consumption, latency, and packaging standpoint. The MZI CTM exhibits full write/erase operation (100 cycles with 500 states) with wavelength shifts of \(\Delta\lambda_{\text{non-volatile}}=1.16\) nm (\(\Delta\text{n}_{\text{eff,non-volatile}}\sim 2.5\times 10^{-4}\)) and a dynamic power consumption \(<20\) pW (limited by measurement). The ER is \(\sim\) 1.78 dB, and can be improved to \(>\) 50 dB with the correct directional coupler design. Multi-bit write operation (2 bits) is also demonstrated and verified over a time duration of 24 hours and most likely beyond. The cascaded 2nd order ring resonator CTM filter exhibited an improved ER of \(\sim\) 7.11 dB compared to the MZI and wavelength shifts of \(\Delta\lambda_{\text{non-volatile}}=0.041\) nm (\(\Delta\text{n}_{\text{eff,non-volatile}}=1.5\)\(\times\) 10\({}^{-4}\)) with similar pW-level dynamic power consumption as the MZI CTM. We believe the demonstration and work presented in this paper can fuel further innovations in photonic co-integration with recent advanced CTM cells such as c-ZrTiO\({}_{4}\) (ZTO) [88], MoS\({}_{2}\)/hBN/MoS\({}_{2}\)/graphdiyne/WeS\({}_{2}\)[113], WSe\({}_{2}\)/BN [114], etc. ## Data availability The data that support the findings of this study are available from the corresponding author on reasonable request ## References * [1] Cheng, Q., Bahadori, M., Glick, M., Rumley, S. & Bergman, K. Recent advances in optical technologies for data centers: a review. 17. * [2] Hazelwood, K. _et al._ Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. in _2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)_ 620-629 (IEEE, 2018). doi:10.1109/HPCA.2018.00059. * [3] Beyond von Neumann. _Nat. Nanotechnol._**15**, 507-507 (2020). * [4] Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. & Eleftheriou, E. Memory devices and applications for in-memory computing. _Nat. Nanotechnol._**15**, 529-544 (2020). * [5] Shen, Y. _et al._ Deep Learning with Coherent Nanophotonic Circuits. _Nature Photon_**11**, 441-446 (2017). * [6] Xiao, X., On, M. B. & Vaerenbergh, T. V. Large-scale and energy-efficient tensorized optical neural networks on III-V-on-silicon MOSCAP platform. _APL Photonics_**12** (2021). * [7] Xiao, X. & Ben Yoo, S. J. Scalable and Compact 3D Tensorized Photonic Neural Networks. in _2021 Optical Fiber Communications Conference and Exhibition (OFC)_ 1-3 (2021). * [8] Pai, S. _et al._ Parallel Programming of an Arbitrary Feedforward Photonic Network. _IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS_**26**, 13 (2020). * [9] Zhou, H. _et al._ Photonic matrix multiplication lights up photonic accelerator and beyond. _Light Sci Appl_**11**, 30 (2022). * [10] De Marinis, L., Cococcioni, M., Castoldi, P. & Andriolli, N. Photonic Neural Networks: A Survey. _IEEE Access_**7**, 175827-175841 (2019). * [11] Shokraneh, F., Nezami, M. S. & Liboiron-Ladouceur, O. Theoretical and Experimental Analysis of a \(4\times 4\) Reconfigurable MZI-Based Linear Optical Processor. _Journal of Lightwave Technology_**38**, 1258-1267 (2020). * [12] Cem, A., Yan, S., de Moura, U. C., Ding, Y. & Zibar, D. Comparison of Models for Training Optical Matrix Multipliers in Neuromorphic PICs. 3 (2022). * [13] Hamerly, R., Bandyopadhyay, S. & Englund, D. Accurate Self-Configuration of Rectangular Multiport Interferometers. _arXiv:2106.03249 [physics]_ (2021). * [14] Zhang, H. An optical neural chip for implementing complex-valued neural network. 11. * [15] Peng, H.-T., Nahmias, M. A., de Lima, T. F., Tait, A. N. & Shastri, B. J. Neuromorphic Photonic Integrated Circuits. _IEEE Journal of Selected Topics in Quantum Electronics_**24**, 1-15 (2018). * [16] Al-Qadasi, M. A., Chrostowski, L. & Shastri, B. J. Scaling up silicon photonic-based accelerators: Challenges and opportunities. _APL Photonics_ 20 (2022). * [17] Leonardis, F. D., Soref, R. & Passaro, V. M. N. Broadband Electro-Optical Crossbar Switches Using Low-Loss Ge2Sb2Se4Te1 Phase Change Material. _JOURNAL OF LIGHTWAVE TECHNOLOGY_**37**, 9 (2019). * [18] Fang, Z., Chen, R., Zheng, J. & Majumdar, A. Non-Volatile Reconfigurable Silicon Photonics Based on Phase-Change Materials. _IEEE Journal of Selected Topics in Quantum Electronics_**28**, 1-17 (2022). * [19] Fang, Z. _et al._ Non-Volatile Reconfigurable Integrated Photonics Enabled by Broadband Low-Loss Phase Change Material. _Adv. Optical Mater._**9**, 2002049 (2021). * [20] Rios, C., Hosseini, P., Wright, C. D., Bhaskaran, H. & Pernice, W. H. P. On-Chip Photonic Memory Elements Employing Phase-Change Materials. _Advanced Materials_**26**, 1372-1377 (2014). * [21] Jr, R. F. H. Optical phase change materials in integrated silicon photonic devices: review. 15. * [22] Wang, Q. Optically reconfigurable metasurfaces and photonic devices based on phase change materials. _NATURE PHOTONICS_**10**, 7 (2016). * [23] Wang, J., Wang, L. & Liu, J. Overview of Phase-Change Materials Based Photonic Devices. _IEEE Access_**8**, 121211-121245 (2020). * [24] Wuttig, M. Phase-change materials for non-volatile photonic applications. _NATURE PHOTONICS_**11**, 12 (2017). * [25] Fang, Z. _et al._ Ultra-low-energy programmable non-volatile silicon photonics based on phase-change materials with graphene heaters. _Nat. Nanotechnol._**17**, 842-848 (2022). * [26] Alexoudi, T., Kanellos, G. T. & Pleros, N. Optical RAM and integrated optical memories: a survey. _Light Sci Appl_**9**, 91 (2020). * [27] Liu, H.-B. & Chollet, F. Moving Polymer Waveguides and Letching Actuator for \(2\times 2\) MEMS Optical Switch. _Journal of Microelectromechanical Systems_**18**, 715-724 (2009). * [28] Geler-Kremer, J. _et al._ A Non-Volatile Optical Memory in Silicon Photonics. 3 (2021). * [29] Kampfe, T., Wang, B., Haussmann, A., Chen, L.-Q. & Eng, L. M. Tunable Non-Volatile Memory by Conductive Ferroelectric Domain Walls in Lithium Niobate Thin Films. 11 (2020). * [30] Barrios, C. A. & Lipson, M. Silicon Photonic Read-Only Memory. _JOURNAL OF LIGHTWAVE TECHNOLOGY_**24**, 8 (2006). * [31] Song, J.-F. _et al._ Integrated photonics with programmable non-volatile memory. _Sci Rep_**6**, 22616 (2016). * [32] Olivares, I., Parra, J. & Sanchis, P. Non-Volatile Photonic Memory Based on a SAHAS Configuration. _IEEE Photonics Journal_**13**, 9 (2021). * [33] Guo, X., Xiang, J., Zhang, Y. & Su, Y. Integrated Neuromorphic Photonics: Synapses, Neurons, and Neural Networks. _Advanced Photonics Research_**2**, 2000212 (2021). * [34] Parra, J. _et al._ Non-Volatile and Ultra-Compact Photonic Memory. 3. * [35] Parra, J., Olivares, I., Brimont, A. & Sanchis, P. Non-volatile epsilon-near-zero readout memory. 4. * [36] Grajower, M., Mazurski, N., Shappir, J. & Levy, U. Non-Volatile Silicon Photonics Using Nanoscale Flash Memory Technology. _Laser Photonics Rev._ 8 (2018). * [37] Miscuglio, M., Adam, G. C., Kuzum, D. & Sorger, V. J. Roadmap on Material-Function Mapping for Photonic- Electronic Hybrid Neural Networks. 22. * [38] Cheung, S. _et al._ Heterogeneous III-V/Si Non-Volatile Optical Memory: A Mach-Zehnder Memristor. 2 (2022). * [39] Tossoun, B., Sheng, X., Paul Strachan, J., Liang, D. & Beausoleil, R. G. Hybrid silicon MOS optoelectronic memristor with non-volatile memory. in _2020 IEEE Photonics Conference (IPC)_ 1-2 (2020). doi:10.1109/IPC47351.2020.9252481. * [40] Tossoun, B., Sheng, X., Strachan, J. P., Liang, D. & Beausoleil, R. G. Memristor Photonics. in Tu5B.3 (2021). * [41] Tossoun, B., Sheng, X., Strachan, J. P. & Liang, D. The Memristor Laser. in 7.6.1-7.6.4 (2020). doi:10.1109/IEDM13553.2020.9371989. * [42] Tossoun, B., Sheng, X., Strachan, J. P. & Liang, D. The memristor laser: the first-ever laser with non-volatile memory. 4. * [43] Adam, G. C., Khiat, A. & Prodromakis, T. Challenges hindering memristive neuromorphic hardware from going mainstream. _Nat Commun_**9**, 5267 (2018). * [44] Grajower, M. Post processing resonance trimming of a silicon micro-ring resonator using Flash memory technology. (2017). * [45] Li, Y., Ping, H., Dai, T., Chen, W. & Wang, P. Nonvolatile silicon photonic switch with graphene based flash-memory cell. * [46] She, M. Semiconductor Flash Memory Scaling. 133. * [47] Goda, A. Recent Progress on 3D NAND Flash Technologies. 16 (2021). * [48]_Micron Transitions to Next-Generation 3D NAND Replacement-Gate Technology_. [https://media-www.micron.com/~/media/client/global/documents/products/white-paper/micron_rg_3d_nand_whitepaper.pdf?utm_medium=pr&rev=315635e4acb04255a9009c0ad9300cc7](https://media-www.micron.com/~/media/client/global/documents/products/white-paper/micron_rg_3d_nand_whitepaper.pdf?utm_medium=pr&rev=315635e4acb04255a9009c0ad9300cc7) &utm_content=press-release&utm_source=investor&la=en&utm_campaign=176l (2020). * [49] Spassov, D. _et al._ Charge Storage and Reliability Characteristics of Nonvolatile Memory Capacitors with HfO2/Al2O3-Based Charge Trapping Layers. 15 (2022). * [50] Parra, J., Olivares, I., Brimont, A. & Sanchis, P. Toward Nonvolatile Switching in Silicon Photonic Devices. _Laser Photonics Rev._ 19 (2021). * [51] Kurczveil, G., Descos, A., Liang, D., Fiorentino, M. & Beausoleil, R. Hybrid Silicon Quantum Dot Comb Laser with Record Wide Comb Width. _Frontiers in Optics_ 2 (2020). * [52] Kurczveil, G. _et al._ On-Chip Hybrid Silicon Quantum Dot Comb Laser with 14 Error-Free Channels. in _2018 IEEE International Semiconductor Laser Conference (ISLC)_ 1-2 (2018). doi:10.1109/ISLC.2018.8516175. * [53] Kurczveil, G., Seyedi, M. A., Liang, D., Fiorentino, M. & Beausoleil, R. G. Error-Free Operation in a Hybrid-Silicon Quantum Dot Comb Laser. _IEEE Photonics Technology Letters_**30**, 71-74 (2018). * [54] Kurczveil, G. _et al._ High-temperature error-free operation in a heterogeneous silicon quantum dot comb laser. in (2022). * [55] Srinivasan, S., Liang, D. & Beausoleil, R. G. High Temperature Performance of Heterogeneous MOSCAP Microring Modulators. 3 (2021). * [56] Srinivasan, S., Liang, D. & Beausoleil, R. G. Heterogeneous SISCAP Microring Modulator for High-Speed Optical Communication. in _2020 European Conference on Optical Communications (ECOC)_ 1-3 (2020). doi:10.1109/ECOC48923.2020.9333221. * [57] Cheung, S. _et al._ Demonstration of a \(17\times 25\) Gb/s Heterogeneous III-V/Si DWDM Transmitter Based on (De-) Interleaved Quantum Dot Optical Frequency Combs. _JOURNAL OF LIGHTWAVE TECHNOLOGY_**40**, 9 (2022). * [58] Yuan, Y. _et al._ High Responsivity Si-Ge Waveguide Avalanche Photodiodes Enhanced by Loop Reflector. _IEEE Journal of Selected Topics in Quantum Electronics_**28**, 1-8 (2022). * [59] Yuan, Y. _et al._ 64 Gbps PAM4 Si-Ge Waveguide Avalanche Photodiodes With Excellent Temperature Stability. _J. Lightwave Technol._**38**, 4857-4866 (2020). * [60] Yuan, Y. _et al._ OSNR Sensitivity Analysis for Si-Ge Avalanche Photodiodes. _IEEE Photonics Technology Letters_**34**, 321-324 (2022). * [61] Huang, Z. _et al._ 25 Gbps low-voltage waveguide Si-Ge avalanche photodiode. 6. * [62] Tossoun, B. _et al._ 32 Gbps heterogeneously integrated quantum dot waveguide avalanche photodiodes on silicon. _Opt. Lett._**46**, 3821 (2021). * [63] Liang, D. _et al._ An Energy-Efficient and Bandwidth-Scalable DWDM Heterogeneous Silicon Photonics Integration Platform. _IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS_**28**, 19 (2022). * [64] Srinivasan, S., Liang, D. & Beausoleil, R. In-situ light measurement in heterogeneous gain media. in _2021 27th International Semiconductor Laser Conference (ISLC)_ 1-2 (2021). doi:10.1109/ISLC51662.2021.9615660. * [65] Srinivasan, S., Liang, D. & Beausoleil, R. Non-invasive light monitoring for heterogeneous photonic integrated circuits. in _2021 IEEE Photonics Conference (IPC)_ 1-2 (2021). doi:10.1109/IPC48725.2021.9593047. * [66] Cheung, S. _et al._ Ultra-Power Efficient Heterogeneous III-V/Si De-Interleavers for DWDM Optical Links. in 1-2 (2021). doi:10.1109/GFP51802.2021.9673963. * [67] Cheung, S. _et al._ Ultra-power-efficient heterogeneous III-V/Si MOSCAP (de-)interleavers for DWDM optical links. _Photonics Research_**10**, A22-A34 (2022). * [68] Cheung, S. _et al._ Heterogeneous III-V/Si (De-)Interleaver Filters with Non-Volatile Memristive Behavior. in _2022 IEEE Photonics Conference (IPC)_ 1-2 (2022). doi:10.1109/IPC53466.2022.9975647. * [69] Lian, C. _et al._ Photonic (computational) memories: tunable nanophotonics for data storage and computing. _Nanophotonics_**11**, 3823-3854 (2022). * [70] Gu, J. ICCAD: G: Light in Artificial Intelligence: Efficient Neurocomputing with Optical Neural Networks. 5. * [71] Rios, C. _et al._ In-memory computing on a photonic platform. _Sci. Adv._**5**, eaau5759 (2019). * [72] Shastri, B. J. _et al._ Photonics for artificial intelligence and neuromorphic computing. _Nat. Photonics_**15**, 102-114 (2021). * [73] Harris, N. C. Programmable Nanophotonics for Quantum Information Processing and Artificial Intelligence. 126. * [74] El Srouji, L. _et al._ Tutorial: Photonic and Optoelectronic Neuromorphic Computing. _APL Photonics_ 5.0072090 (2022) doi:10.1063/5.0072090. * [75] Yu, R. _et al._ A scalable silicon photonic chip-scale optical switch for high performance computing systems. _Opt. Express_**21**, 32655 (2013). * [76] Grani, P., Proietti, R., Cheung, S. & Ben Yoo, S. J. Flat-Topology High-Throughput Compute Node With AWGR-Based Optical-Interconnects. _J. Lightwave Technol._**34**, 2959-2968 (2016). 77] Sun, J., Timurdogan, E., Yaacobi, A., Hosseini, E. S. & Watts, M. R. Large-scale nanophotonic phased array. _Nature_**493**, 195-199 (2013). * [78] Poulton, C. V. _et al._ Long-Range LiDAR and Free-Space Data Communication With High-Performance Optical Phased Arrays. _IEEE J. Select. Topics Quantum Electron._**25**, 1-8 (2019). * [79] You, H.-C. _et al._ SONOS-Type Flash Memory Using an HfO2 as a Charge Trapping Layer Deposited by the Sol-Gel Spin-Coating Method. 3. * [80] Zhu, W., Ma, T. P., Kim', J., Gibson, M. & Furukawa, T. HfOt and HfAlO for CMOS: Thermal Stability and Current Transport. 4. * [81] Xiong, H. D. _et al._ Characterization of electrically active defects in high-k gate dielectrics by using low frequency noise and charge pumping measurements. _Microelectronic Engineering_ (2007). * [82] Liang, D. _et al._ Integrated Green DWDM Photonics for Next-Gen High-Performance Computing. in _2020 Optical Fiber Communications Conference and Exhibition (OFC)_ 1-3 (2020). * [83] Chrostowski, L. & Hochberg, M. _Silicon Photonics Design: From Devices to Systems_. (Cambridge: Cambridge University Press, 2015). * [84] Reed, T., Mashanovich, G., Gardes, Y. & Thomson, J. Silicon optical modulators. _nature photonics_**4**, (2010). * [85] Atlas User Manual. (2022). * [86] Sugizaki, T. _et al._ Novel Multi-bit SONOS Type Flash Memory Using a High-k Charge Trapping Layer. in (2003). * [87] Hlali, S. High-k dielectric materials for the gate oxide of a MIS capacitor: effect of interface states on the C-V characteristics. _J Comput Electron_. * [88] Shen, Y.-S. Flash Memory Featuring Low-Voltage Operation by Crystalline ZrTiO4 Charge-Trapping Layer. _Scientific RePOrTS_ 9. * [89] Li, J. A non-volatile AND gate based on Al2O3/HfO2/Al2O3 charge-trap stack for in-situ storage applications. _Science Bulletin_ (2019). * [90] Bogaerts, W. _et al._ Silicon microring resonators. _Laser & Photon. Rev._**6**, 47-73 (2012). * [91] Madsen, C. K. & Zhao, J. H. _Optical Filter Design and Analysis: A Signal Processing Approach_. (John Wiley & Sons, 1999). * [92] Kim, J., Kim, J., Cho, E.-C. & Yi, J. Analysis of HfO2 Charge Trapping Layer Characteristics After UV Treatment. _ECS Journal of Solid State Science and Technology_**10**,. * [93] Xu, W. Electronic Structure and Charge-Trapping Characteristics of the Al2O3-TiAlO-SiO2 Gate Stack for Nonvolatile Memory Applications. 7 (2017). * [94] Song, Y. S. & Park, G. Retention Enhancement in Low Power NOR Flash Array with High-\(\kappa\)-Based Charge-Trapping Memory by Utilizing High Permittivity and High Bandgap of Aluminum Oxide. 12 (2021). * [95] Zhang, E. _et al._ Tunable Charge-Trap Memory Based on Few-Layer MoS2. 8. * [96] Si, M. & Ye, P. D. The Critical Role of Charge Balance on the Memory Characteristics of Ferroelectric Field-Effect Transistors. 6. * [97] You, H.-W. & Cho, W.-J. Charge trapping properties of the HfO2 layer with various thicknesses for charge trap flash memory applications. _Appl. Phys. Lett._ 4. * [98] Zhang, Y. _et al._ Defect states and charge trapping characteristics of HfO2 films for high performance nonvolatile memory applications. _Appl. Phys. Lett._ 6 (2014). * [99] Castagnc, R. & Vapaille, A. MEANS OF VERY LOW FREQUENCY MOS CAPACITANCE MEASUREMTS. * [100] Xia, P. Impact and Origin of Interface States in MOS Capacitor with Monolayer MoS2 and HfO2 High-k Dielectric. _Scientific Reports_. * [101] Sze, S. _Physics of Semiconductor Devices_. (Wiley, 2021). * [102] Heyn, P. D. Receivers Based on Silicon Ring Resonators for Multi-Wavelength Optical Interconnects. 270. * [103] De Heyn, P. _et al._ Fabrication-Tolerant Four-Channel Wavelength-Division-Multiplexing Filter Based on Collectively Tuned Si Microrings. _J. Lightwave Technol._**31**, 2785-2792 (2013). * [104] Liang, D. _et al._ High-performance quantum-dot distributed feedback laser on silicon for high-speed modulations. _Optica_**8**, 591-593 (2021). * [105] Descos, A., Kurczveil, G., Liang, D. & Beausoleil, R. Heterogeneous O-Band InAs/GaAs Quantum-Dot Optical Amplifier on Silicon. in (2021). * [106] Cheung, S. _et al._ Comparison of Al2O3 and HfO2 MOSCAP III-V/Si Power Splitters and (De-) Interleavers for DWDM Optical Links. (2022). * [107] Tossoun, B. _et al._ Indium arsenide quantum dot waveguide photodiodes heterogeneously integrated on silicon. * [108] Yuan, Y. _et al._ Low-phase quantization error Mach-Zehnder interferometers for high-precision optical neural network training. _APL Photonics_ (2023). * [109] Jha, A., Huang, C. & Prucnal, P. R. Reconfigurable all-optical nonlinear activation functions for neuromorphic photonics. _Opt. Lett._**45**, 4819-4822 (2020). * [110] Ohno, S., Tang, R., Toprasertpong, K., Takagi, S. & Takenaka, M. Si Microring Resonator Crossbar Array for On-Chip Inference and Training of the Optical Neural Network. _ACS Photonics_ (2022). * [111] Dong, P., Chen, L. & Chen, Y. High-speed low-voltage single-drive push-pull silicon Mach-Zehnder modulators. (2012). * [112] Feldmann, J. Parallel convolutional processing using an integrated photonic tensor core. * [113] Li, Y. Low-voltage ultrafast nonvolatile memory via direct charge injection through a threshold resistive-switching layer. _Nature Communications_ 9 (2022). * [114] Xiang, D. Two-dimensional multibit optoelectronic memory with broadband spectrum distinction. _NATURE COMMUNICATIONS_ 8 (2018). ## Competing interests The authors declare no competing interest. ## Acknowledgements We thank funding from DOE ARPA-E ULTRALIT contract No. DE-AR0001039, and USG MPO contract No. H98230-18-3-0001. We thank the UCSB nanofabrication facilities. We also thank Sung-Won Kong and Garrett Schlenvogt from Silvaco, Inc. for providing guidance and support with the ATLAS simulation solver. ## Author contributions S.C. conceived the initial concept, and designed devices. D.L. designed the MOSCAP structure, fabrication flow, and was heavily involved in data analysis and computing architecture design. G.K. and Y.H. fabricated the devices and suggested improvements in the design phase. S.C., Y.Y., and Y.P. conducted the chip testing. D.L. and R.B. managed the project and gave important technical advice. All authors reviewed the manuscript. ## Supplementary * Stanley Cheung\({}^{*}\), Di Liang\({}^{*}\), Yuan Yuan\({}^{*}\), Yiwei Peng, Yingtao Hu, Geza Kurczveil, and Raymond G. Beausoleil * Hewlett Packard Enterprise, Large-Scale Integrated Photonics Lab, Milpitas, CA. 95035, USA * [email protected], [email protected], [email protected] ## Supplementary Note 1: ### Electro-optical simulations and design A 2-D electrical solver (SILVACO ATLAS [7]) was used to study the non-volatile electrical behavior of the III-V/Si CTM structure. This modeling software has been used successfully in other theoretical optical non-volatile structures [8, 9]. The program numerically solves the Poisson, charge continuity equations, drift-diffusion transport, and quantum tunneling mechanisms. The models involved include Fermi-Dirac statistics, Shockley-Read-Hall recombination, quantum tunneling that includes direct and Fowler-Nordheim [7]. The semiconductor material parameters used for optical and electronic TCAD simulations are listed in Table _1_: Material parameters used in electro-optical simulations. Once the structure is built with the appropriately assigned models and trapped charge density, the band diagrams, C-V curves, and electron/hole concentrations are simulated for various biases. The spatial profiles of the change in electron/hole concentrations (\(\Delta\)P(x,y) and \(\Delta\)N(x,y)) are then exported and used to calculate a spatial refractive index change \(\Delta\)n(x,y) by the following equation [21, 22]: \[\Delta n(x,y)=-6.2\times 10^{-22}\,\Delta N(x,y)-6.0\times 10^{-18}\,\Delta P(x,y) \qquad(1)\] This equation is valid for a wavelength \(\lambda=1310\) nm and is used throughout the manuscript. Next, this spatial refractive index change \(\Delta\)n(x,y) is exported into a 2D-FDE optical mode solver (Lumerical) and the non-volatile change in effective index (\(\Delta_{\text{neft,non-volatile}}\)) is calculated. This change can then be used to calculate non-volatile phase shifts on photonic devices such as ring resonators, Mach-Zehnder interferometers, filters, etc. The figure below illustrates the simulation procedure: \begin{table} \begin{tabular}{c c c c c c c c} \hline Material & E\({}_{\text{g}}\) (eV) & \(\chi\) (eV) & VBO (eV) & CBO (eV) & n & N\({}_{\text{TC}}\) (cm\({}^{-3}\)) & \(\varphi_{\text{d}}\) (eV) \\ \hline \hline Al\({}_{2}\)O\({}_{3}\) & 7.0 [10] & 1.5 [11, 12, 13] & 3.2 [10] & 2.7 [10] & 1.75 & - & - \\ HfO\({}_{2}\) & 5.7 [10] & 2.5 [10] & 2.7 [10] & 1.9 [10] & 1.9 [10] & 10\({}^{19}\) - 10 [20, 14, 15] & 2.0 [16, 17, 18, 19] \\ SiO\({}_{2}\) & 8.9 [10] & 1.3 [10] & 4.5 [10] & 3.3 [10] & 1.44 & - & - \\ Si & 1.1 [10] & 4.1 [10] & - & - & 3.507 [20] & - & - \\ GaAs & 1.43 [7] & 4.07 [7] & - & - & 3.406 [20] & - & - \\ \hline \end{tabular} E\({}_{\text{g}}\): Energy gap, \(\chi\): electron affinity, n: refractive index (at \(\lambda=1310\) nm), VBO: valance band offset, CBO: conduction band offset, N\({}_{\text{TC}}\): trap density, \(\varphi_{\text{d}}\): electron trap level \end{table} Table 1: Material parameters used in electro-optical simulations An alternative use of the non-volatile III-V/Si CTM cell is the role it can play in post-fabrication trimming or permanent phase error correction without consuming static electrical power. Inherent to the high index contrast of the silicon photonic material system, phase sensitive devices such as arrayed waveguide gratings (AWGs), lattice filters, and (de-)interleavers are sensitive to phase errors and are dependent on waveguide width, thickness, and refractive index non-homogeneity. In this case, the ability to use non-volatile phase tuning to correct the aforementioned errors is essential to minimizing power consumption. Changes in resonant the resonant wavelength can be described by the following equation [4]: \[\Delta\lambda_{0}=\left(\lambda_{0}\,/\,n_{g}\right)\sqrt{\left(dn_{\mathit{ eff}}\,/\,dw\cdot\Delta w\right)^{2}+\left(dn_{\mathit{eff}}\,/\,dt\cdot\Delta t \right)^{2}} \tag{12}\] \(\lambda_{0},n_{\mathit{eff}},n_{g},\Delta w,\mathrm{and}\,\Delta t\) are the free-space wavelength, effective index, group index, width variation, and thickness variation respectively. Along with the group index \(n_{g}=n_{\mathit{eff}}-\lambda_{0}\cdot dn_{\mathit{eff}}\,/\,d\lambda\), the resonant wavelength variation for each dimension can be calculated as: \(\Delta\lambda_{0}\,/\,\Delta w=\left(\lambda_{0}\,/\,n_{g}\right)\left(dn_{ \mathit{eff}}\,/\,dw\right)\) and \(\Delta\lambda_{0}\,/\,\Delta t=\left(\lambda_{0}\,/\,n_{g}\right)\left(dn_{ \mathit{eff}}\,/\,dt\right)\). The effective index and group index as a function of width and thickness are plotted in Fig. 9a - b. It can be seen that both values increase as waveguide dimensions increase because of increased modal confinement. Fig. 9c - d illustrates width sensitivity (\(\mathrm{dn_{eff}}\)/dw) and thickness sensitivity (\(\mathrm{dn_{eff}}\)/dt). Throughout the paper, single-mode III-V/Si CTM waveguides are used and have design dimensions of \(\mathrm{height=300\ nm}\), \(\mathrm{width=500nm}\), \(\mathrm{etch\ depth=170nm}\), and \(\mathrm{GaAs}\) thickness of \(150\ nm\), thus resulting in effective index variations of \(\mathrm{dn_{eff}}\)/\(\mathrm{dw=5.80\times 10^{-4}\,/nm}\) and \(\mathrm{dn_{eff}}\)/\(\mathrm{dt=-4.44\times 10^{-5}\,/nm}\). Typically, the most critical parameter in controlling phase errors is the starting SOI wafer thickness uniformity. Figure 8: Flowchart of electro-optical simulation procedure for III-V/Si CTM structure However, for the III-V/Si case, the GaAs thickness reduces this SOI thickness sensitivity significantly, and as a result, width variations play a larger role in phase errors by an order of magnitude. The wavelength shift variation are \(\Delta\lambda_{0}\)/dw = 0.2457 nm/nm and \(\Delta\lambda_{0}\)/dt = -0.0187 nm/nm as shown in Fig. 9e - f. The use of wider waveguides can significantly reduce \(\Delta\lambda_{0}\)/dw by another order of magnitude, however, the TE01 mode starts to appear at a width of 600 nm as seen in Fig. 9b. We believe, the III-V/Si CTM waveguide dimensions used throughout this paper offers the best design trade-off in terms of low \(\mathrm{dn_{eff}}\)/dw and \(\mathrm{dn_{eff}}\)/dt while maintaining single-mode operation. However, if we assume a charge trap concentration of \(\mathrm{Q_{TC}>9\times 10^{19}\leavevmode\nobreak\ cm^{-3}}\) and a \(\mathrm{dn_{eff}}\)/dw = 5.80 \(\times\) 10\({}^{4}\) /nm, the III-V/Si CTM cell may be capable of correcting several nm of errors in waveguide width. Supplementary Note 2: Fabrication In-house device fabrications starts with a 100 mm SOI wafer which consists of a 350 nm thick top silicon layer and a 2 um buried oxide (BOX) layer. The top silicon is thinned down to 300 nm by thermal oxidation and buffered hydrofluoric (HF) acid etching, thus leaving a clean silicon surface. Silicon waveguides are defined by a deep-UV (248 nm) lithography stepper and boron is implanted to create p \(++\) silicon contacts. Grating couplers, silicon rib waveguides, and vertical out-gassing channels (VOCs) are respectively patterned using the same deep-UV stepper and then subsequently etched 170 nm with \(\mathrm{Cl_{2}}\)-based gas chemistry. Next, the silicon wafer goes through a Piranha clean followed by buffered hydrofluoric (HF) Figure 9: III-V/Si CTM optical mode calculations for (a) effective index (\(\mathrm{n_{eff}}\)), (b) group index (\(\mathrm{n_{g}}\)), (c) effective index change vs. waveguide width (\(\mathrm{dn_{eff}}\)/dw), (d) effective index change vs. waveguide thickness (\(\mathrm{dn_{eff}}\)/dt), (e) wavelength shift (\(\Delta\lambda_{0}\)/dw) vs. waveguide width, (f) wavelength shift vs. waveguide thickness (\(\Delta\lambda_{0}\)/dt). acid etching to remove any hard masks. Next, an oxygen plasma clean is performed followed by a SC1 and SC2 clean and HF dip. The III-V wafer goes through a solvent clean consisting of acetone, methanol, and IPA, followed by oxygen plasma cleaning and a short dip in NH\({}_{4}\)OH:H\({}_{2}\)O solution. Next a dielectric of Al\({}_{2}\)O\({}_{3}\) is deposited onto both GaAs and Si wafers via atomic layer deposition (ALD) with a target thickness of 0.5 nm on each side followed by 3 nm HfO\({}_{2}\) and 1 nm Al\({}_{2}\)O\({}_{3}\) depositions. The two samples are then mated and annealed at 250 \({}^{\circ}\)C. After thermal anneal, the III-V substrate is selectively removed by a combination of mechanical lapping and selective wet etching in H\({}_{2}\)O\({}_{2}\):NH\({}_{4}\)OH solution. Upon removing AlGaAs wet stop layer in buffered hydrofluoric acid (BHF) solution, n-GaAs surface is exposed. Ge-based n-contact metallization process is conducted beforethe III-V mesa is defined and dry etched to expose Si surface and metallization process on p\(++\) Si. Next, a plasma enhanced chemical vapor deposition (PECVD) SiO\({}_{2}\) cladding is deposited and the vias are defined and etched. Finally, thick metal probe pads are defined to make contact with n- and p-contacts. The relevant fabricated devices can be seen in Fig. 11a-b. Fig. 10 Fabrication flow of heterogeneous III-V/Si CTM device. Fig. 11: Schematic view of the device. ## Supplementary Note 3: Measurements The spectral response of the devices were characterized with a Thorlabs superluminescent diode (SLD) capable of 40 nm bandwidth (1290 - 1330 nm). The 100 mm wafer is vacuum mounted onto a stainless steel chuck on a semi-automatic probe station. Light is vertically coupled in/out of devices via grating couplers with a 7 \({}^{\circ}\) polished fiber array. Polarization control is performed with the use of a polarization controller and maximized for peak transmission on a straight test waveguide. C-V, I-V, and optical spectra measurements are performed with an Agilent E4980A, Keithley 2400, and Yokogawa AQ6370D respectively. The pre-bonded 0.5 \(\upmu\)m wide straight Si waveguide TE losses were determined to be about \(\sim\) 9.2 dB/cm primarily due to sidewall roughness for a wavelength of 1310 nm from a series of cutback test structures. 0.8 \(\upmu\)m wide multi-mode straight Si waveguide TE losses were about 9.8 dB/cm. Circular bends of radius = 2, 5, 7, 14 \(\upmu\)m had bend losses = 1.22, 0.83, 0.3, 0.08 dB/90\({}^{\circ}\) bend respectively. Grating coupler loss before and after bonding were calculated to be 7.7 and 7.8 dB/coupler indicating negligible effect after III-V removal. _Fig. 12_a shows the measured transmission spectra of the 2\({}^{\text{nd}}\) order cascaded ring resonator filter. The filter shape is far from ideal and transfer matrix modeling indicates power coupling coefficients of \(\kappa_{0}=0.43\), \(\kappa_{1}\) = 0.02 for "Ring bank 1" and \(\kappa_{0}=0.43\), \(\kappa_{1}=0.09\) for "Ring bank 2" as shown in Fig. 12b. By applying a write operation (0 \(\xrightarrow{}+9\xrightarrow{}0\) V), a \(\Delta\lambda_{\text{non-volatile}}=0.041\) is achieved which corresponds to a \(\Delta\alpha_{\text{eff, non-volatile}}=1.5\times 10^{-4}\). This change in effective index is similar to the MZI CTM result (\(\Delta\alpha_{\text{eff, non-volatile}}=2.5\times 10^{-4}\)). Non-volatile optical retention measurements for the "write" and "reset" states were performed over a 24 hour period and are shown in Fig. 13a, and b respectively. Supplementary Note 4: Transmission electron microscope (TEM) measurements of initial, set, and reset states. We performed transmission electron microscope (TEM) imaging of initial, set, and reset states as shown in _Fig. 14_a-b respectively. Each image is a separate sample that was biased and subsequently imaged. There Figure 12: III-V/Si CTM 2\({}^{\text{nd}}\) order cascaded ring resonator filters: (a) measured, and (b) calculated. Measured \(\Delta\lambda_{\text{non-volatile}}=0.041\) nm corresponds to a \(\Delta\alpha_{\text{eff, non-volatile}}=1.5\times 10^{-4}\). Figure 13: 24 hour data tracking for (a) “write” state optical spectra, (b) non-volatile wavelength shift for a resonant minimum. (c) “Reset” state optical spectra, and (d) non-volatile wavelength reset. does not appear to be any visual evidence of dielectric breakdown in the n-GaAs/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)/Si CTM structure. There are contrast differences and may indicate variability in sample preparation as well as imaging settings. The electron dispersive spectroscopy (EDS) line scans below the TEM images also indicate minimal atomic/interfacial changes.
2306.12178
Breaking small automorphisms by list colourings
For a graph G, we define a small automorphism as one that maps some vertex into its neighbour. We investigate the edge colourings of G that break every small automorphism of G. We show that such a colouring can be chosen from any set of lists of length three. In addition, we show that any set of lists of length two on both edges and vertices of G yields a total colouring which breaks all the small automorphisms of $G$. These results are sharp and they match the non-list variants.
Jakub Kwaśny, Marcin Stawiski
2023-06-21T11:14:56Z
http://arxiv.org/abs/2306.12178v1
# Breaking small automorphisms by list colourings ###### Abstract For a graph \(G\), we define a small automorphism as one that maps some vertex into its neighbour. We investigate the edge colourings of \(G\) that break every small automorphism of \(G\). We show that such a colouring can be chosen from any set of lists of length three. In addition, we show that any set of lists of length two on both edges and vertices of \(G\) yields a total colouring which breaks all the small automorphisms of \(G\). These results are sharp and they match the non-list variants. **Keywords**: distinguishing index, symmetry breaking, infinite graphs, colourings **Mathematics Subject Classifications**: 05C15, 05C78 ## 1 Introduction The concept of _distinguishing_ vertex colourings was introduced in 1977 by Babai [1], as the colourings which are preserved only by the identity automorphism of the graph. The natural optimization problem is to minimize the number of colours in such a colouring, and this minimum number for a given graph \(G\) is called the _distinguishing number_ of \(G\), and denoted by \(D(G)\). We study the following problem proposed by Kalinowski, Pilsniak and Wozniak [2]. An automorphism \(\varphi\) of a graph \(G\) is _small_ if, for some vertex \(v\in V(G)\), \(\varphi(v)\) is a neighbour of \(v\). We are interested in the minimum number of colours needed to break every small automorphism of \(G\), which is called the _small distinguishing index_ of \(G\) and denoted by \(D^{\prime}_{s}(G)\). This problem is strongly connected with the concept of general distinguishing of adjacent vertices by edge colourings, where the distinguishing condition is generally stronger than just breaking the automorphisms. In particular, we may demand that the incident vertices have different sums (or sets, or multisets, etc.) of the colours (which are then restricted to the positive integers) on the incident edges. One of the central problems in this field was 1-2-3 Conjecture posed by Karonski, Luczak and Thomason [3], which states that the set of colours \(\{1,2,3\}\) is sufficient for any finite graph without \(K_{2}\) as a component to admit an edge colouring so that the sums of colours on the incident edges are different for any two neighbouring vertices. This conjecture was recently confirmed by Keusch [4], and it was then generalized to locally finite graphs by Stawiski [5]. Kalinowski, Pilsniak and Wozniak [2] proved that \(D^{\prime}_{s}(G)\leq 3\) for any finite graph \(G\) without \(K_{2}\) as a component. They also showed that only two colours are sufficient for total colourings. We generalize both these results into the list colouring setting. Moreover, our result holds for both finite and infinite graphs. We follow the notation in [2] and extend it to the list colourings. In particular, we denote by \(D^{\prime}_{l,s}(G)\) the _small list distinguishing index_ of \(G\), i.e. the least size of the lists assigned to the edges of \(G\) such that there exists an edge colouring from any set of lists of this size which breaks every small automorphism of \(G\). We prove that \(D^{\prime}_{l,s}(G)\) is at most three for any finite or infinite locally finite graph without components isomorphic to \(K_{2}\), and we prove that two colours suffices for the analogous problem for total colourings. This is a support for the list version of 1-2-3 Conjecture, and the total list version of 1-2-3 Conjecture, which still remain open. Note that both results are sharp because \(D^{\prime}_{l,s}(G)=3\) for \(G\in\{K_{3},K_{4},K_{5},C_{4},C_{5}\}\). ## 2 Edge colourings Let \(G\) be an arbitrary graph and \(r\) be a vertex of \(G\). We say that \((G,r)\) is a _rooted_ graph, and we refer to \(r\) as the _root_ of \(G\). The set of automorphisms of \((G,r)\), denoted by \(\operatorname{Aut}(G,r)\), is the set of these automorphisms of \(G\) which fix \(r\). We denote the set of all the automorphisms of \(G\) by \(\operatorname{Aut}(G)\). The proof of the main theorems relies on the following lemma, which asserts that the lists of length \(2\) are almost sufficient to break all the small automorphisms of any locally finite graph. **Lemma 1**.: _Let \(G\) be a connected locally finite graph other than \(K_{2}\), and let \(r\) be an arbitrary vertex of \(G\). Then \(G\) admits a colouring from the list of length \(2\), which breaks all the small automorphisms of \((G,r)\)._ Proof.: First, consider the case that the graph \(G\) has bounded degree. This part of the proof is by induction on \(\Delta(G)\). If \(\Delta(G)\leq 2\), then \(G\) is the ray, the double ray, a path, or a cycle, and it is easy to verify that the claim holds. Assume then that \(\Delta(G)\geq 3\). Choose a vertex \(r\in V(G)\). We shall construct a colouring of the edges of \(G\) that breaks all small automorphisms in \(\operatorname{Aut}(G,r)\). Let \(\mathcal{A}\) be the set of orbits with respect to \(\operatorname{Aut}(G,r)\). We fix some ordering \(\mathcal{A}=\{A_{0},A_{1},\dots\}\) such that if \(i<j\), then the vertices of \(A_{i}\) have less or equal distance from \(r\) than the vertices of \(A_{j}\) (note that this distance is constant within each orbit). In particular, \(A_{0}=\{r\}\). We shall process these orbits one by one, starting with \(A_{1}\), and in each step \(i\) we shall choose colours for all the edges between the vertices of \(A_{i}\) and the vertices of \(\bigcup_{j\leq i}A_{j}\). Let \(A_{i}\) be the currently processed orbit. The vertices of \(A_{i}\) have at least one back edge, i.e. an edge to already coloured orbits. Therefore, \(\Delta(G[A_{i}])<\Delta(G)\) and we can colour the edges of each component \(H\) of \(G[A_{i}]\) using the induction hypothesis, and break all the small automorphisms of \((H,r_{H})\) for some \(r_{H}\in V(H)\). We do not require these components to have non-isomorphic colourings, because we just need to break the small automorphisms in each component. Then, in each component \(H\), we must fix the vertex \(r_{H}\) to obtain a colouring that breaks all the small automorphisms of \(H\). We achieve it by colouring some edge incident to \(r_{H}\), which lies on a shortest path from this vertex to \(r\) (the other end-vertex of that edge is in a previous orbit) using one of the two colours from its list, say blue, and then colouring all the other edges from the same component to the previous orbits with arbitrary colours other than blue. We now argue that after repeating these steps for each orbit \(A_{i}\), \(i\geq 1\), we break all the small automorphisms of \((G,r)\). Let \(\varphi\) be a small automorphism of \(G\) that fixes \(r\). Then, there is a vertex \(x\) such that \(\varphi(x)\in N(x)\). These two vertices \(x\) and \(\varphi(x)\) are contained in some orbit \(A_{i}\) with respect to \(\operatorname{Aut}(G,r)\), and since they are neighbours, they must lie in the same component \(H\) of this orbit. As our colouring restricted to \(H\) breaks all the small automorphisms of \((H,r_{H})\), and \(r_{H}\) is the only vertex in \(H\) with a blue edge to a previous orbit, then \(\varphi\) must change a colour of either this blue edge or one of the edges inside \(H\). This means that \(\varphi\) cannot be preserved by the colouring. Finally, if \(G\) has vertices of arbitrarily large degrees, then we perform the same procedure as above, and at the point when we used the induction hypothesis, we just use the claim for a finite maximum degree as all the considered orbits are finite. **Theorem 2**.: _Let \(G\) be a locally finite graph without a \(K_{2}\) component. Then \(D^{\prime}_{l,s}(G)\leq 3\)._ Proof.: Let \(G=(V,E)\) be a locally finite graph with no \(K_{2}\) components, and let \(\Delta(G)\) be its maximum degree. Note that it is sufficient to break only the automorphisms of each connected component of \(G\), since any automorphism that maps one component into another is a composition of a small automorphism that stabilizes all the components or the identity, and a non-small automorphism. Therefore, we shall colour each component of \(G\) separately. Let \(H\) be a component of \(G\). If \(\Delta(H)\leq 2\), then again the proof is straightforward. We shall assume that \(\Delta(H)\geq 3\). Choose a vertex \(r\in V(H)\) of degree at least \(3\). Take any edge incident to \(r\) and name any colour from its list pink (without colouring the edge, at this point we just choose a name for the colour). Remove the colour pink from all the lists and use Lemma 1 to obtain a colouring \(c\) of the subgraph \(H\) from the modified lists which breaks all the small automorphisms of \((H,r)\). We shall now make a slight correction of this colouring to fix \(r\) and keep the distinction of the rest of the subgraph. If \(r\) is fixed by now, then \(c\) is already the desired colouring. Otherwise, we shall recolour one of the edges incident to \(r\) with pink. We consider all such recolourings, for all the neighbours of \(r\). If there is a neighbour \(x\) of \(r\) such that recolouring the edge \(rx\) with pink yields a colouring that fixes \(r\) with respect to \(\operatorname{Aut}(H)\), then we implement this change and return the resulting colouring. Assume now that no such neighbour exists. If all the edges incident to \(r\) are of the same colour, say blue, then we recolour an arbitrary edge \(rx\) with pink and some other edge \(ry\) with a colour other than pink and blue, say red. Note that \(r\) and \(x\) became the only two vertices with a pink incident edge, and \(r\), unlike \(x\), has an incident red edge, so they are distinguished. If, otherwise, there are at least two different colours on the edges incident to \(r\), say \(rx\) and \(ry\), then we recolour \(ry\) with the colour of \(rx\) (say red) and \(rx\) with pink. Here we must be a bit more careful, and if possible, choose \(x\) and \(y\) from the same component of the same orbit, so that \(x\) is the root of that component. This way, we fix \(r\) for the exact same reason as in the previous case. Let now \(\varphi\) be a small automorphism of \(H\) which is preserved by the resulting colouring. By the arguments above, \(\varphi\) must fix \(r\). Since \(c\) broke all the small automorphisms of \((H,r)\), there must be an edge \(e\) such that \(c(e)\neq c(\varphi(e))\). Then, \(e\) or \(\varphi(e)\) must have changed its colour during the correction, so \(e=rx\) and \(\varphi(e)=ry\) for some \(x,y\). The new colour for \(e\) or \(\varphi(e)\) cannot be pink, as there is only one pink edge in the resulting colouring. Hence, the new colour must be red, and it is the case that there were at least two different colours on the edges incident to \(r\) in the colouring \(c\). Then, either \(x\) and \(y\) are in the same component of the same orbit (and, since \(x\) is its root, there would have to be a second edge inside this component, mapped by \(\varphi\) into an edge of a different colour), or any component of any orbit contained in \(N(r)\) has the same colour on the edges to \(r\) (so these colours were not relevant for breaking \(\varphi\) and some other edge not incident to \(r\) changes colour after applying \(\varphi\)). In both cases, \(\varphi\) cannot be preserved by our colouring. ## 3 Total colourings **Theorem 3**.: _Let \(G\) be a locally finite graph without a \(K_{2}\) component and \(\mathcal{L}=\{L_{x}\}_{x\in V\cup E}\) be the set of lists of length two for the vertices and edges of \(G\). Then, \(G\) admits a total colouring from lists, which breaks all the small automorphisms of \(G\)._ Proof.: We again consider each connected component \(H\) of \(G\) separately. Let \(r\) be an arbitrary vertex in \(H\). By Lemma 1, there is an edge colouring of \(H\) from the lists \(\{L_{e}\}_{e\in E}\), which breaks every small automorphism of \((H,r)\). Then, choose the colours for the vertices so that \(r\) has a unique colour in the component.
2302.09911
Fair $k$-Center: a Coreset Approach in Low Dimensions
Center-based clustering techniques are fundamental in some areas of machine learning such as data summarization. Generic $k$-center algorithms can produce biased cluster representatives so there has been a recent interest in fair $k$-center clustering. Our main theoretical contributions are two new $(3+\epsilon)$-approximation algorithms for solving the fair $k$-center problem in (1) the dynamic incremental, i.e., one-pass streaming, model and (2) the MapReduce model. Our dynamic incremental algorithm is the first such algorithm for this problem (previous streaming algorithms required two passes) and our MapReduce one improves upon the previous approximation factor of $(17+\epsilon).$ Both algorithms work by maintaining a small coreset to represent the full point set and their analysis requires that the underlying metric has finite-doubling dimension. We also provide related heuristics for higher dimensional data and experimental results that compare the performance of our algorithms to existing ones.
Jinxiang Gan, Mordecai Golin, Zonghan Yang, Yuhao Zhang
2023-02-20T11:13:48Z
http://arxiv.org/abs/2302.09911v1
# Fair \(k\)-Center: a Coreset Approach in Low Dimensions ###### Abstract Center-based clustering techniques are fundamental in some areas of machine learning such as data summarization. Generic \(k\)-center algorithms can produce biased cluster representatives so there has been a recent interest in _fair_\(k\)-center clustering. Our main theoretical contributions are two new \((3+\epsilon)\)-approximation algorithms for solving the fair \(k\)-center problem in (1) the dynamic incremental, i.e., one-pass streaming, model and (2) the MapReduce model. Our dynamic incremental algorithm is the first such algorithm for this problem (previous streaming algorithms required two passes) and our MapReduce one improves upon the previous approximation factor of \((17+\epsilon).\) Both algorithms work by maintaining a small _coreset_ to represent the full point set and their analysis requires that the underlying metric has finite-doubling dimension. We also provide related heuristics for higher dimensional data and experimental results that compare the performance of our algorithms to existing ones. Machine Learning, ICML ## 1 Introduction Data summarization is one of the most important problems in the area of machine learning. Its goal is to compute a small set of data which captures the key features of the original data set. Performing further work, e.g., running machine learning algorithms, on this small summary data set can be more efficient but almost as effective as running them on the whole set. One issue with standard data summarization algorithms is that they often produce a summary which is _non-representative_ of other aspects of the population as a whole, e.g., they are biased with respect to attributes such as gender, race, and age (see, e.g., (Kay et al., 2015)). and is therefore _unfair_. There has been much recent work in trying to alleviate this problem by developing technique for fair representation, in particular _fair \(k\) center_ (see e.g. (Kleindessner et al., 2019; Chiplunkar et al., 2020; Jones et al., 2020; Angelidakis et al., 2022)). Going further, there is also interest in solving the fair \(k\)-center problem for _large_ data sets, either using streaming algorithms (for one processor) or a large number of processors in parallel. That is the problem that we address in this paper. In particular, we revisit the streaming and Mapreduce problems addressed in (Chiplunkar et al., 2020) and develop a new _coreset_ based approach for metric spaces that have fixed doubling dimension (Defined in Section 2). This provides both better theoretical results and, in most practical examples, real performance. ### Definition of the Fair \(k\)-Center Problem Let \((X,d)\) denote a metric space and \(P\subset X\) be a set of points. Each point in \(P\) belongs to exactly one of \(m\) groups, \(\{1,...,m\}\). Let \(g:X\rightarrow\{1,...,m\}\) denote the group assignment function. Each group \(j\), has an associated fixed capacity \(k_{j}\) and \(k=\sum_{j=1}^{m}k_{j}\). A (center) subset \(S\subset P\) is called _feasible_ if for every \(j\), set \(S\) contains at most \(k_{j}\) points from group \(j\). The goal is to compute a feasible set \(S\) of centers so as to minimize \(C(S)=\max_{p\in P}\min_{s\in S}d(p,s)\). \(C(S)\) is called the cost of solution \(S\). The special case, \(m=1\), is the well known and studied \(k\)-center problem. Let \(OPT\) denote the cost of an optimal solution. An \(\rho\)-approximation algorithm for the problem would find a feasible set of centers \(C^{\prime},\) such that \(C(S^{\prime})\leq\rho\cdot OPT\). In particular, it is known that the plain \(k\)-center problem (\(m=1\)) is NP-hard to \((2-\epsilon)\)-approximate (Hsu and Nemhauser, 1979) for any \(\epsilon>0\), while there do exist some well known 2-approximation algorithms (Gonzalez, 1985; Hochbaum and Shmoys, 1985) for solving it. This paper studies the fair \(k\)-center problem in the MapReduce and streaming settings. The _MapReduce_ model was introduced by Google (Dean and Ghemawat, 2008). In this setting, a set of processors process data in a sequence of parallel rounds on a large number of machines, each with only limited memory. In addition, only small amounts of inter-machine communication are permitted. The _streaming_ model provides a mechanism to deal with large volumes of data in a limited-memory single-core processor by restricting access to sequential passes over the data (with only a limited amount of other working memory available). In particular, a _one-pass_ streaming algorithm may only see each piece of data once. One-pass streaming algorithms are essentially dynamic incremental algorithms that are only permitted limited working memory. Chiplunkar et al. (2020) study the fair \(k\) center problem in those two models. They show a \((3+\epsilon)\)-approximation two-pass streaming algorithm and a \((17+\epsilon)\)-approximation MapReduce algorithm. In practice, it is known that the metrics in many real-world datasets possess _finite doubling dimension_ (see definition in Section 2) Talwar (2004). Assuming finite doubling-dimension we develop better algorithms for the same problems. More specifically, * we show a deterministic \((3+\epsilon)\)-approximation _one-pass_ streaming algorithm. Unlike the best known \((3+\epsilon)\)-approximation _two-pass_ streaming algorithm of Chiplunkar et al. (2020) this only accesses each data point once and is actually a _dynamic incremental algorithm_. Although Chiplunkar et al. (2004) provides a dynamic incremental algorithms for the standard \(k\)-center problem, ours is the first such algorithm for the fair \(k\)-center one. * we show a deterministic \((3+\epsilon)\)-approximation MapReduce algorithm which theoretically and practically improves upon the \((17+\epsilon)\) approximation algorithm in Chiplunkar et al. (2020). Our MapReduce algorithm only has one communication round. After each processor preprocesses its own internal data it sends a small summary to the coordinator. Combining the summaries of all processors, the coordinator can generate a solution with a good global approximation ratio. * we run experiments to illustrate the practicality of our algorithms in both settings. More specifically, our theoretical guarantees only hold for fixed doubling-dimension, i.e., in low dimensions, so we also developed practical heuristics based on our algorithms that work in higher dimensions and ran experiments on them using the same data upon which Chiplunkar et al. (2020) was tested (including some high-dimensional data sets) and provide a comparison. Our main tool is the coreset approach (also used by Ceccarello et al. (2019) to attack \(k\)-centers with outliers). We conclude by noting that Chiplunkar et al. (2020) proved that achieving a \((4-\epsilon)\)-approximation to \(k\)-center in the MapReduce model with limited communication complexity is NP-hard. The reason our \((3+\epsilon)\)-approximation does not violate their bound is that their proof assumed a general metric, while our algorithms assume metrics with bounded doubling dimension. ### Related Works Chen et al. (2016) developed a 3-approximation algorithm that ran in \(O(n^{2}\log n)\) time. Kleindessner et al. (2019) then give a linear time algorithm with approximation ratio \(O(2^{m})\), where \(m\) is the number of groups in the input. Finally, Jones et al. (2020) developed a faster, \(O(nk)\) time, \(3\)-approximation algorithm. Note that \(3\) is still the best approximation factor known. Around the same time, Chiplunkar et al. (2020) presented the previously discussed \((3+\epsilon)\)-approximation two-pass streaming algorithm and a \((17+\epsilon)\)-Mapreduce algorithm. Yuan et al. (2021) study the fair \(k\) center problem _with outliers_ and described a \(4\)-approximation algorithm along with an \(18\)-approximation distributed algorithm. Very recently, Angelidakis et al. Angelidakis et al. (2022) combined the fairness constraint with a privacy constraint and proposed a new model called the _private and representative \(k\)-center_ where the privacy constraint means that every selected center has to cover at least a given amount of data. They designed a \(15\)-approximation algorithm for this new model. To conclude, we note that a different fairness constraint is studied in Chierichetti et al. (2017), where the solution requires that proportion of groups in each cluster must be similar to that in the whole. Some other related works using this other fairness constraint can be found in Bera et al. (2019); Bercea et al. (2019); Bera et al. (2022). ## 2 Notation and Terminology \(P\) will always denote a finite point set in some underlying known metric space \((\mathcal{X},d)\). **Definition 2.1**.: \(\mathcal{T}\subset 2^{P}\) is a partition of \(P\) if (1) \(P=\bigcup_{S\in\mathcal{T}}S\); and (2) \(\forall S_{1},S_{2}\in\mathcal{T}\), \(S_{1}\cap S_{2}=\emptyset\) Our results assume that the underlying metric space \((\mathcal{X},d)\) has finite _doubling dimension_. **Definition 2.2** (Doubling Dimensions).: The doubling dimension of metric space \((\mathcal{X},d)\) is the minimum value \(\dim(\mathcal{X})\) such that any ball \(B(x,r)\) in \((\mathcal{X},d)\) can be covered by \(2^{\dim(\mathcal{X})}\) balls of radius \(r/2\). It is known that the doubling dimension of the Euclidean space \((R^{D},\ell_{2})\) is \(\Theta(D)\) Heinonen et al. (2001). **Lemma 2.3**.: _(Krauthgamer & Lee, 2004) Let \((\mathcal{X},d)\) be a metric space and \(Y\subseteq\mathcal{X}\). The aspect ratio of the metric induced on \(Y\) is \(\frac{\max_{x,y\in Y}d(x,y)}{\min_{x,y\in Y}d(x,y)}\)._ _If the aspect ratio of \(Y\) is at most \(\Delta\) and \(\Delta\geqslant 2\), then \(|Y|\leqslant\Delta^{O(\dim(\mathcal{X}))}\)._ In the sequel, \(kCP\) and \(FkCP\) respectively denote the \(k\)-center problem and fair-\(k\) center problems. For \(P\subseteq\mathcal{X},\,r^{*}_{kC}(P)\) and \(r^{*}_{FkC}(P)\) respectively denote the optimal values of \(kCP\) and \(FkCP\). Trivially, \(r^{*}_{kC}(P)\leqslant r^{*}_{FkC}(P)\). ## 3 Coreset Technique The coreset paradigm is a well known and powerful tool for studying large data sets by summarizing them using smaller ones. For \(k\)-centers, a variant has previously been used to attack the \(k\) center problem with outliers (Ceccarello et al., 2019; Ding et al., 2023)). **Definition 3.1** (Coreset).: For \(P\subset X\), subset \(C\subset P\) is an \(\epsilon\)-coreset of \(P\) for \(FkCP\), if for every feasible set \(S\subset P\) of points, \[(1-\epsilon)\max_{p\in P}d(p,S)\leqslant\max_{p\in C}d(p,S)\leqslant(1+ \epsilon)\max_{p\in P}d(p,S).\] \(\epsilon\)-coresets will be small subsets that approximate the original set. More specifically, we will see later, that solving the \(FkCP\) on an \(\epsilon\)-coreset of \(P\) will, with some extra information, yield an approximate solution for \(P\). We will first need further definitions. **Definition 3.2** (\((r,\alpha)\)-net).: Let \((\mathcal{X},d)\) be a metric space. For fixed parameter \(r>0\), subset \(Y\subseteq\mathcal{X}\) is an \((r,\alpha)\)-net of \(\mathcal{X}\) if it satisfies: \(\bullet\)_(Packing Property:)_ For every \(x,y\in Y\), \(d(x,y)\geqslant r\); \(\bullet\)_(Covering Property:)_\(\forall x\in\mathcal{X}\), there exists at least one \(y\in Y\) such that \(d(x,y)\leqslant\alpha\cdot r\). When \(\alpha=1\), this is the well known \(r\)-net from (Heinonen et al., 2001). In \(FkCP\), the covering property will permit building an \(\epsilon\)-coreset from an \((r,\alpha)\)-net while the packing property restricts the number of points in the \((r,\alpha)\)-net. **Lemma 3.3**.: _Fix \(P\) and let \(r^{\prime}\leqslant r^{*}_{FkCP}(P)\). If \(Y\subset P\) is an \((\frac{\epsilon}{\alpha}r^{\prime},\alpha)\)-net, then \(Y\) is an \(\epsilon\)-coreset of \(P\). (see proof in appendix)_ While \(\epsilon\)-coresets as described do approximate \(P\), they have lost all group information. To remedy this, we need the further definitions. **Definition 3.4**.: Let \(P\) be fixed and \(Y\subset P\) be an \((r,\alpha)\)-net. \(Y\) is called \(\epsilon\)-proper if \(r\leqslant\frac{\epsilon}{\alpha}r^{*}_{FkC}(P)\). Further, for all \(y\in Y\) associate a neighborhood set \(N(y,r)\) such that \(\bullet\)\(y\in N(y,r)\) \(\bullet\) If \(p\in N(y,r)\), \(d(p,y)\leq\alpha r\). \(\bullet\)\(\mathcal{T}=\{N(y,r):y\in Y\}\) is a partition of \(P\) Such a \(\mathcal{T}\) always exists due to the covering property of \((r,\alpha)\)-nets but might not be unique. When discussing \(\epsilon\)-proper nets, we always assume an associated partition \(\mathcal{T}\). Note that \(C\subset Y\) might not be a coreset because it doesn't contain the correct number of points from each group. In that case, if \(Y\) is proper, we will be able to replace a point in \(y\in C\) with a point in \(N(y,r)\) that is close by. **Definition 3.5**.: Fix \(P\) and let \(Y\) be an \(\epsilon\)-proper \((r,\alpha)\)-net \(Y\). With every point \(x\in Y\) associate an \(m\) dimensional vector \(\text{Col}(x)=(\text{Col}_{1}(x),\text{Col}_{2}(x),...,\text{Col}_{m}(x))\) defined by \[\text{Col}_{i}(x)=\begin{cases}1&i\in\{g(p)\,:\,p\in N(x,r)\}\\ 0&\text{otherwise}\end{cases}\] Furthermore, when \(\text{Col}_{i}(x)=1\), a point \(y\in N(x,r)\) from group \(i\) will be stored in \(\text{Pot}(x,i)\) as follows: \[\text{Pot}(x,i)=\begin{cases}x&i=g(x)\\ \text{undefined}&\text{Col}_{i}(x)=0\\ \text{any point }p\in N(x,r)\text{ with }g(p)=i&\text{Otherwise}\end{cases}\] Finally, define \(\text{Pot}(x)=\{\text{Pot}(x,i)\,:\,\text{Col}_{i}(x)=1\}\). We require one further definition **Definition 3.6**.: Fix \(P\). Let \(Y\subset P\) be \(\epsilon\)-proper. We say \(y\) is from group \(i\) if \(\text{Col}_{i}(y)=1\) (\(\forall 1\leqslant i\leqslant m\)). \(S\subset Y\) is a _candidate feasible solution_ of \(Y\) if there exists a feasible set \(S^{\prime}\subseteq P\) such that \(\bullet\)\(S^{\prime}\subseteq\bigcup_{s\in S}N(s,r)\) \(\bullet\)\(|S^{\prime}\cap N(s,r)|=1\quad\forall s\in S\) Note that \(|S^{\prime}|=|S|\). We define the cost of the candidate feasible solution for \(Y\) is \(\max_{y\in Y}d(y,S)\). **Lemma 3.7**.: _Fix \(P\) and let \(Y\) be an \(\epsilon\)-proper \((r,\alpha)\)-net._ _If there exists a candidate feasible solution \(S\) with cost \(c\) and associated feasible \(S^{\prime}\) as defined in Definition 3.6 then \(S^{\prime}\) is a feasible solution in \(P\) with \(C(S^{\prime})\leq c+2\epsilon r^{*}_{FkC}(P)\) (see proof in appendix)_ We will now show that if we can solve \(FkCP\) on (a variant of) an \(\epsilon\)-proper \((r,\alpha)\)-net \(Y\) of \(P,\) something which will be very small, we can easily get a good approximate solution for \(FkCP\) on the original data set \(P\). **Lemma 3.8**.: _Let \(A\) be a \(\rho\)-approximation algorithm for \(FkCP\) and \(T_{A}(n)\) its running time on an input of size \(n\). Then, given an \(\epsilon\)-proper \((r,\alpha)\)-net \(Y\) for \(P,\) we can create a \(\rho(1+3\epsilon)r^{*}_{FkC}(P))\)-approximation algorithm for solving \(FkCP(P)\) in time \(T_{A}(m|Y|)+O(m|Y|)\)._ Proof.: Create a new set \(Y^{\prime}\) as follows. For each point \(y\in Y\) and each color \(i\) such that \(\text{Col}_{i}(y)=1\) add a new point \(y^{\prime}\) to \(y\). \(y^{\prime}\) will be at the same location as \(y\) and be in group \(i\). We say that \(y\) is _associated_ with \(y^{\prime}\). Note that \(|Y^{\prime}|=O(m|Y|)\). Let \(O=\{o_{1},...,o_{k}\}\) denote the optimal solution of \(FkCP(P)\). By the definition of \(Y,\,o_{t}\in N(y(o_{t}),r)\) for some \(y(o_{t})\in Y\). By the definition of \(Y^{\prime}\) there exists \(y^{\prime}(o_{t})\in Y^{\prime}\) (located at \(y(o_{t})\)) such that \(g(y^{\prime}(o_{t}))=g(o_{t})\) and \(d(y^{\prime}(o_{t}),o_{t})\leqslant\alpha r\). Now feed \(Y^{\prime}\) as input to the \(\rho\)-approximate \(FkCP\) algorithm. Call this algorithm \(A\). Let \(A(Y^{\prime})\) denote the value of the solution computed by algorithm \(A\) for input \(Y^{\prime}\). \(A(Y^{\prime})\leqslant\rho r_{FkC}^{*}(Y^{\prime})\). Because \(O\) is feasible, \(O^{\prime}=\{y^{\prime}(o_{1}),y^{\prime}(o_{2}),...,y^{\prime}(o_{k})\} \subset Y^{\prime}\) is feasible and \(\forall y^{\prime}\in Y^{\prime}\ d(y^{\prime},O^{\prime})\leqslant d(y^{ \prime},O)+\alpha r\leqslant r_{FkC}^{*}(P)+\alpha r\). Thus, \[A(Y^{\prime}) \leqslant\rho r_{FkC}^{*}(Y^{\prime})\leqslant\rho\max_{y^{ \prime}\in Y^{\prime}}d(y^{\prime},S^{\prime})\] \[\leqslant\rho(r_{FkC}^{*}(P)+\alpha r)\leqslant\rho(1+\epsilon) r_{FkC}^{*}(P).\] Finally, let \(\bar{S}\) be the actual feasible solution generated by algorithm \(A\) run on \(Y^{\prime}\) and \(S\subset Y\) the set of points associated with the points in \(\bar{S}\). For each \(y\in S\), arbitrarily choose one point \(y^{\prime}\) from \(\bar{S}\) associated with \(y\) and add \(\text{Pot}(x,g(y^{\prime}))\) to \(S^{\prime}\). Since \(\bar{S}\) is feasible (in \(Y^{\prime}\)), \(S^{\prime}\) is feasible (in \(P\)). This \(S^{\prime}\) witnesses that \(S\) is a candidate feasible solution of \(Y\). Furthermore, since \[A(Y^{\prime})=\max_{y^{\prime}\in Y^{\prime}}d(y^{\prime},S^{\prime})=\max_{y \in Y}d(y,S),\] the cost of \(S\) for \(y\) is \(\leq\rho(1+\epsilon)r_{FkC}^{*}(P)\). Plugging this \(S,S^{\prime}\) into Lemma 3.7 completes the construction. Note that all of the work performed other than calling \(A(Y^{\prime})\) can be implemented in \(O(m|Y|)\) time. Combining the last lemma with the \(O(kn)\)-time \(3\)-approximation JNN algorithm from (Jones et al., 2020) will yield good approximate solutions for \(FkCP(P)\) given an \(\epsilon\)-proper \((r,\alpha)\)-net of \(P\). It remains to construct such nets. When \(r\) is fixed, it is easy to construct an \((r,\alpha)\)-net \(Y\) from scratch. There are many scenarios, though, where it is more desirable to build the nets by _merging_ previously built ones. This occurs in both the MapReduce and streaming models. The following algorithm/lemma will be a useful tool when constructing new nets from old ones. **Lemma 3.9**.: _Let \(Y_{1}\) be an \(\epsilon\)-proper \((r,2\alpha)\)-net of \(P_{1}\) and \(Y_{2}\) an \(\epsilon\)-proper \((R,2\alpha)\)-net of \(P_{2}\). If \(2r\leqslant R\leqslant\frac{\epsilon}{2\alpha}r_{FkC}^{*}(P)\) and \(\alpha\geqslant 1\), \(Y^{\prime}\) constructed by Algorithm 1 is an \(\epsilon\)-proper \((R,2\alpha)\)-net of \(P_{1}\cup P_{2}\) whose \(\text{Col}\) and Pot vectors are accurately updated._ Proof.: Let \(Y_{1}^{\prime}=Y^{\prime}\cap Y_{1}\) be the points from \(Y_{1}\) added to \(Y^{\prime}\). Now let \(y,y^{\prime}\in Y^{\prime}\). If \(y,y^{\prime}\in Y_{2}\) then \(d(y,y^{\prime})\geq R\). If \(y\in Y_{2}\) and \(y^{\prime}\in Y_{1}^{\prime}\) then by construction, \(d(y,y^{\prime})>\alpha R>R\). If both \(y,y^{\prime}\in Y_{1}^{\prime}\) assume that \(y\) was added to \(Y^{\prime}\) before \(y^{\prime}\). Then, again, by construction, \(d(y,y^{\prime})>\alpha R>R\). So, in all cases, the packing condition \(d(y,y^{\prime})\geq R\) holds. To validate the covering condition, first assume that \(y\in P_{2}\). Then, because \(Y_{2}\) is an \(\epsilon\)-proper \((R,2\alpha)\)-net of \(P_{2}\), there exists \(y^{\prime}\in Y_{2}\subseteq Y^{\prime}\) such that \(d(y,y^{\prime})\leq 2\alpha R\). Next assume that \(y\in P_{1}\). Because \(Y_{1}\) is an \(\epsilon\)-proper \((r,2\alpha)\)-net of \(P_{1}\), there exists \(y^{\prime}\in Y_{1}\) such that \(d(y,y^{\prime})\leq 2\alpha r\). If \(y^{\prime}\in Y_{1}^{\prime}\) then, since \(2\alpha r\leq 2\alpha R\), the covering condition trivially holds. If \(y^{\prime}\not\in Y_{1}^{\prime}\), then there exists \(\bar{y}\in Y^{\prime}\) such that \(d(\bar{y},y^{\prime})\leq\alpha R\). But then, \[d(y,\bar{y})\leq d(y,y^{\prime})+d(y^{\prime},\bar{y})\leq 2\alpha r+\alpha R \leq 2\alpha R.\] Thus the covering condition always holds and \(Y^{\prime}\) is an \((R,2\alpha)\)-net of \(P_{1}\cup P_{2}\). It is proper because \(R\leqslant\frac{\epsilon}{2\alpha}r^{*}\). That the Col and Pot vectors are accurately updated for \(Y^{\prime}\) follows directly from the definitions and the fact that, if \(y\in Y_{1}\) is not added to \(Y^{\prime}\) because \(d(y,y^{\prime})\leq\alpha R\) for some \(y^{\prime}\in Y^{\prime}\), then all points from \(P_{1}\) in \(N(y,r)\) are within distance \(2\alpha R\) of \(y^{\prime}\). By Algorithm 1 and Lemma 3.9, when \(r\) is updated we can efficiently construct a new \(\epsilon\)-proper \((r,\alpha)\)-net of \(P\) from \(Y_{1},Y_{2}\) in time \(O(m|Y_{1}|\cdot|Y_{1}\cup Y_{2}|)\). In the next sections, we describe how to use these tools to construct an \(\epsilon\)-proper \((r,\alpha)\)-net of \(P\) in streaming and MapReduce settings. ``` 1:An \(\epsilon\)-proper \((r,2\alpha)\)-net \(Y_{1}\) of \(P_{1}\) and an \(\epsilon\)-proper \((R,2\alpha)\)-net \(Y_{2}\) of \(P_{2}\). 2:Set \(Y^{\prime}=Y_{2}\) 3:for each \(y\in Y_{1}\)do 4:if there exists \(y^{\prime}\in Y^{\prime}\) such that \(d(y,y^{\prime})\leqslant\alpha R\)then 5:for\(1\leqslant i\leqslant m\)do 6:if \(\text{Col}_{i}(y^{\prime})=0\) and \(\text{Col}_{i}(y)=1\)then 7:\(\text{Col}_{i}(y^{\prime})=1\) and \(\text{Pot}(y^{\prime},i)=\text{Pot}(y,i)\) 8:endif 9:endfor 10:else 11:\(Y^{\prime}=Y^{\prime}\cup\{y\}\) 12:endif 13:endfor ``` **Algorithm 1** Construct \(\epsilon\)-proper \((R,2\alpha)\)-net \(Y^{\prime}\) of \(P_{1}\cup P_{2}\) ## 4 The MapReduce Setting In the MapReduce model of computation, the set \(P\) of points to be clustered is distributed equally among \(\ell\)_processors_. Each processor is allowed restricted access to the metric \(d\): it may only compute the distance between only its own points. Each processor performs some computation on its set of points and sends a summary of small size to a _coordinator_. From the summaries, the coordinator then computes a globally feasible set \(S\) of points which covers all the points in \(P\) within a small radius. Let \(P_{t}\) denote the set of points distributed to processor \(t\). ### Robust Setting Firstly, given any constant \(\epsilon>0\), we present a \(3(1+\epsilon)\)-approximation algorithm in the MapReduce setting. In this subsection, robustly set a target ratio \(3(1+\epsilon)\) in advance and define \(\bar{\epsilon}=\epsilon/3\). The algorithm constructs a coreset with size \(O(k\ell(8/\bar{\epsilon})^{D})\) where \(D\) is the doubling dimension of the metric space and \(\ell\) is the number of processors in the MapReduce setting. **Input:** Set \(P_{i}\), metric \(d\) restricted to \(P_{i}\), group assignment function \(g\) restricted to \(P_{t}\) ``` 1:Arbitrarily select a point \(p_{1}^{t}\) from \(P_{t}\) and set \(S_{t}=Y_{t}=\{p_{1}^{t}\}\) 2:for\(j=2\) to \(k\)do 3: Compute \(p_{j}^{t}\leftarrow\arg\max_{p\in P_{t}}d(p,S_{t})\); 4: Set \(S_{t}=S_{t}\cup\{p_{j}^{t}\}\) 5:endfor 6: Compute \(r_{t}=\frac{1}{8}\max_{p\in P_{t}}d(p,S_{t})\) 7: Set \(\text{Col}_{g(p_{1}^{t})}(p_{1}^{t})=1\) and \(\text{Pot}(p_{1}^{t},g(p_{1}^{t}))=p_{1}^{t}\) 8: Set \(\text{Col}_{i}(p_{1}^{t})=0\) (\(\forall i\neq g(p_{1}^{t})\)) 9:for each \(p\in P_{t}\)do 10:if there exists \(y\in Y_{t}\) such that \(d(p,y)\leqslant 2\bar{\epsilon}r_{t}\)then 11:if\(\text{Col}_{g(p)}(y)=0\)then 12:\(\text{Col}_{g(p)}(y)=1\) and \(\text{Pot}(y,g(p))=p\) 13:endif 14:else 15:\(Y_{t}=Y_{t}\cup\{p\}\); 16: Set \(\text{Col}_{g(p)}(p)=1\) and \(\text{Pot}(p,g(p))=p\) 17: Set \(\text{Col}_{i}(p)=0\) (\(\forall i\neq g(p)\)) 18:endif 19:endfor 20: Send \((Y_{t},r_{t})\) to the coordinator, where each \(y\in Y_{t}\) associates with a vector \(\text{Col}(y)\) and a set \(\text{Pot}(y)\). ``` **Algorithm 2** Computation by the \(t\)'th Processor **Lemma 4.1**.: _Algorithm 2 computes an \(\bar{\epsilon}\)-proper \((\bar{\epsilon}r_{t},2)\)-net \(Y_{i}\) of given \(P_{t}\), where \(|Y_{t}|=O(k(8/\bar{\epsilon})^{D})\). (see proof in appendix)_ Since each point \(y_{t}\in Y_{t}\) has an associated set \(\text{Pot}(y_{t})\), by Lemma 4.1 processor \(t\) sends \(O(mk(8/\bar{\epsilon})^{D})\) points to the coordinator. After receiving information from all processors, the coordinator will use Lemma 3.9 to compute an \(\bar{\epsilon}\)-proper net \(Y\) of the input set \(P\) and solve \(FkPC\) in this coreset. To use the lemma, we first need to lower bound \(r_{kC}^{*}(P)\). **Lemma 4.2**.: \(\forall Q\subset P\)_, let \(S\) and \(A(Q)\) respectively denote the solution set and the value returned by the \(2\)-approximation greedy \(kPC\) algorithm (Gonzalez, 1985) when running on \(Q\) (recall that this is lines 2-5 of Algorithm 2). Then \(A(Q)\leqslant 2\cdot r_{kPC}^{*}(P)\). (see proof in appendix)_ **Lemma 4.3**.: _Algorithm 3_ _returns an \(\bar{\epsilon}\)-proper \((\bar{\epsilon}R,2)\) net \(Y\) of \(P\) in time \(O(m\ell k^{2}(8/\bar{\epsilon})^{2D})\). (see proof in appendix)_ After each processor runs Algorithm 2 and the coordinator runs Algorithm 3 the coordinator then uses Lemma 3.8 with the \(3\)-approximation JNN algorithm (Jones et al., 2020) for the fair \(k\)-center problem. When \(\bar{\epsilon}=\epsilon/3\), this immediately returns a \(3(1+\epsilon)\)-approximate solution to the fair \(k\)-center problem on \(P\). Recall that the running time of the JNN algorithm is \(O(|X|k)\) where \(|X|\) is the number of points in the input set. The coordinator receives \(O(\ell k(24/\epsilon)^{D})\) points and the \(Y\) outputted by Algorithm 3 is a subset of these. Hence, the use of Lemma 3.8 requires only \(O(m\ell k^{2}(24/\epsilon)^{D})\) time. ### A Practical Heuristic The size of the coreset in our algorithm can be viewed as a parameter that affects both the memory usage and the approximation ratio. Until now, we focused on fixing the worst-case approximation ratio and let that specify the memory required. In practice, we can deal with this parameter more flexibly. In real-world implementations, memory-space memory can be restricted. Inspired by a similar approach in (Ceccarello et al., 2019), we thus slightly modify our algorithm and use permitted memory size itself as an input, instead of the approximation ratio. Our new algorithm (heuristic) will start by restricting the size of the coreset to some given value \(Q\) (w.l.o.g., assume \(Q>k\)). We now describe the procedure and also show that this coreset becomes an \(\epsilon\)-coreset when \(Q\) is large enough. This new algorithm is two phases but is even easier to implement. During the first phase, after receiving the point set \(P_{t}\), each processor \(t\) uses the \(2\)-approximation greedy algorithm from (Gonzalez, 1985) to solve the \(Q\)-center problem on \(P_{t}\). This generates a solution set \(Y_{t}\) of \(Q\) points. Each point \(p\in P_{t}\) is then assigned to its closest point \(y^{t}\in Y_{t}\). All points that are assigned to the same center \(y^{t}_{j}\in Y_{t}\) form a cluster \(X^{t}_{j}\). By definition, \(\mathcal{T}_{t}=\{X^{t}_{j}:y^{t}_{j}\in Y_{t}\}\) is a partition of \(P_{t}\). We then, as in definition 3.5, construct vector \(\text{Col}(y)\) and set \(\text{Pot}(y)\) for each \(y\in Y_{t}\). Each processor \(t\) then sends \(Y_{t}\) along with the associated vector \(\text{Col}(y)\) and sets \(\text{Pot}(y)\) for all \(y\in Y_{t}\), to the coordinator. \(Y_{t}\) is a solution of the \(Q\)-center problem, so \(|Y_{t}|\leq Q\). The process concludes by having the coordinator directly run the JNN algorithm on \(\bigcup_{t}\bigcup_{t\in Y_{t}}\text{Pot}(y)\) to construct a feasible solution. **Theorem 4.4**.: _When \(Q\) is large enough, the heuristic is a \(3(1+\epsilon)\) approximation algorithm. (see proof in appendix)_ ## 5 The Dynamic/Streaming Setting For \(t\leqslant n=|P|\), let \(P(t)\) denote the set of first \(t\) points read and \(r^{*}(t)\) the optimal value of \(FkCP\) on \(P_{t}\). ### Robust Setting In order to use our techniques in the streaming setting we will need a lower bound on \(r^{*}(t)\). Such a bound already exists. More specifically, (Charikar et al., 2004) provide an incremental algorithm that maintains such a lower bound \(r(t)\) of \(r^{*}(t)\). Their algorithm actually maintains a solution set \(S(t)\), \(|S(t)|\leqslant k\) such that (1) \(P(t)\subset\bigcup_{s\in S(t)}B(s,8r(t))\); (2) \(\forall s_{1},s_{2}\in S(t)\ d(s_{1},s_{t})>4r(t)\); (3) \(\forall t\ r(t)\leqslant r^{*}(t)\); and (4) \(r(t+1)=2^{\lambda}r(t)\) where \(\lambda\) is a non-negative integer and computed by the incremental algorithm. When we use this incremental algorithm as a subroutine, robustly set a target ratio \(3(1+\epsilon)\) and define \(\bar{\epsilon}=\epsilon/3\), we can incrementally maintain an \(\bar{\epsilon}\)-proper \((r,2)\) net \(Y\) of \(P\). **Lemma 5.1**.: _Algorithm 4 computes an \(\bar{\epsilon}\)-proper \((\frac{\bar{\epsilon}}{2}r(t),2)\)-net \(Y_{t}\) of \(P_{t}\), where \(|Y_{t}|=O(k(32/\bar{\epsilon})^{D})\). (see proof in appendix)_ Finally, similar to the previous section, conclude by using Lemma 3.8 with \(\bar{\epsilon}=\epsilon/3\) and calling the \(3\)-approximation JNN algorithm (Jones et al., 2020) for the \(k\)-center problem. This returns a \(3(1+\epsilon)\) approximation solution in at most \(O(mk^{2}(96/\bar{\epsilon})^{D})\) time. ### A Practical Heuristic As in Section 4.2, we slightly modify our algorithm and use memory space instead of the target approximation-ratio as an input parameter. Again as in Section 4.2, our new algorithm (heuristic) will start by restricting the size of the coreset to some given value \(Q\) (w.l.o.g., assume \(Q>k\)). We next directly apply the incremental algorithm (Charikar et al., 2004) to solve the \(Q\)-center problem on the data stream. Different from (Charikar et al., 2004), each center \(x\) will now have an associated \(\text{Col}(x)\) function and a set \(\text{Pot}(x)\). At each step the algorithm also updates the group information associated with this \(Q\)-center. Due to space limitations, we describe the details of the heuristic algorithm in the appendix. Our heuristic algorithm then runs JNN algorithm (Jones et al., 2020) on the coreset constructed to generate a feasible solution. As in Section 4.2, we show that for large enough \(Q\) the heuristic is a \((3+\epsilon)\) approximation algorithm. **Theorem 5.2**.: _When \(Q\) is large enough, the heuristic is a \(3(1+\epsilon)\) approximation algorithm. (see proof in appendix)_ ## 6 Experiments In this section, we run experiments to evaluate the performance of our heuristic one-pass and MapReduce algorithm on some real-world datasets and a massive synthetic dataset. Though the theoretical guarantee of \(3+\epsilon\) for both algorithms requires the low dimensionality condition, i.e., bounded doubling dimension, condition, the results are still very good for the high dimensional datasets that do not satisfy those conditions. The one-pass algorithm, despite being incremental, achieves a similar performance ratio but with _faster running time and lower memory usage_ compared to the previous best algorithms. The MapReduce algorithm outputs the smallest cost solution in most experiments, while exhibiting _a much better ratio_ for the low dimensional case. ### Datasets We used the same datasets and preprocessing methods as (Chiplunkar et al., 2020), including three real world datasets: CelebA, Sushi, Adult, and one synthetic dataset: Random Euclidean. All of them use the \(\ell_{1}\) metric with the exception of SushiA where the pairwise distance between ranking orders is calculated by the number of inverse pairs. **Sushi**(Kamishima) contains \(5\,000\) responses to a sushi preference survey. There are two types of evaluations given: **SushiA** contains the ranking order of \(10\) kinds of sushi, and **SushiB** contains the score of \(50\) kinds of sushi. The attributes given are gender and six age groupings; this results in 2 groups (gender) 1, 6 groups (age), or 12 groups (gender \(\times\) age). Footnote 1: We sincerely apologize for any offense caused by the binary classification of “male” and “female” in the group representations. **Adult**(Kohavi & Becker) contains \(32\,561\) data points extracted from the 1994 US Census database in which education, occupation and other aspects are covered, and will be considered as \(6\)-dimensional features after normalizing. Using gender (2) and race (5) information, this generates groups with sizes 2, 5 and 10. **CelebA**(Liu et al., 2015) contains \(202,599\) face images After preprocessing, the \(15\,360\) dimensional features are extracted via pre-trained VGG16 using Keras, and groups are divided by gender (2 groups), or gender \(\times\) {young, not young} (4 groups). Since it is extremely high dimensional, it can test scalability of our algorithm. **Random Euclidean**(Chiplunkar et al., 2020) is a synthetic 100GB dataset designed by Chiplunkar et al. It contains \(4\,000\,000\) uniformly generated points in \(1\,000\)-dimensional Euclidean space, each randomly assigned with to of 4 groups. It is useful to illustrate the performance of algorithms when input data is larger than memory. ### Implementation Details The experiments were run on a PC with AMD Ryzen 7 2700X Processor @ 3.7GHz, 32GB Memory and 500GB Solid-State Drive. We used Python to implement the algorithms, and ran experiments on the several real datasets and massive synthetic dataset previously described. **Previous algorithms** We adopted and refined Chiplunkar et al. implementation2 in order to compare algorithms _fairly_: the reproduction of their results ensures our comparisons are reliable. Footnote 2: [https://github.com/sagark4/fair_k_center](https://github.com/sagark4/fair_k_center) In this section, the streaming algorithm and the distributed algorithm from (Chiplunkar et al., 2020) are respectively labeled as **Two Pass** and **CK Distributed**. According to (Chiplunkar et al., 2020), these two algorithms generally outperform (Chen et al., 2016) and (Kleindessner et al., 2019). We therefore compare our algorithms directly to (Chiplunkar et al., 2020)'s algorithms, keeping the parameters the same, e.g., \(\epsilon=0.1\), as they used. We also followed their format of using the cost of the output of (their implementations of) Gonzalez's algorithm as the **Lower Bound** that all of the other algorithms are compared to. **Our Implementations** (1) In our implementation of the heuristic One Pass algorithm the coreset size is set to a constant \(240\). This was chosen to be divisible by the number of processors and the sum of group sizes, and also to let the two streaming algorithms use comparable memory. (2) Our MapReduce algorithm is implemented by a multiprocessing library on a single machine. The number of processors is set to \(10\) to fit the CPU capacity. For the first three datasets the size of the coreset collected by the coordinator is the same as in One Pass (\(240\)), but for Random Euclidean, we used \(800\) as a coreset size to better utilize the simulated \(100\) processors, where the number is chosen so that two distributed algorithms will send exactly the same number (3 200) of points to the coordinator. ### Results To evaluate the scalability of streaming algorithms, we use the first \(32\,500\) points in dataset Adult; each group was allowed at most 10 centers (denoted by capacities \([10,10]\), i.e., \(10\) men and \(10\) women). We require the algorithm to report a solution after completing reading a multiple of \(2\,500\) points. Note that One Pass is updating the coreset after reading each point so far, after reading a multiple of \(2\,500\) points and reporting an approximate \(k\)-center solution using the JNN 3 algorithm, it can continue with the new points without having to backtrack and reprocess the old ones again. The reported running time of the One Pass at each checkpoint is then just the time to update the coresets and then to calculate the approximate \(k\)-centers. By comparison, Two Pass has to rerun the algorithm on the whole data set from the scratch. To make the comparisons between the algorithms more realistic, we also calculate the entire running time of One Pass if it started from scratch on the dataset up to that point. Footnote 3: We write our implementation for JNN because we fail to find JNN’s original implementation.This calls a maxflow subroutine from the networkx library. The results are shown in Figure 1. As input size grows, the two algorithms have similar solution quality when One Pass is set to use only half of Two Pass's memory. Meanwhile, One Pass shows a significantly higher efficiency over Two Pass since it can incrementally maintain coresets and obtain solutions upon request anytime. It is worth noting that One Pass remains faster than Two Pass even if it is required to run from scratch. We then ran experiments on all of the datasets. Table 1 compares the time and memory used by the streaming algorithms on the different datasets. To further contrast their efficiency, we also listed the time used by JNN algorithm, which used \(O(n)\) memory to achieve current performance: it consumed \(24\) GB memory to store points when running CelebA dataset. The two streaming algorithms are both much faster than JNN, and One Pass is much faster than Two Pass for large data. We remark that in the massive case, i.e., the Random Euclidean with \(4\,000\,000\) points experiments, our One Pass only requires \(23\) minutes, while just processing the input points needs \(21.8\) minutes. It's also noticeable that the One Pass algorithm can better utilize given memory. The memory usage of Two Pass highly depends on the aspect ratio \(\Delta\) of the data set; it uses little memory on the Random Euclidean dataset since its \(\Delta\) is quite small (about \(2.16\)) and uses much more memory for larger \(\Delta\) in the other, real, datasets. Comparatively, One Pass is more adaptive to a fixed given coreset size. In the middle three columns of Table 2, we compare the costs of different single-threaded algorithms having similar theoretical guaranteesWe also observe a similar experimental performance for them, while JNN usually generates the smallest cost solution. The last two columns of Table 2 compare the two distributed algorithms. Both algorithms are fast: MapReduce took \(23\) minutes on Random Euclidean and CKR Distributed took \(27\) minutes. We do not compare the precise timing results for these two distributed algorithms, as we did not simulate the IO process in a realistic distributed environment. Therefore, the running time in our experiment may not provide much insight about the efficiency of the two algorithms in a real-world setting. ## 7 Future Direction In this paper, we propose a coreset-based algorithm framework for the fair \(k\)-center problem. By Lemma 3.8, our approximation ratio for both the dynamic incremental and MapReduce algorithms will always be essentially the same as that of the best static algorithm, which is currently \(3\). Any new improved static algorithm would therefore immediately translate into an improvement to our algorithms. The current state of the art is that it is unknown whether \(3\) is the best approximation that could be attained for the static fair \(k\)-center problem. This needs to be further investigated. In addition, our coreset techniques currently strongly require metrics with finite doubling dimensions. Further work is needed to develop algorithms that do not have this requirement. Finally, our dynamic algorithm is only incremental and does not permit deletions. It would be useful to develop a fully dynamic fair \(k\)-center algorithm. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Size} & \multirow{2}{*}{Capacities} & \multicolumn{3}{c|}{Time (seconds)} & \multicolumn{2}{c|}{Memory (points)} \\ \cline{3-7} & & & JNN & Two Pass & One Pass & Two Pass & One Pass \\ \hline \multirow{3}{*}{SauhiA} & \([10,10]\) & \(12.70\) & \(5.88\) & \(\mathbf{3.92}\) & \(4.64\) & \(\mathbf{425}\) \\ \cline{2-7} & & \([10,10]\) + \(6\) & \(34.41\) & \(16.13\) & \(\mathbf{42.14}\) & \(1766\) & \(\mathbf{933}\) \\ \cline{2-7} & & \([5]\) + \(12\) & \(41.44\) & \(\mathbf{15.96}\) & \(17.07\) & \(2134\) & \(\mathbf{1489}\) \\ \hline \multirow{3}{*}{SauhiB} & \([10,10]\) & \(8.32\) & \(\mathbf{2.37}\) & \(2.62\) & \(\mathbf{250}\) & \(265\) \\ \cline{2-7} & & \([5]\) + \(12\) & \(11.97\) & \(7.20\) & \(\mathbf{200}\) & \(789\) & \(\mathbf{733}\) \\ \cline{2-7} & & \([5]\) + \(12\) & \(11.97\) & \(7.20\) & \(\mathbf{1.87}\) & \(\mathbf{808}\) & \(960\) \\ \hline \multirow{3}{*}{Adult} & \([10,10]\) & \(57.88\) & \(59.36\) & \(\mathbf{16.69}\) & \(825\) & \(\mathbf{378}\) \\ \cline{2-7} & & \([10,10]\) + \(5\) & \(114.1\) & \(84.34\) & \(\mathbf{30.06}\) & \(2516\) & \(\mathbf{573}\) \\ \cline{2-7} & & \([5]\) + \(10\) & \(113.14\) & \(84.35\) & \(\mathbf{30.08}\) & \(2971\) & \(\mathbf{948}\) \\ \hline CelebA & \(202590\) & \([10,10]\) & \(2052\) & \(1350\) & \(\mathbf{501.2}\) & \(516\) & \(\mathbf{431}\) \\ \hline Random Euclidean & \(466\) & \([2]\) + \(4\) & – & \(5191\) & \(\mathbf{1383}\) & \(\mathbf{52}\) & \(314\) \\ \hline \end{tabular} \end{table} Table 1: Time and memory for streaming algorithm on all datasets. JNN algorithm could not finish in a reasonable time for Random Euclidean. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Capacities} & \multicolumn{3}{c|}{Lower} & \multirow{2}{*}{JNN} & \multirow{2}{*}{Two Pass} & \multirow{2}{*}{One Pass} & \multicolumn{2}{c|}{CKR} & \multirow{2}{*}{Map} \\ \cline{3-7} & & & JNN & Two Pass & & & & Distributed & Reduce \\ \hline \multirow{3}{*}{SauhiA} & \([10,10]\) & \(8.00\) & \(2.00\) & \(2.38\) & \(2.12\) & \(2.50\) & \(2.12\) \\ \cline{2-7} & \([10]\) + \(6\) & \(6.50\) & \(2.15\) & \(2.46\) & \(2.46\) & \(2.62\) & \(2.15\) \\ \cline{2-7} & \([5]\) + \(12\) & \(6.50\) & \(2.31\) & \(2.31\) & \(2.15\) & \(2.77\) & \(\mathbf{2.15}\) \\ \hline \multirow{3}{*}{SauhiB} & \([10,10]\) & \(34.00\) & \(2.03\) & \(1.85\) & \(\mathbf{1.71}\) & \(1.94\) & \(2.06\) \\ \cline{2-7} & \([10,10]\) + \(30.50\) & \(1.93\) & \(1.97\) & \(2.07\) & \(1.97\) & \(1.93\) \\ \cline{2-7} & \([5]\) + \(12\) & \(30.50\) & \(1.97\) & \(1.97\) & \(1.97\) & \(1.97\) & \(1.97\) \\ \hline \multirow{3}{*}{Adult} & \([10,10]\) & \(4.01\) & \(2.08\) & \(2.41\) & \(2.38\) & \(2.78\) & \(2.12\) \\ \cline{2-7} & \([10,10]\) + \(5\) & \(30.44\) & \(\mathbf{2.45}\) & \(2.54\) & \(2.57\) & \(2.93\) & \(2.51\) \\ \hline CelebA & \([10,10]\) & \(40796\) & \(1.89\) & \(1.99\) & \(2.00\) & \(1.91\) & \(1.81\) \\ \hline Random Euclidean & \([2]\) + \(4\) & – & – & \(\mathbf{3.45647}\) & \(3.45607\) & \(3.47567\) & \(3.46167\) \\ \hline \end{tabular} \end{table} Table 2: Costs on all datasets. Each column after the third corresponds to an algorithm and shows the ratio of its cost and Gonzalez’s lower bound. The shaded values indicate the ratios to Lower Bound if available, darker is better. Figure 1: Checkpoint running comparison for dataset Adult with capacities: \([10,10]\). Time is the average of \(3\) runs.
2301.10910
Periodic Multi-Agent Path Planning
Multi-agent path planning (MAPP) is the problem of planning collision-free trajectories from start to goal locations for a team of agents. This work explores a relatively unexplored setting of MAPP where streams of agents have to go through the starts and goals with high throughput. We tackle this problem by formulating a new variant of MAPP called periodic MAPP in which the timing of agent appearances is periodic. The objective with periodic MAPP is to find a periodic plan, a set of collision-free trajectories that the agent streams can use repeatedly over periods, with periods that are as small as possible. To meet this objective, we propose a solution method that is based on constraint relaxation and optimization. We show that the periodic plans once found can be used for a more practical case in which agents in a stream can appear at random times. We confirm the effectiveness of our method compared with baseline methods in terms of throughput in several scenarios that abstract autonomous intersection management tasks.
Kazumi Kasaura, Ryo Yonetani, Mai Nishimura
2023-01-26T02:40:56Z
http://arxiv.org/abs/2301.10910v2
# Periodic Multi-Agent Path Planning ###### Abstract Multi-agent path planning (MAPP) is the problem of planning collision-free trajectories from start to goal locations for a team of agents. This work explores a relatively unexplored setting of MAPP where _streams_ of agents have to go through the starts and goals with high throughput. We tackle this problem by formulating a new variant of MAPP called periodic MAPP in which the timing of agent appearances is periodic. The objective with periodic MAPP is to find a periodic plan, a set of collision-free trajectories that the agent streams can use repeatedly over periods, with periods that are as small as possible. To meet this objective, we propose a solution method that is based on constraint relaxation and optimization. We show that the periodic plans once found can be used for a more practical case in which agents in a stream can appear at random times. We confirm the effectiveness of our method compared with baseline methods in terms of throughput in several scenarios that abstract autonomous intersection management tasks. 1 OMRON SINIC X Corporation Hongo 5-24-5, Bunkyo-ku, Tokyo, Japan [email protected], [email protected], [email protected] ## Introduction Multi-agent path planning (MAPP) refers to the problem of finding a set of collision-free trajectories from start to goal locations for a team of multiple agents. MAPP, specifically multi-agent pathfinding (MAPF) that searches for a solution on a given graph, is a fundamental problem in multi-agent systems [1]. We are particularly interested in the relatively unexplored problem of MAPP in which, rather than a single agent, _a stream of agents_ enters each start location and leaves the environment upon reaching the goal. Instead of finding a set of feasible trajectories with a small total cost, we aim to improve the throughput for agent streams passing through the environment. Such settings would be beneficial for several practical applications, such as autonomous intersection management (AIM) [12] and automated warehouses [23]. Handling agent streams in such a problem setting poses a nontrivial technical challenge. As the throughput increases, the environment would be filled with a large number of agents, making it difficult to use optimal planning algorithms with limited scalability (_e.g._, conflict-based search [1]). It is also not obvious if the high throughput can be maintained with scalable planners that nevertheless have to determine agent trajectories adaptively in sequence (_e.g._, prioritized planning [13]). Furthermore, finding collision-free trajectories in highly-crowded environments would require consideration of planning in the continuous space (_i.e._, not grid maps) and with continuous time (_i.e._, allowing agents to start and stop at an arbitrary timing in a continuous timeline). However, such continuous setups are generally challenging and there are few established approaches [1, 1, 13]. In this work, we start by formulating a bit simplified but new variant of MAPP called _periodic multi-agent path planning (periodic MAPP)_ in which the timing of agent appearances is periodic. The objective with periodic MAPP is to find a _periodic plan_, _i.e._, a set of collision-free trajectories that streams of agents can use repeatedly over periods. By finding such plans with periods that are as small as possible, we are able to improve the throughput of agent streams. Importantly, periodic plans once found can be easily used for solving a more practical problem called online MAPP, a variant of online MAPF [23] in which a stream of agents can appear at random times but can also wait until the subsequent agents enter the environment. We develop a constraint-relaxation and optimization method as a solution method to periodic MAPP. With this method, we first generate an initial periodic plan under relaxed constraints about the physical size of agents with an arbitrarily large period they appear. We then solve a continuous optimization problem to improve the initial plan such that the agent size matches the original one and the period becomes as small as possible. Therefore, our method can find a collision-free and repeatable plan while minimizing the period. We provide insights into the topological aspect of solutions obtained with the proposed method. We evaluated the effectiveness of our method on several scenarios of abstracting AIM tasks, in which the goal is to move vehicles appearing at intersections to the other side without collision. Unlike existing methods that require planning or re-planning for every appearance of a new vehicle, the proposed method using periodic plans does not necessitate communications with other vehicles to retrieve their current locations or to update their trajectories. Nev ertheless, the solutions derived from the proposed method are comparably good or sometimes even better in terms of the throughput, compared with those from baseline methods that combine online MAPP algorithms (Svancara et al., 2019) and MAPP algorithms for continuous spaces and times (Andreychuk et al., 2022, 2021; Kasaura, Nishimura, and Yonetani, 2022). ## Periodic MAPP Overview.We consider a two-dimensional (2D) environment with several pairs of start and goal locations. For each start location, a stream of agents appears periodically with a user-defined _period_ (_i.e._, time interval). Each agent must move to its goal while avoiding collisions with the borders of the environment and other agents and disappear from the environment upon reaching the goal. We assume that there exists a certain _cycle_, the number of periods within which we can find a _periodic plan_ (_i.e._, a set of collision-free trajectories that can be used periodically over cycles). Therefore, a periodic plan may span multiple periods as collision-free trajectories for agents appearing at the same locations that are not necessarily the same across periods (see agents shown in red/orange or those in blue/cyan in Fig. 1.) Informally, the objective with periodic MAPP is to find such periodic plans for a given cycle with periods that are as small as possible. In other words, we wish to produce a high throughput plan that enables us to move agents from their respective start to the goal even if they appear in rapid succession. Note that, if the non-periodic version of a given problem instance (_i.e._, standard MAPP with a single agent appearing from each start) has a solution, the problem instance has a periodic plan for any cycle with the period given by the arrival time of the last agent. For simplicity, we assume that all agents have bodies modeled by circles with the same fixed radius and follow a simple kinematic constraint in which the velocity cannot exceed a certain maximum limit. Problem instances.Formally, we model a problem instance of periodic MAPP by a tuple \((\mathcal{E},\{(s_{1},g_{1}),\ldots,(s_{N},g_{N})\},r,v_{\max})\), where \(\mathcal{E}\subseteq\mathbb{R}^{2}\) is a set of valid states in the 2D environment. The set \(\{(s_{1},g_{1}),\ldots,(s_{N},g_{N})\}\), where \(s_{n},g_{n}\in\mathcal{E}\), describes \(N\) pairs of start and goal locations for agent streams. The variables \(r\) and \(v_{\max}\) are the radius and maximum velocity of each agent, respectively. Periodic plans.We refer to a _period_ as \(\tau\in\mathbb{R}_{+}\), a time interval with which a new set of agents can appear at respective start locations \(s_{1},\ldots,s_{N}\). While denoting _cycle_ as \(M\in\mathbb{N}_{+}\), we describe a _periodic plan with \(M\) periods_ by a set of \(M\times N\) trajectories \(\Gamma_{M}=(\gamma_{n,m}:[0,T_{n,m}]\rightarrow\mathcal{E})_{1\leq n\leq N,0 \leq m<M}\). The periodic plan should satisfy the following conditions: * (Start and goal locations) For all \(1\leq n\leq N\), \(0\leq m<M\), \(\gamma_{n,m}(0)=s_{n}\), and \(\gamma_{n,m}(T_{n,m})=g_{n}\). * (Maximum velocity) For all \(1\leq n\leq N\), \(0\leq m<M\), and \(t\in[0,T_{n,m}]\), the velocity of agents satisfies \[\left|\frac{d\gamma_{n,m}}{dt}(t)\right|\leq v_{\max}.\] (1) * (Clearance from boundaries) Let \(\mathrm{dist}_{\mathcal{E}}(x)\) be the distance of \(x\in\mathcal{E}\) from the boundary of \(\mathcal{E}\). Then, for all \(1\leq n\leq N\), \(0\leq m<M\), and \(t\in[0,T_{n,m}]\), \[\mathrm{dist}_{\mathcal{E}}(\gamma_{n,m}(t))\geq r.\] (2) * (Collision-free)1 For all \(1\leq n,n^{\prime}\leq N\), \(0\leq m,m^{\prime}<M\), and \(t\in[0,T_{n,m}]\), \(t^{\prime}\in[0,T_{n^{\prime},m^{\prime}}]\) such that \((m-m^{\prime})\tau+(t-t^{\prime})\in M\tau\mathbb{Z}\) and \((n,m,t)\neq(n^{\prime},m^{\prime},t^{\prime})\), \[|\gamma_{n,m}(t)-\gamma_{n^{\prime},m^{\prime}}(t^{\prime})|\geq 2r.\] (3) Footnote 1: The agent appearing at \(s_{n}\) at time \((aM+m)\tau\) follows the trajectory \(\gamma_{n,m}\) for any \(a\in\mathbb{Z}\). Since there exists an agent at \(\gamma_{n,m}(t)\) when the time is \(\ldots,(-2M+m)pr+t,(-M+m)\tau+t,m\tau+t,(M+m)\tau+t,(2M+m)\tau+t,\)., where exist two agents at \(\gamma_{n,m}(t)\) and \(\gamma_{n^{\prime},m^{\prime}}(t^{\prime})\) at the same time if \((m-m^{\prime})\tau+(t-t^{\prime})\in M\tau\mathbb{Z}\), except when \((n,m,t)=(n^{\prime},m^{\prime},t^{\prime})\). To avoid a collision between them, their distance must be at least \(2r\). Objective of periodic MAPP.Given a problem instance \((\mathcal{E},\{(s_{1},g_{1}),\ldots,(s_{N},g_{N})\},r,v_{\max})\) and a cycle \(M\), our objective is to find periodic plans \(\Gamma_{M}\) with periods \(\tau\) that are as small as possible. ## Solution Method In this section, we explain the proposed solution method for producing periodic plans for periodic MAPP. Specifically, we use a two-step approach that first derives initial solution trajectories for a relaxed problem that ignores the constraints about \(r\) and the objective for \(\tau\). We then optimize them by solving a continuous optimization problem to satisfy all the original conditions and improve the solution quality with respect to \(\tau\). This is a reasonable approach to derive solution trajectories in a continuous space and time setup while aiming to minimize the continuous period value that affects the solution. ### Trajectory Representation We represent each trajectory \(\gamma_{n,m}\) by a sequence of \(K+1\) timed locations with a timestep \(\Delta t_{n,m}\), _i.e._, \(((x_{n,m,0},0),(x_{n,m,1},\Delta t_{n,m}),\ldots,(x_{n,m,K},K\Delta t_{n,m}))\) where \(x_{n,m,k}\in\mathcal{E}\), \(x_{n,m,0}=s_{n}\), \(x_{n,m,K}=g_{n}\), and \(K\Delta t_{n,m}=T_{n,m}\). Agents are assumed to move between Figure 1: Example of periodic MAPP problem with \(N=2\) (left) and periodic plans with \(M=1\) (middle) and \(M=2\) (right). Solid lines show trajectories. Numbered circles indicate where agents are at each elapsed period after their appearance. two locations \(x_{n,m,k},x_{n,m,k+1}\) in a straight line with constant velocity. This discretized representation should also satisfy the maximum velocity condition in Eq. (1) that is rewritten as \[v_{n,m,k}:=\left|\frac{x_{n,m,k+1}-x_{n,m,k}}{\Delta t_{n,m}}\right|\leq v_{\max}, \tag{4}\] and that for the clearance from boundaries in Eq. (2): \[\mathrm{dist}_{\mathcal{E}}(x_{n,m,k})\geq r. \tag{5}\] Note that satisfying the collision-free condition in Eq. (3) is a bit non-trivial. Let us define \[\mathrm{r}_{q}(t):=t-\left\lfloor\frac{t}{q}\right\rfloor q, \tag{6}\] and \[\begin{split} C:=&\{(n,m,k,n^{\prime},m^{\prime},k^ {\prime})|\\ &\quad 1\leq n,n^{\prime}\leq N,\ 0\leq m,m^{\prime}<M,\ 0\leq k,k^{ \prime}<K,\\ &\quad 0\leq\mathrm{r}_{M^{\prime}}((m-m^{\prime})\tau+k\Delta t_{n,m }-k^{\prime}\Delta t_{n^{\prime},m^{\prime}})<\Delta t_{n^{\prime},m^{\prime}},\\ &\quad(n,m,k)\neq(n^{\prime},m^{\prime},k^{\prime}).\}.\end{split} \tag{7}\] Then, the collision-free condition is rewritten as, for all \((n,m,k,n^{\prime},m^{\prime},k^{\prime})\in C\), \[\begin{split} d_{n,m,k,n^{\prime},m^{\prime},k^{\prime}}:=& |x_{n,m,k}-((1-\alpha)x_{n^{\prime},m^{\prime},k^{\prime}}+\alpha x_{n^{ \prime},m^{\prime},k^{\prime}+1})|\\ \geq& 2r,\end{split} \tag{8}\] where \[\alpha=\frac{\mathrm{r}_{M\tau}((m-m^{\prime})\tau+k\Delta t_{n,m}-k^{\prime} \Delta t_{n^{\prime},m^{\prime}})}{\Delta t_{n^{\prime},m^{\prime}}}. \tag{9}\] Note that by setting \(t=k\Delta t_{n,m}\), \(t^{\prime}=(k^{\prime}+\alpha)\Delta t_{n^{\prime},m^{\prime}}\), the inequality in Eq. (8) reduces to the original one of Eq. (3). ### Optimization Initial periodic plans.We first create an initial periodic plan while setting \(r\) smaller than that of the original condition and \(\tau\) large enough. This makes it easy to find feasible trajectories that satisfy the above conditions. Concrete algorithms used to produce such plans depend on task setups, which we present in Appendix B. Objective.Given an initial periodic plan for \(\Gamma_{M}=(\gamma_{n,m})_{1\leq n\leq N,0\leq m<M}\), we optimize it with respect to \(r\), \(\tau\), and each trajectory \(\gamma_{n,m}\) to satisfy the original conditions. We denote the original agent radius as \(r_{0}\). By imposing the cost to violate the original conditions, solving periodic MAPP reduces to a continuous optimization problem with the following objective: \[V(\tau,r,\Gamma_{M}):=\left(\tau-\frac{2r}{v_{\max}}\right)^{2}+\frac{\sigma_{ \mathrm{t}}}{K}\sum_{n,m,k}v_{n,m,k}^{2}+c(\tau,r,\Gamma_{M}), \tag{10}\] \[\begin{split} c(\tau,r,\Gamma_{M})&:=\sigma_{ \mathrm{r}}(r-r_{0})^{2}\\ &+\frac{\sigma_{\mathrm{v}}}{K}\sum_{n,m,k}\left(\max\left\{0,v _{n,m,k}-v_{\max}\right\}\right)^{2}\\ &+\frac{\sigma_{\mathrm{o}}}{K}\sum_{n,m,k}\left(\max\left\{0, \mathrm{dist}_{\mathcal{E}}(x_{n,m,k})^{-1}-r^{-1}\right\}\right)^{2}\\ &+\frac{\sigma_{\mathrm{c}}}{K}\sum_{C}\left(\max\left\{0,d_{n,m,k,n^{\prime},m^{\prime},k^{\prime}}^{-1}-(2r)^{-1}\right\}\right)^{2},\end{split} \tag{11}\] where \(\sigma_{\mathrm{t}}\), \(\sigma_{\mathrm{r}}\), \(\sigma_{\mathrm{v}}\), \(\sigma_{\mathrm{o}}\) and \(\sigma_{\mathrm{c}}\) are constants. With this objective, we aim to decrease \(\tau\) to the minimum \(2r/v_{\max}\) where two agents are adjacent to each other. We also impose costs of trajectories defined by the sums of the squares of velocity to prevent vanishing of the gradients on the trajectories. Optimization method.We solve this optimization problem by using the Levenberg-Marquardt algorithm (Leverberg 1944; Marquardt 1963). To force solutions to strictly satisfy the original conditions, we make the constants \(\sigma_{\mathrm{r}},\sigma_{\mathrm{v}},\sigma_{\mathrm{p}},\sigma_{\mathrm{c}}\) gradually increase to become large enough during the optimization. We also gradually decrease \(\sigma_{\mathrm{t}}\) up to zero because the corresponding velocity term is necessary only for preventing vanishing gradients and is not included in the original conditions. ### Topological Remark The quality of final solutions is dependent on initial periodic plans, while some initial plans will result in the same optimization results. One considerable feature of solutions is their equivalent classes with respect to continuous deformation, including optimizations, from topological perspectives. To analyze this, we introduce an additional constraint in which no agents can pass through any start and goal locations, including those of themselves, instead of considering the conditions of the velocity and size of agents. This is necessary to ensure sufficiently different plans to be distinct enough in terms of their homotopy class. Let: \[\mathcal{C}:=(\mathcal{E}\setminus\{s_{1},g_{1},\ldots,s_{N},g_{N}\})\times \mathbb{R}/\mathbb{Z}. \tag{12}\] A trajectory \(\gamma_{n,m}\) can be considered an embedding \(\tilde{\gamma}_{n,m}\) of an open interval \((0,1)\) to \(\mathcal{C}\): \[(0,1)\ni\alpha\mapsto\left(\gamma_{n,m}(tT_{n,m}),\overline{\left(\frac{m\tau+ \alpha T_{n,m}}{M\tau}\right)}\right)\in\mathcal{C}. \tag{13}\] where the overline means the equivalent class. Then, a periodic plan \((\gamma_{n,m})_{1\leq n\leq N,0\leq m<M}\) can be considered an embedding \((\tilde{\gamma}_{n,m})_{1\leq n\leq N,0\leq m<M}\) of the disjoint union of \(N\times M\) open intervals \((0,1)\) to \(\mathcal{C}\), satisfying the following conditions for any \(1\leq n\leq N\) and \(0\leq m<M\): * \(\lim_{\alpha\to 0}\tilde{\gamma}_{n,m}(\alpha)=(s_{n},\overline{m/M})\) and \(\lim_{\alpha\to 1}\tilde{\gamma}_{n,m}(\alpha)=(g_{n},\mathcal{A})\) for some \(\mathcal{A}\in\mathbb{R}/\mathbb{Z}\). * The second component of \(\tilde{\gamma}_{n,m}(\alpha)\) is locally strictly increasing with respect to \(\alpha\). Note that the collision-free condition can be interpreted as injectivity. Now, a set of plans that are equivalent with respect to continuous deformations is the set of homotopy classes of the embeddings that satisfy the above conditions, and the following proposition holds. **Proposition 1**: _When \(\mathcal{E}\) is open and connected, the above set is independent of the positions of \(s_{1},g_{1},\ldots,s_{N},g_{N}\)._ For the proof, see Appendix A. ## Application to Online MAPP While it is assumed with periodic MAPP that the appearance timing of agents in a stream is periodic, the periodic plans once found can be used for solving a more practical problem called online MAPP, where agents appear at _random times_. ### Online MAPP Problem While sharing certain settings with periodic MAPP, the problem of online MAPP can be viewed as a variant of online MAPP [22] with the following features. Like periodic MAPP, there are several pairs of starts and goals in the environment through which streams of agents have to pass while avoiding collisions. Following online MAPP, we assume that agents can appear at random times but are also allowed to wait at their start locations in a finite or infinite queue until the subsequent agents enter the environment. This assumption is similar to the concept of 'garage' [22] and realistic for practical applications such as AIM. ### Proposed Method Adopting periodic plans to agents with random timing of appearances is straightforward. We first divide a timeline into periods of the interval \(\tau\) obtained with the periodic plan. We then allow at most one agent for each period to enter each start location and follow the corresponding trajectory. Therefore, agents can avoid collisions with other agents in the same stream. Formally, by using the notations of periodic MAPP introduced earlier, we say that an agent is assigned to the \(n\)-th trajectory in the \(a\)-th period when it waits until time \(a\tau\) and moves along \(\gamma_{n,a-\lfloor a/M\rfloor M}\). Let us denote as \(t\), the time of appearance of a new agent at the start position \(s_{n}\). Let \(a:=\lceil t/\tau\rceil\). If the \(n\)-th trajectory in the \(a\)-th period has not yet been assigned by any agent, the new agent will follow that trajectory. Otherwise, the agent will wait to follow the next trajectory in the \(a^{\prime}+1\)-th period, where \(a^{\prime}\) is the order of the period assigned to the last agent that appeared at \(s_{n}\) before \(t\) if the queue has a room. Note that if the length of the queue is finite and the number of currently waiting agents exceeds its limit, the planning for the new agent is considered a failure. ### Queueing Theoretical Analysis We theoretically analyze the waiting times of agents for the proposed method. Assumptions.We assume that agents are managed by queues of infinite length. We also model time intervals of agent appearances as \(c+\alpha\), where \(c\) is a constant2 and \(\alpha\) is a random variable drawn from the exponential distribution with a rate parameter \(\lambda\). We also assume that all agents wait for (the maximum) time \(\tau\) until they start moving. Note that this assumption corresponds to the deterministic service time in terms of the queueing theory [1] and is more conservative than in actuality. Footnote 2: We add \(c\) to account for the time margin needed to avoid collisions between agents. Waiting time analysis.For theoretical analysis, we remove a constant term from the above model by subtracting \(c\) from time intervals between arrivals and service time temporarily. This operation does not change the number of agents in a queue but reduces waiting times for all the agents. The resulting model then reduces to the M/D/1 queue in terms of the queueing theory [1]. By denoting the rate of arrivals as \(\lambda\) and the service time as \(D:=\tau-c\), and when \(\rho=D\lambda<1\), the average waiting time is given as \[W^{\prime}=D+\frac{\rho}{2(1-\rho)}D. \tag{14}\] Moreover, the probability that the waiting time exceeds a limit \(t\) decreases exponentially with respect to \(t\)[1]. By again considering \(c\) we subtracted earlier, the average time now becomes \[W=W^{\prime}+c=\tau+\frac{\lambda(\tau-c)^{2}}{2(1-\lambda(\tau-c))}. \tag{15}\] ## Experiments To evaluate the effectiveness of our method for first solving periodic MAPP then using periodic plans for online MAPP problems, we focus on scenarios of abstracting AIM tasks [13]. ### Experimental Setups Environments.Our AIM scenario involves a single intersection with several entrances and exits. Figure 2 shows the six environments used in the experiments. Each environment abstracts one of the typical situations of intersections with different sizes. The circles with letters's' and 'g' are the start and goal locations, respectively. Specifically, environment (a) is a crossing of two one-way roads, while environment (b) is a crossing of one-way and two-way roads. Environments (c) and (d) are modeling the crossing of two two-way roads of different sizes. Finally, environments (e) and (f) model the T-junction with different sizes. We also assume that each start location is equipped with a 'corridor', as illustrated in Fig. 4, which we can use as a queue (with unlimited or limited capacity) to keep agents waiting until entering the intersection. Agents (_i.e._, vehicles) are modeled by a circle and follow a simple holonomic kinematics model that enable them to move in any direction under a given maximum velocity. Trajectories for a new agent must be planned immediately once that agent appears in the environment. Nevertheless, it is possible to replan trajectories for some agents that have already been moving in the environment to take into account the new agent. **Parameters.** We evaluated two different configurations for the queues: unlimited or limited capacity with five agents at most. The time interval between agent appearances is modeled as \(1.0+\alpha\), where \(\alpha\) follows the exponential distribution with a rate parameter \(\lambda\). We sampled appearance times until the last time reached \(1000\) for infinite queues and \(100\) for finite queues. For each environment and each \(\lambda\in[0.25,0.5,0.75,\ldots,2.5]\), we generated \(10\) different problem instances. Throughout the experiment, \(r\) and \(v_{\max}\) were respectively fixed to \(0.5\) and \(1.0\). **Evaluation metrics.** The quality of plans is measured by the following two metrics. * **Throughput** measured as the number of agents entering the environment in unit time. * **Average delay** calculated as the incremental travel time compared with the shortest possible trajectory averaged over agents. Note that this metric includes the time for the agent waiting in a queue. ### Initial Periodic Plans With periodic MAPP, we generated and evaluated three periodic plans for each problem instance with cycles \(M\in\{1,2,3\}\). We constructed the plans by carefully designing the order of passing at intersection points such that, after \(M\) agents of one direction pass at the intersection, \(M\) agents of another direction pass alternately. Formally, the initial plan for \(M\) was created on the basis of the following rules: * Let \(\pi_{n}\) be the shortest path connecting \(s_{n}\) and \(g_{n}\). Each agent appearing at \(s_{n}\) follows \(\pi_{n}\), while adjusting its velocity to satisfy the next condition. * Let two paths \(\pi_{n}\) and \(\pi_{n^{\prime}}\) intersect at point \(p\). We assume that the length of the part of \(\pi_{n}\) from \(s_{n}\) to \(p\) is shorter than that of the part of \(\pi_{n^{\prime}}\) from \(s_{n^{\prime}}\) to \(p\) (and assume \(i<i^{\prime}\) if they are equal). Then, for any \(a\in\mathbb{Z}\), \(M\) agents appearing at \(s_{n}\) at time \(aM\tau,(aM+1)\tau,\ldots,(aM+M-1)\tau\) must pass \(p\) at the time between times when agents appearing at \(s_{n^{\prime}}\) at \((aM-1)\tau\) and at \(aM\tau\) pass \(p\). The differences between times when two agents pass are set to be not smaller than \(1.0\) to prevent optimization failures. Note that such initial plans can always be constructed as long as the initial \(\tau\) is taken to be large and \(r\) is small enough. See Appendix B for details of the generation algorithm. ### Baseline Methods Because the same problem setup (_i.e._, improving throughput of agent streams for MAPP in continuous space and time) has not been explicitly addressed, we developed two baseline methods called _first-come and first-serve (FCFS)_ and _snapshot optimal (SO)_, which combine general strategies of online MAPF (Svancara et al., 2019) and the state-of-the-art Figure 4: Environment (a) with corridors used as queues for five agents. Figure 3: Optimized periodic plans for each environment with best \(M\). Best viewed in videos in the Appendix. Figure 2: Environments that abstract AIM tasks. Letters ‘s’ and ‘g’ indicate start and goal locations, respectively. MAPP algorithms in continuous space and time [11, 12]. For both baselines, we first used probabilistic roadmap with the fixed number of neighbors (k-PRM) [12] as a standard approach to approximate the continuous space into a roadmap and find collision-free paths on the constructed roadmap. Implementation details are presented in Appendix C. * **FCFS**: This baseline incrementally plans a collision-free trajectory for every new agent that appears while regarding trajectories of other agents already present in the environment as space-time obstacles. This is a natural application of prioritized planning [12] for the "remplan single" strategy introduced in the context of online MAPF [12]. Specifically, we used prioritized safe-interval path planning [11] to handle continuous space and time. * **SO**: By contrast, this baseline uses the "replan all" strategy [12] and finds the optimal solution using continuous conflict-based search [1] for all the agents presenting in the environment each time a new agent appears. 2013), MAPF with kinematic constraints (Honig et al., 2018; Walker et al., 2018), and online MAPF (Svancara et al., 2019; Ma et al., 2017; Li et al., 2021), also known as lifelong MAPF. As summarized by Ma (2021), online MAPF is an online version of MAPF in which a team of agents is asked to solve a stream of tasks. Agents are assigned a new task whenever they appear in the environment (Svancara et al., 2019) or upon reaching their goal (Ma et al., 2017). In contrast, periodic MAPP and our version of online MAPP are different in that streams of agents are asked to solve a certain task, with the unique objective that aims for high throughput. We introduced a challenging problem setup that solves online MAPP in the continuous space and with continuous time. Studies on such continuous setups have significantly been limited compared with discrete cases (Walker et al., 2018; Honig et al., 2018; Andreychuk et al., 2021, 2022; Kasaura et al., 2022; Okumura et al., 2022). Application to AIM.As reviewed by Stern et al. (2019); Ma (2021), AIM is a common application of online MAPF. Typical approaches include the first-come and first-serve for repeatedly determining trajectories for every new agent (Dresner and Stone, 2008) and the application of optimal solvers for a set of all agents present at the moment (Svancara et al., 2019), which we compared in our experiments. For a given trajectory (or simply lane), there are studies that used deep reinforcement learning to achieve an optimal policy for vehicle acceleration control (Kreidieh et al., 2018; Jang et al., 2019; Cui et al., 2021). Optimization for MAPP.Similar to this work, other studies uses continuous optimization of trajectories (_i.e._, trajectory deformation (Kurniawati and Fraichard, 2007)), especially for single-agent cases. For example, the idea of avoiding collisions on the basis of the optimization has existed for a long time (Khatib, 1986). Another widely used motivation for optimization is to take into account kinematic constraints, _e.g._, that in Rosmann et al. (2017) for a single-agent case and that in Honig et al. (2018) for a multi-agent case. Our study is unique in that we optimized trajectories as well as the period for their repeated use. Other related work.Finally, our study has several more connections to other prior works. Solving path planning while relaxing constraints is a technique for single-agent cases (Bonilla et al., 2015; Bonilla et al., 2017; Fusco et al., 2018). The homotopical aspect of path planning has been studied by Bhattacharya (2010) for single-agent and by Bhattacharya and Ghrist (2018) for multi-agent cases. However, these studies are not directly applicable to our problem setup due to the difficulty of considering collisions for agents appearing in a periodic fashion. ## Conclusion We presented a new variant of MAPP called periodic MAPP in which a stream of agents can appear periodically at each start location and leave the environment once they arrive at the goal. We also proposed a solution method of periodic MAPP that generates a periodic plan, _i.e._, a set of collision-free trajectories that can be used repeatedly over periods while maintaining high throughput. We showed that the periodic plans can further be used for solving the online MAPP problem in which agents in each stream appear at a random time, and demonstrated its effectiveness on scenarios of abstracting AIM tasks. Currently, our formulation of periodic MAPP as well as our solution method can cover only situations in which agents follow a simple kinematics model that takes into account only maximum velocity. Promising future work is to address more realistic kinematics of wheeled robots or drones. This would require extending an optimization technique for solution methods such as ones used to plan trajectories for swarms (Honig et al., 2018). Another interesting direction from an application perspective is to tackle a more challenging AIM task in which human-driven vehicles also exist. In such a case, it will be important to combine the proposed method with state-of-the-art AIM methods that can reactively control acceleration on a given trajectory (Kreidieh et al., 2018; Jang et al., 2019; Cui et al., 2021). Figure 8: Average delay with respect to \(\lambda\) for _finite_ queues, (bars represent standard derivations) Figure 7: Throughput with respect to \(\lambda\) for _finite_ queues (bars represent standard derivations) A: Proof of Proposition 1 Let \(s^{\prime}_{1},g^{\prime}_{1},\ldots,s^{\prime}_{N},g^{\prime}_{N}\) be another collection of start and goal locations. It is enough to show that there exists a homeomorphism \(F:\mathcal{E}\rightarrow\mathcal{E}\) such that \(F(s_{n})=s^{\prime}_{n}\) and \(F(g_{n})=g^{\prime}_{n}\) for all \(1\leq n\leq N\). This is proven by Michor and Vizman (1994). ## Appendix B: Generation Algorithm of Initial Plans First, we set \(r=0\) to relax the clearance from boundaries and collision-free constraints. Let \(s_{n}=p_{n,0},p_{n,1},\ldots,p_{n,K_{n}}=g_{n}\) be start and goal locations and all intersection points with other paths on \(\pi_{n}\) ordered from \(s_{n}\) to \(g_{n}\). Also, let \(t_{n,m,k}\) be the traveling time from \(s_{n}\) to \(p_{n,k}\) in the initial plan, and \(L_{n,k}\) be the minimum traveling time from \(p_{n,k}\) to \(p_{n,k+1}\). Then, the following condition must hold: \[t_{n,m,0}=0, \tag{16}\] \[t_{n,m,k}+L_{n,k}\leq t_{n,m,k+1}, \tag{17}\] We also consider constraints of the forms "the \(m\)-th agent appeared \(s_{n}\) must pass through \(p_{n,k}=p_{n^{\prime},k^{\prime}}\) earlier than \(m^{\prime}\)-th agent appeared \(s_{n}\)", where \(0\leq m,m^{\prime}<M\), which is written by: \[m\tau+t_{n,m,k}<m^{\prime}\tau+t_{n^{\prime},m^{\prime},k^{\prime}}. \tag{18}\] We describe these constraints using tuple \(C_{1}:=(n,m,k,n^{\prime},m^{\prime},k^{\prime})\). Likewise, there exist constraints of the forms "the \((M-1)\)-th agent appeared \(s_{n}\) must pass through \(p_{n,k}=p_{n^{\prime},k^{\prime}}\) earlier than \(M\)-th agent appeared \(s_{n^{\prime}}\)". They are given as follows: \[(M-1)\tau+t_{n,M-1,k}<M\tau+t_{n^{\prime},0,k^{\prime}}, \tag{19}\] which we describe them using tuple \(C_{2}:=(n,k,n^{\prime},k^{\prime})\). Moreover, we set a redundancy parameter \(R=1.0\) and make these constraints stricter: \[m\tau+t_{n,m,k}+R\leq m^{\prime}\tau+t_{n^{\prime},m^{\prime},k^{\prime}}. \tag{20}\] \[(M-1)\tau+t_{n,M-1,k}+R\leq M\tau+t_{n^{\prime},0,k^{\prime}}. \tag{21}\] After the values of \(t_{n,m,k}\) satisfying the above conditions are determined, the initial value of \(\gamma_{n,m}\) can be constructed by connecting \(p_{n,k}\) and \(p_{n,k+1}\) with constant velocity motions. Now, we construct a directed graph such that its vertices are tuples \((n,m,k)\) and its edges connect from \((n,m,k)\) to \((n,m,k+1)\) or from \((n,m,k)\) to \((n^{\prime},m^{\prime},k^{\prime})\) for \((n,m,k,n^{\prime},m^{\prime},k^{\prime})\in C\). Then, for the constraints given in this paper, the graph is acyclic and the indegrees of vertices of the form \((n,m,0)\) are zero. So we can compute the minimum values of \(t_{n,m,k}\) satisfying (16), (17), and (20) by dynamic programming after the value of \(\tau\) is determined. Since the value of \(\tau\) is not determined yet, we write \(t_{n,m,k}=a_{n,m,k}\tau+b_{n,m,k}\) and define the order of pairs of \((a,b)\in\mathbb{Z}\times\mathbb{R}\) by the lexicographical order. Then, we can compute the minimum values of \(a_{n,m,k},b_{n,m,k}\) by dynamic programming satisfying the following conditions: \[(a_{n,m,0},b_{n,m,0})=(0,0), \tag{22}\] \[(a_{n,m,k}.b_{n,m,k}+L_{n,m})\leq(a_{n,m,k+1},b_{n,m,k+1}), \tag{23}\] \[(a_{n,m,k}+m,b_{n,m,k}+R)\leq(a_{n^{\prime},m^{\prime},k^{\prime}}+m^{\prime},b_{n^{\prime},m^{\prime},k^{\prime}}) \tag{24}\] for all \((n,m,k,n^{\prime},m^{\prime},k^{\prime})\in C_{1}\). We determine the values of \(a_{n,m,k},b_{n,m,k}\) as the minimum ones. Furthermore, we can prove that these values hold the following inequality by the forms of conditions: \[a_{n,m,k}+m\leq M-1. \tag{25}\] Thus, when \(\tau\) is large enough, \(t_{n,m,k}=a_{n,m,k}\tau+b_{n,m,k}\) while satisfying the conditions (16), (17), (20), and (21). We determine the value of \(\tau\) as the minimum one of such values. ## Appendix C: Implementation Details All the methods are implemented in C++ and evaluated with Intel Core i9-9900K CPU and 32 GB RAM. **Optimization method.** For optimization, we use Levenberg-Marquardt (LM) Algorithm implemented in g2o (Kummerle et al., 2011). Parameters for the optimization are set as the default values of the library. We set \(\sigma_{\rm r}=\sigma_{\rm v}=\sigma_{\rm o}=\sigma_{\rm c}=10^{4}\) and \(\sigma_{\rm t}=1\) as the initial values. Since the set \(C\) in Eq.(7) of the main paper may change depending on the value of \(p\) or \(T_{i,j}\), we have to reset the explicit constraints repeatedly during optimization. The number of iteration of LM algorithm and intervals of resetting constraints are as follows: 1. The first \(500\) iterations: we recalculate \(C\) after each iteration. 2. The next \(39500\) iterations: we recalculate \(C\) for \(10\) iterations. 3. The following \(1000\) iterations: we recalculate \(C\) and increase \(\sigma_{\rm r},\sigma_{\rm v},\sigma_{\rm o}\), and \(\sigma_{\rm c}\) by multiplying \(1.01\) and decrease \(\sigma_{\rm t}\) by dividing \(1.01\) after each iteration. 4. The remaining iterations: we recalculate \(C\) after each iteration. This phase lasts until the plan converges, more precisely, until the difference of costs for \(100\) iterations goes below \(10^{-6}\). Table 2 reports the number of iterations until the optimization converges and required times in minutes. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline & \multicolumn{6}{c}{**Environment**} \\ \(M\) & (a) & (b) & (c) & (d) & (e) & (f) \\ \hline \(1\) & 44537 & 42899 & 41113 & 45025 & 41133 & 46111 \\ & 2 & 4 & 5 & 13 & 29 & 54 \\ \hline \(2\) & 74280 & 97481 & 84898 & 49504 & 43514 & 73906 \\ & 15 & 69 & 59 & 50 & 69 & 277 \\ \hline \(3\) & 41910 & 83786 & 52978 & 85698 & 61067 & 41973 \\ & 8 & 96 & 57 & 281 & 188 & 107 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of iterations (top) and required time [min] (bottom) for until the optimization converges for the proposed approach. Baseline methods.For the FCFS approach, we modify the implementation of Prioritized SIPP by Kasaura, Nishimura, and Yonetani (2022) to our problem setting. For the SO approach, we modify the implementation of CCBS by the authors, which is available online3, to our problem setting. For each query, planning was considered a failure if the runtime exceeded \(1.0\) seconds. For PRM, We also use the implementation by Kasaura, Nishimura, and Yonetani (2022). As the parameters for PRM, the number of roadmap vertices is \(1000\) for the FCFS approach and \(20\) for the SO approach. We limit the number of vertices for the SO because CCBS is generally time-consuming when the density of vertices is high and the environments are small. The number of neighbors is \(15\) for the FCFS approach and \(10\) for the SO approach.
2306.05404
A ship-in-a-bottle quantum gas microscope setup for magnetic mixtures
Quantum gas microscopes are versatile and powerful tools for fundamental science as well as promising candidates for enticing applications such as in quantum simulation or quantum computation. Here we present a quantum gas microscopy setup for experiments with highly magnetic atoms of the lanthanoid elements erbium and dysprosium. Our setup features a non-magnetic, non-conducting, large-working-distance, high-numerical-aperture, in-vacuum microscope objective, mounted inside a glue-free quartz glass cell. The quartz glass cell is enclosed by a compact multi-shell ferromagnetic shield that passively suppresses external magnetic field noise by a factor of more than a thousand. Our setup will enable direct manipulation and probing of the rich quantum many-body physics of dipolar atoms in optical lattices, and bears the potential to put exciting theory proposals -- including exotic magnetic phases and quantum phase transitions -- to an experimental test.
Maximilian Sohmen, Manfred J. Mark, Markus Greiner, Francesca Ferlaino
2023-06-08T17:54:18Z
http://arxiv.org/abs/2306.05404v2
# A ship-in-a-bottle quantum gas microscope for magnetic mixtures ###### Abstract Quantum gas microscopes are versatile and powerful tools for fundamental science as well as promising candidates for enticing applications such as in quantum simulation or quantum computation. Here we present a quantum gas microscopy setup for experiments with highly magnetic atoms of the lanthanoid elements erbium and dysprosium. Our setup features a non-magnetic, non-conducting, large-working-distance, high-numerical-aperture, in-vacuum microscope objective, mounted inside a glue-free quartz glass cell. The quartz glass cell is enclosed by a compact multi-shell ferromagnetic shield that passively suppresses external magnetic field noise by a factor of more than a thousand. Our setup will enable direct manipulation and probing of the rich quantum many-body physics of dipolar atoms in optical lattices, and bears the potential to put exciting theory proposals - including exotic magnetic phases and quantum phase transitions - to an experimental test. ## Introduction ### Background Understanding and control of the interplay between light and matter at the microscopic level has enabled revolutionary insights in fundamental physics and plays an increasing role for applications [1; 2; 3; 4; 5]. A pivotal advantage of such quantum optics approaches is the ability to custom-tailor quasi-pure model systems, which can be prepared, manipulated and probed with high fidelity while being near-perfectly isolated from environmental sources of noise. Quantum gas microscopes, in particular, enable the manipulation and imaging of individual, ultracold particles (typically: neutral atoms) pinned in a two-dimensional (2D) optical lattice potential, allowing to study quantum many-body physics on the single-particle level [6; 7; 8]. Owing to the superior signal-to-noise ratio, site-resolved detection is usually based on fluorescence imaging. To counteract recoil-heating of the pinned atoms by the scattered fluorescence photons (which could cause them to hop to nearby lattice sites), it is typically crucial to combine the imaging procedure with an in-trap optical cooling scheme. The first quantum gas microscopes used bosonic alkali atoms (\({}^{87}\)Rb) and went into operation in 2009/10 [9; 10]. They enabled studies of quantum phase transitions and the associated Higgs mode [11], particle correlations [12], quantum dynamics [13; 14] and other, similarly fundamental phenomena [6; 8]. About five years later, two groups demonstrated microscopes for the bosonic lanthanoid \({}^{174}\)Yb [15; 16], whose complex electronic level structure promises opportunities for realising new quantum information protocols. Microscopy of alkali fermions - in contrast to the alkali boson \({}^{87}\)Rb - initially proved challenging, since the small hyperfine splitting and low mass required the refinement of optical cooling procedures [6; 17]. The coming into operation of five different fermion microscopes in 2015 - using either \({}^{6}\)Li [18; 19] or \({}^{40}\)K [20; 21; 22] - marked the beginning of a long series of important results for Fermi systems, such as the direct observation of band [19] and (fermionic) Mott insulators [21; 23], antiferromagnetic ordering [24; 25], and many more [6; 8]. The vast majority of experiments with quantum gas microscopes has so far concentrated on atoms interacting via a short-range contact interaction. In such systems, atoms do not directly experience an energy shift depending on whether a neighbouring lattice site is occupied or not - neighbour interactions are only introduced via a (weak) second-order tunnelling process, the so-called super-exchange interaction [26]. If, in contrast to purely contact-interacting systems, an additional interaction of long-range character - i.e., with an associated length scale similar to or larger than the lattice spacing - is present in a system, profound differences and new physics are to be expected [27; 28]. Currently, three different platforms are the main candidates for realising experiments with long-range interactions on optical lattices, and each of them has individual advantages and drawbacks. First, coupling optical-lattice ground-state atoms to Rydberg states induces a shift of the electronic energy levels, effectively dressing the atoms with a Rydberg character [29; 30; 31]. This results in an effective interaction potential between the atoms, whose strength and range can be controlled, e.g., via the detuning and intensity of the dressing laser, or the specific Rydberg state used. However, Rydberg admixtures have a limited lifetime and are highly susceptible to environmental stray electric fields [29]. Second, one can exploit the electric dipole-dipole interaction (DDI) between ground-state polar molecules [32; 33; 34]. Polar molecules, too, feature strong, long-range interactions which are - over a certain range - tunable via an external elec tric field, but this comes at the price of rather demanding molecule preparation schemes (many steps for, e.g., association, cooling, collisional shielding, and others) as well as substantial particle losses due to complex, reactive collision processes [35]. Third, one can exploit the anisotropic and long-range magnetic DDI between atoms with a strong, permanent magnetic dipole moment such as chromium [36], dysprosium [37], or erbium [38]. Despite their lower interaction strength compared to Rydberg atoms and polar molecules, experimental realisations benefit tremendously from straightforward cooling and preparation schemes as well as long lifetime of samples of ultracold magnetic atoms. Some direct effects of magnetic DDI on lattice physics have already been studied using conventional time-of-flight techniques [39; 40; 41; 42]. Ultimately, however, it would be very desirable to perform experiments with dipolar atoms on a lattice in a quantum gas microscope, where many phenomena are much more directly accessible. ### Key features of erbium and dysprosium All quantum gas microscopes are tailored specifically to the needs of their atomic species. This is necessitated by the demanding properties such a setup has to offer, for example high optical resolution or dedicated imaging and cooling schemes. Hence, let us first summarise some of the decisive features of erbium and dysprosium that distinguish them, e.g., from the alkali metals which predominate in quantum gas microscopy today. _Magnetic moment._ Most importantly, of course, erbium and dysprosium feature large permanent magnetic dipole moments of \(7\,\mu_{\mathrm{B}}\) and \(10\,\mu_{\mathrm{B}}\), respectively, where \(\mu_{\mathrm{B}}\) is the Bohr magneton (cf. rubidium: \(1\,\mu_{\mathrm{B}}\)). _Isotope options._ Both erbium and dysprosium posses a variety of bosonic as well as fermionic isotopes, which can be cooled and captured efficiently in a parallel magneto-optical trap [49] and of which several combinations have been brought to mixture degeneracy [50]. _Feshbach resonances._ The erbium and dysprosium isotopes offer dense Feshbach spectra with a comfortable number of broad resonances at easily accessible field strengths, favourable for contact interaction tuning [51; 52; 53] or molecule formation [54]. Also interspecies Feshbach spectra of several isotope combinations provide conveniently broad resonances [55]. _Electronic energy spectra._ Erbium and dysprosium possess a large number of electronic spectral lines (see, e.g., Fig. 1 in Ref. [56]), with a great variety of different widths, from broad to extremely narrow (see Table 1). From these transitions the most suitable ones can be picked for the desired task (such as cooling, imaging, shelving, etc.). Note that the broadest transitions of erbium and dysprosium are in the blue part of the visible spectrum, hence yield a high resolution according to the Abbe limit. _Large mass._ Erbium and dysprosium are comparatively heavy elements (depending on the isotope, between 161 and 170 atomic mass units). Therefore, recoil velocities are exceptionally low for all optical transitions (in particular, compared to the light alkali elements). _Large spin manifold._ One of the reasons for the large magnetic moments of erbium and dysprosium is the large angular momentum in the electronic ground states, with \(J=6\) (\(J=8\)) for the bosonic isotopes of erbium (dysprosium), and \(F=19/2\) (\(F=21/2\)) for fermionic erbium (dysprosium). The corresponding Zeeman and hyperfine states can be used to emulate spin Hamiltonians or to implement synthetic dimensions [57; 58]. These features of erbium and dysprosium give opportunity for new techniques that we would like to try with our quantum gas microscope. _Free-space imaging._ It has been proposed [59] that - similar to free-space imaging of a single atom released from an optical tweezer [60; 61] - for erbium and dysprosium atoms on an optical lattice the combination of a broad electronic transition (i.e., high photon scattering rate) and large atomic mass (i.e., slow diffusion) might enable fast imaging protocols that work _without_ optical cooling and pinning lattice. A first experimental demonstration of such a free-space imaging of a lattice gas has been reported very recently [62]. The ability to reliably reconstruct site occupations using such a fast, free-space imaging protocol can greatly simplify the detection scheme and offer a way to avoid the parity-projection problem almost all current quantum gas microscopes are facing [8]. In addition, it might prove useful for imaging of bulk systems with high resolution. _Spin manipulation and detection._ The combination of large ground-state spin space and narrow optical transitions makes erbium and dysprosium ideal candidates for high-fidelity spin-state control [47; 48]. This can be used to reliably prepare and probe specific states in the quantum gas microscope, to dynamically drive phase transitions [63], and to perform spin-selective imaging and shelving [64]. _DDI-mediated lattices._ In a doubly-dipolar mixture (species A and B - in our case: one erbium, one dysprosium) it is possible employ an optical lattice whose wavelength \(\lambda_{l}\) is tuned to a value where the polarisability \begin{table} \begin{tabular}{l c c l l} \hline \hline \(\sim\Gamma/2\pi\) & \(\lambda_{\mathrm{Er}}/\mathrm{nm}\) & \(\lambda_{\mathrm{Dy}}/\mathrm{nm}\) & usage example & refs \\ \hline \multirow{4}{*}{30 MHz} & \multirow{4}{*}{401} & \multicolumn{4}{c}{} & \multicolumn{2}{c}{Zeeman slowing, fluorescence imaging [43; 44]} \\ & & & & narrow-line MOT & [43; 44] \\ \cline{1-1} & & & & narrow-line MOT & [45; 46] \\ \cline{1-1} Hz & 1299 & 1001 & spin manipulation & [47; 48] \\ \hline \hline \end{tabular} \end{table} Table 1: _Selected optical transitions of erbium and dysprosium._ vanishes for species A, \(\alpha_{\rm A}(\lambda_{l})\approx 0\), but not for species B.1 Whereas this lattice will be invisible for species A, it can be used to pin species B. Since, however, both species interact via DDI, species A will see an effective periodic potential stemming from the pinned species B. As the pinned atoms can vibrate, the DDI-mediated periodic potential itself supports phononic excitations - in contrast to a conventional optical lattice, which is infinitely stiff. Following a different approach, very recently an optical lattice supporting phonons was realised using a confocal optical resonator [66], shedding first light on the rich physics of elasticity in quantum solids. Footnote 1: More precisely, \(\Im\{\alpha_{\rm B}(\lambda_{l})\}\approx 0\), so negligible absorption [65], but \(\Re\{\alpha_{\rm B}(\lambda_{l})\}>0\). ### Potential research directions There exists a wealth of promising research proposals that could be followed using a quantum gas microscope for dipolar atoms such as erbium or dysprosium, and yet further ones for a combination of two dipolar species. For instance, in lattice systems, the additional presence of the DDI requires an extension of the standard Bose- and Fermi-Hubbard models and lattice spin models. This gives rise to novel, qualitatively different quantum phases and transport dynamics. Here we list some examples of potential research directions that could in our opinion be particularly interesting to follow. _Exotic ground states._ The long-range and anisotropic nature of the DDI dramatically changes the behaviour of bosonic as well as fermionic atoms in optical lattices. Dipolar bosons on a square lattice, for example, are expected to possess many-body ground-states resembling a charge-density wave. At half-filling, this can take the form of checkerboard or stripe configurations, and the phase diagram might host supersolid regions [27; 28; 67]. From theory it is expected that changing the dipole orientation drives direct transitions between these phases; in close proximity of the transition, the existence of meta-stable emulsion phases has been predicted [68]. For spin-polarised dipolar fermions on a 2D lattice, in contrast, changing the dipole orientation is expected to give rise to a topological phase transition linked to the deformation of the Fermi surface, the so-called Lifshitz transition [69]. Taking the spin degree of freedom into account, erbium as well as dysprosium allow to implement a large number of different models, including spin-1/2 and spin-1 systems, with up to 20 (22) available spin states for fermionic erbium (dysprosium). A recent proposal suggests to use a 1D chain of spin-1/2 dipolar fermions to form a bond-order-wave phase, which is strongly correlated and topologically protected [70]. Other theory works have shown that also topological flat bands [71] or fractional Chern insulators [72] might be realisable. In principle, the whole class of systems described by spin Hamiltonians - like quantum magnetism models, spin liquids, and frustration - could be implemented using erbium and/or dysprosium atoms [73; 74]. _Non-equilibrium dynamics._ Beyond the preparation and detection of exotic ground states, a quantum gas microscope is also well suited to study system dynamics. For example, it is expected that the DDI leads to cluster formation, where two or more atoms on neighbouring lattice sites are effectively bound together. These clusters can move around, but their speed is non-trivially dependent on the lattice geometry and system parameters [75; 76]. Interestingly, such clusters can become fully localised without a need for disorder. Another phenomenon in reach is Levy flights - spin excitations that move through a sparse, randomly occupied lattice [77]. As a final example, it could be interesting to study the long-term dynamics of spin states within the Heisenberg XXZ model, where a peculiar difference in the equilibrium distribution is expected between scenarios with half-integer- compared to integer-spin manifolds [78]. Evidently, this is only a subjective choice of promising research directions. Many more already exist or might become apparent once a dipolar quantum gas microscope is in operation. In the following, we describe the quantum gas microscope that we have designed as an addition to our erbium-dysprosium quantum mixture experiment. The microscope is housed in a separate ultra-high vacuum (UHV) section and has been assembled, evacuated, baked, tested, and finally attached to the existing apparatus through a mechanical UHV gate valve. ## I Microscope objective The most important component of the Er-Dy quantum gas microscope is its imaging objective. In the following, we will first discuss design options and motivate our choice. Next, we detail our design's optical and mechanical properties, and finally present the measured performance. Figure 1: _Types of quantum gas microscopes._ (a) All optics outside vacuum chamber, (b) solid-immersion lens (SIL) close to sample, (c) optics partly in vacuum, (d) entire optics in vacuum. ### Initial options and considerations Quantum gas microscopes are highly complex machines, often operating at the edge of what is technologically possible. Therefore, the optical design typically needs to be carefully tailored to the experimental needs. For the Er-Dy experiment, we identified the following key requirements for the imaging system: 1. a high numerical aperture (NA) \(>0.8\) to be able to (i) resolve lattices of small spacing (\(\leq 532\,\mathrm{nm}\)) and for (ii) a high fluorescence photon collection rate; 2. a large working distance on the order of millimetres to give optical access from the side, to grant full freedom of (transverse) lattice laser wavelengths \(\lambda_{l}\), and to avoid possible effects of close surfaces on the dipolar atoms; 3. use of non-magnetic and non-conducting materials to avoid magnetisation and eddy currents; \begin{table} \begin{tabular}{l l l l} \hline \hline type & pro & contra & refs \\ \hline \multirow{3}{*}{(a)} & easy alignment & \multirow{3}{*}{\(\left[\begin{array}{@{}c@{}}10,&16,&19,\\ \text{long WD}&\text{limited NA}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 22]}\end{array}\right]} \\ \cline{1-1} & high NA & short WD & \\ \cline{1-1} & \multirow{2}{*}{\(\lambda_{l}\) fixed by coating [15, 18, 21]} \\ \cline{1-1} & moderate NA & \multirow{2}{*}{\(\left[\begin{array}{@{}c@{}}15,&18,&21\\ \text{moderate WD}&\text{relative vibrations}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 00]}\end{array}\right]\)} \\ \hline \multirow{3}{*}{(c)} & high NA & \multirow{3}{*}{\(\left[\begin{array}{@{}c@{}}10,&16,&19,\\ \text{long WD}&\text{limited NA}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 22]}\end{array}\right]\)} \\ \cline{1-1} & high NA & \multirow{3}{*}{\(\left[\begin{array}{@{}c@{}}10,&16,&19,\\ \text{long WD}&\text{UHV risks}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 81-84]}\end{array}\right]\)} \\ \cline{1-1} & high NA & \multirow{3}{*}{\(\left[\begin{array}{@{}c@{}}10,&16,&19,\\ \text{easy alignment}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 00]}\end{array}\right]\)} \\ \cline{1-1} & long WD & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & \multirow{3}{*}{\(\left[\begin{array}{@{}c@{}}10,&16,&19,\\ \text{long WD}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 00]}\end{array}\right]\)} \\ \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & \multirow{3}{*}{\(\left[\begin{array}{@{}c@{}}10,&16,&19,\\ \text{long WD}&\text{\@@cite[cite]{[\@@bibref{}{Lunderson1999}{}{}]} 00]}\end{array}\right]\)} \\ \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-6} \cline{1-1} & & & & & & \\ \cline 4. near achromaticity at the imaging wavelengths of erbium and dysprosium. When a sample inside a UHV environment is imaged, the vacuum window necessarily becomes part of the light path and needs to be considered in the optical layout. The aberrations introduced by a plane-parallel glass plate are mainly spherical and chromatic and scale with the thickness of the plate [85]. Different quantum gas experiments have found different solutions for dealing with these aberrations, which can be broadly grouped in four categories as sketched in Fig. 1. Table 2 gives a brief overview over the most important properties and limitations of these approaches. (a) _Optics outside vacuum._ This straight-forward option usually features a long working distance (WD) of some to tens of millimetres which, for reasonable optics diameters, limits the NA to below \(\sim 0.7\). The usual approach is to use a custom objective carefully corrected for a window of some millimetres thickness [10; 16; 19]. However, the use of a commercial objective in combination with a very thin (\(<1\,\mathrm{mm}\)) and - due to bending under vacuum - small-diameter window has also been demonstrated [22]. (b) _Solid-immersion lens (SIL)._ Lenses in shape of a truncated sphere close to the sample increase the NA [86]. Hemispherical SILs offer an enhancement factor equal to their refractive index, \(n\), and can be mounted in vacuum [9] or be optically contacted to the glass window [15; 18; 21]. Notably, a window that is part of the SIL does not add aberrations. SILs with Weierstrass truncation offer an even higher NA enhancement by \(n^{2}\)[84; 87; 88]. However, SILs have a very small (typically micrometre-scale) working distance, necessitating sophisticated transport strategies to bring the atoms into focus. Additionally, most often horizontal as well as vertical lattice beams have to be reflected off the front surface, posing many constraints for the optical coating, whose performance can quickly become limiting. (c) _Optics partly in vacuum._ A first lens inside vacuum can reduce the marginal ray angles and hence aberrations by the following window. Imaging systems of this type have been demonstrated with moderate NAs around 0.5 [79; 80]. Their disadvantage is that the relative alignment of in- and ex-vacuum components is absolutely crucial and that great care must be taken to prevent mechanical vibrations or long-term drifts. (d) _All optics in vacuum._ In-vacuum objectives are conceptually simple and allow for both a millimetre-level working distance and high NA. If the objective is at infinite conjugation, the glass window does not add aberrations. Despite these advantages, this design is less frequently encountered in cold-atoms experiments, since the objective must be UHV-compatible and the UHV chamber must be large. Nevertheless, objectives of this kind have successfully been demonstrated in metal chambers [81; 83], in a glued glass cell [82] and, very recently, in a quartz glass cell similar to ours [84] - a parallel development for a neighbouring setup in Innsbruck. For our objective, option (a) was rejected since it can hardly deliver the targeted NA and resolution. The SIL design (b) - due to its short WD and coating limitations - seemed incompatible with the number of different wavelengths that need to be directed onto (multiple lattices, cooling, spin manipulation, etc.) and collected from (broad-angle fluorescence) the erbium _and_ dysprosium atoms. Option (c) seemed risky since, at the level of optical resolution aimed for, tolerances on alignment imperfections are small, and drifts or relative vibrations could have proven fatal for the optical performance. We hence opted for an in-vacuum objective (d), which - in combination with a glue-free glass cell - reminded us of the challenge to build a ship in a bottle. ### The Er-Dy objective design In this section, we present the high-NA in-vacuum objective for microscopy of erbium and dysprosium atoms which we have developed together with a manufacturer2. It was designed for achromatic performance on the broad, blue imaging transitions of erbium and dysprosium at 401 and 421 nm wavelength, respectively, as well as for 633 nm, which is the alignment wavelength of the manufacturer and, by coincidence, close to the red dysprosium transition at 626 nm. The objective contains five singlet lenses of different glasses with a sufficient transmission for blue light, and no cemented elements to avoid outgassing. The objective tube and all lens retaining mechanics are fully fabricated from machinable ceramics, see Fig. 3 (a). All volumes inside the housing and in between lenses \begin{table} \begin{tabular}{l l r} \hline \hline quantity & unit & design value \\ \hline eff. focal length & mm & 20.0 \\ total length & mm & 70.0 \\ chrom. focal shift & μm & 0.39 \\ depth of focus & μm & \(\pm 0.25\) \\ working f-numbera & & 0.56 \\ object-space NA & & 0.89 \\ \cline{3-3} & & 401 nm & 421 nm \\ \cline{3-3} object-space Airy radius & μm & 0.27 & 0.29 \\ wavefront errora peak-valley & \(\lambda\) & 0.098 & 0.11 \\ wavefront errora RMS & \(\lambda\) & 0.032 & 0.021 \\ Strehl ratiob & & 0.96 & 0.98 \\ \(\O\) diff.-lim. FOV & μm & 160 & 180 \\ \hline \hline \end{tabular} \end{table} Table 3: _Important design values of the Er–Dy objective._ are vented through borings to avoid virtual leaks under UHV. The objective's optical design values are summarised in Table 3; moreover, some important calculated characteristics are plotted in Fig. 2. A custom miniature dielectric mirror3 of \(1.5\,\mathrm{mm}\) diameter has been glued to the objective front lens using a high-performance UHV-compatible adhesive4, see Fig. 3 a. This miniature mirror blocks only \(\sim 3\,\%\) of the solid angle covered by the objective lens (\(\mathrm{NA}=0.89\)), so hardly affects the number of collected photons, and will serve to reflect off the vertical lattice beams (\(\lambda_{l}=1550\,\mathrm{nm}\)). This fixes the lattice position relative to the objective and will help to reduce drifts and vibrations, thus facilitate keeping the atoms in focus. Footnote 3: Optics Technology, Inc., NY/USA Footnote 4: Optocast 3415, Electronic Materials, Inc., CO/USA The objective is mounted in a home-built ceramics part, shown in Fig. 3 (b). It features three titanium flat springs around its perimeter and sits on three ruby balls to define the objective position within the glass cell (cf. Section II.1). The individual pieces of the mount are assembled using vented titanium screws and beryllium-copper disc springs which take up mechanical stress, such as upon temperature changes during bake-out. The top rim of the mount bears an arrangement of custom solid-quartz-glass mirrors5 with UV-enhanced aluminium coating, which we colloquially refer to as the 'crown mirrors'. Eight crown mirrors (elliptic in Fig. 3 b) are in staggered alignment with the side windows of the glass cell and will serve to reflect laser beams onto the atoms, entering and exiting through the large top window. One crown mirror (rectangular in Fig. 3 b) protects the objective from being hit by the divergent, high-power optical-transport beam (when its focus is away from the glass cell). We chose the angle of the crown mirrors such that normal incidence on the top window (and, thus, any standing-wave backreflection) is avoided. Footnote 5: Optica AG, Switzerland We highlight that our microscope cell and interior are exclusively built from materials which are _non-magnetic_ and - besides small parts like titanium screws and beryllium-copper disc springs - _non-conducting_. We thus maximally avoid magnetisation effects and eddy currents that could deteriorate the magnetic environment close to our magnetic atoms as well as limit field switching times. ### Optical performance The performance of the microscope objective was tested by imaging the tip of a scanning near-field optical microscopy (SNOM) fibre6 onto a CCD camera with 50\(\times\) magnification [88; 89]. The SNOM fibre tip has a nominal aperture of 50 to 100 nm and therefore can be used as a good approximation of a point source. Footnote 6: MF001, Tipsnano O�, Estonia At constant magnification, the peak intensity in a power-normalised image is proportional to the Strehl ratio [90]. Figure 4 (a) shows the peak normalised intensity when moving the SNOM fibre tip along the optical axis using a piezo actuator. We observe a clear maximum - corresponding to the respective focus position - for each of the two investigated wavelengths (401 and 421 nm). The axial distance between these maxima, the chromatic focal shift, is \(\sim 2.4\,\mathrm{\SIUnitSymbolMicro m}\). The objective is therefore close to, but not fully achromatic, as initially targeted. According to the manufacturer, this is most probably due to insufficient accuracy of their prior knowledge of refractive indices of the lens glasses (they had to be extrapolated down to 401 nm wavelength, where the dispersion is steep). For imaging of only one species per time, this does not pose a problem, since the shift can be corrected by re-adjusting the camera position. To be able to image both species in a single experimental run, the chromatic shift needs to be corrected for. Possible experimental solutions include (i) using a dichroic mirror to separate the beam paths and image the two species on separate cameras, or to image both species shortly after each other in combination with either (ii) an adaptive optical element, such as a fast focus-tunable lens [91], (iii) an imaging lens on a fast translation stage to dynamically adjust the focus position, or by (iv) dynamically shifting the vertical lattice position between the respective focal planes. Figure 4 (b) shows images close to the foci [i.e., around the maxima in Fig. 4 (a)] at 401 and 421 nm, respectively, as well as azimuthally averaged and fitted spot profiles. These measurements give an upper bound on the optical Figure 3: _In-vacuum optics._ (a) The microscope objective with the miniature lattice mirror (_inset_). The red dashed arrow indicates a vertical lattice beam. (b) The objective mount with crown mirrors. Round mirrors serve to reflect light onto the atoms and back (red dotted arrows), the rectangular mirror protects the objective from the divergent transport beam (green dashed arrows). resolution \(d_{0}\) according to the Rayleigh criterion of \[d_{0}=\begin{cases}0.29(1)\,\mathrm{\SIUnitSymbolMicro m}&\text{at}\quad\lambda=40 1\,\mathrm{nm},\\ 0.30(1)\,\mathrm{\SIUnitSymbolMicro m}&\text{at}\quad\lambda=421\,\mathrm{nm}, \end{cases}\] close to the values predicted by our simulations (Table 3). The fluorescence from the atoms is collimated by the microscope objective (focal length \(f=20\,\mathrm{mm}\)), passes the vacuum window, and needs to be re-focused (focal length \(f^{\prime}\)) on a camera chip to form an image. For Nyquist sampling, a sufficient image magnification \(M\) is needed. Aiming for, e.g., a sampling of five pixels per lattice site with a sensor pixel size of \(d_{\mathrm{px}}=16.5\,\mathrm{\SIUnitSymbolMicro m}\) (a typical value for EMCCD cameras), a lattice constant of \(0.266\,\mathrm{\SIUnitSymbolMicro m}\), we would require \(M\gtrsim 310\) and \(f^{\prime}\gtrsim 6.2\,\mathrm{m}\). For larger lattice spacings or smaller pixel size these numbers are smaller, but still likely on the order of metres. Such long light paths would naturally suffer from stability problems caused by mechanical vibrations or air flow. Therefore we used numerical methods to design a telefocus system which consists solely of stock lenses, has a large effective focal length (\(6.2\,\mathrm{m}\)) but a small physical length (\(1.1\,\mathrm{m}\)) and is fully achromatic at \(401\) and \(421\,\mathrm{nm}\). ## II Vacuum Integration Experiments with ultracold atoms have to be conducted under UHV on the order of \(10^{-11}\,\mathrm{millibar}\) to isolate the samples from the environment and permit sufficiently long lifetimes. The basic design of the our microscope UHV setup consists of a quartz glass cell - housing the objective - attached to a stainless-steel7 tube cross which connects to vacuum instrumentation and the main experiment. Footnote 7: 316LN and 316L (AISI classification) for low relative magnetic permeability. ### Quartz glass cell The microscope objective is mounted inside a quartz glass cell (Fig. 5). The quartz glass cell (compared to, e.g., a metal vacuum chamber) brings three main advantages: First, superb vacuum quality, second, it is inherently non-magnetic and non-conducting, third, it offers maximum optical access. Our custom-manufactured8 glass cell consists of a hollow, octagonal quartz glass corpus with one 3" window attached to the top and seven 1" windows attached to side borings using a glass-frit bonding technique. Fused silica (i.e., high-purity synthetic quartz glass) was our window material of choice due to its low light absorption down to below \(400\,\mathrm{nm}\), and small thermal lensing effects even at high light intensity [92]. On the inside, our windows feature an extremely broadband (reflectivity \(<0.5\,\mathrm{\char 37}\) over several hundred nanometres) and broad-angle (\(0^{\circ}\) to \(>45^{\circ}\)) gradient-index antireflection nanostructure coating.9 On the outside, the windows are uncoated; this combination offers maximum flexibility with respect to the wavelengths that can be transmitted into or out of the cell while still allowing external cleaning of the cell from dust, etc. Footnote 8: Precision Glassblowing of Colorado, Inc., CO/USA. Footnote 9: RAR.L2, Tel Aztec LLC, MA/USA One small- and one large-diameter quartz glass tube with polished flat end lips (Fig. 5) are attached to the bottom and to one of the side borings of the quartz glass corpus, respectively. The bottom tube allows to insert the microscope objective and its mount (Fig. 3) into the quartz glass cell. The side tube connects the cell to the existing UHV apparatus and forms the single support of the cell against gravity (Fig. 7), whereby we avoid static overdetermination which could lead to stress peaks and breaking. Figure 4: (a) _Chromatic focal shift_. Normalised intensity vs on-axis position (\(z\)), for for \(401\,\mathrm{nm}\) (blue) and \(421\,\mathrm{nm}\) (red). Solid lines are Gaussian fits to the maxima; dashed lines represent the centres, grey shadings the standard deviations. The distance between the maxima is around \(2.4\,\mathrm{\SIUnitSymbolMicro m}\). (b,c) _Optical resolution_. Images of a SNOM fibre tip (_respective left_) at \(401\,\mathrm{nm}\) (b) and \(421\,\mathrm{nm}\) (c). Distances (\(x,y\)) are in object-plane units. The azimuthally averaged intensity profiles (_respective right_) are plotted vs the radial coordinate (\(\rho\)). The red lines are Gaussian fits. ### Mounting concept During the microscope assembly, the objective in its mount, sitting on three rub balls10 on the bottom fused-silica window, was carefully inserted into the quartz glass cell from below using a scissor jack, until a glass-to-glass indium seal could be formed (see below). The three titanium flat springs around the objective mount define the horizontal position of the objective along the axis of the bottom tube [Fig. 3 (b)]. To connect the quartz glass side tube to the tube cross, we engineered a small, steel flat-to-knife-edge adapter piece. In a first, critical step, the flat face of this adapter piece (cf. Fig. 3 in Ref. [84]) was connected to the quartz glass cell using indium sealing. In a second, uncritical step, the knife-edge face of the adapter was connected to the steel cross (Fig. 6) using a standard copper gasket and con-flat (CF) steel flange. To form our indium seals in a controlled fashion and to protect them afterwards, we designed clamping rings that could be assembled around the quartz glass cell tube lips. These clamps were in-house machined from a high-performance polymer.11 Screw holes arranged circularly around the clamps allowed to press the respective sealing surface together and squash the indium O-ring in between. Conical beryllium-copper disc-springs, placed head-to-head under each screw, facilitated the loading of the clamps with even forces. The horizontal arms of the custom steel cross connect the quartz glass cell to the experimental main chamber via a UHV gate valve12 and form part of the transport distance for our atomic samples. The bottom vertical arm connects to a non-evaporable getter (NEG) element13, whereas the top arm connects to (i) a combined NEG/ion pump module14, (ii) an ionisation gauge15, and (iii) an all-metal angle valve12 for attachment of external mechanical pumps. All metal vacuum pieces were electropolished and vacuum-glowed16 prior to assembly for reduction of H\({}_{2}\) outgassing as well as for reduction of magnetic permeability. Footnote 10: Edmund Optics, Inc., NJ/USA Footnote 11: PAS-PEEK GF30, glass-fibre reinforced polyether ether ketone, Fagile GmbH, Austria Footnote 12: VAT Vakumunretile AG, Switzerland Footnote 13: Capacitor Z200, SAES Getters S.p.A., Italy Footnote 14: Nextorr D200, SAES Getters S.p.A., Italy Footnote 15: Tungsten-filament Bayard-Alpert gauge, Agilent Technologies, Inc., CA/USA Footnote 16: Reuter Technologie GmbH, Germany. For microscope experiments, quantum gas samples produced in the Er-Dy apparatus main chamber are loaded into a single-beam optical dipole trap (ODT, 532 nm, around 15 W). The ODT focus is then shifted through the vacuum connection into the microscope cell by translation of a relay lens on an air-bearing linear stage (see Fig. 6). ### Indium sealing Forming a glass-to-metal connection for UHV applications is not trivial. For example, braze-alloy seals - as used by the majority of commercial manufacturers - require that the thermal expansion coefficients of the metal and the glass do not differ too strongly; otherwise temperature changes (which are not completely avoidable during the production process or bake-out) could lead to cracking of the glass. Therefore, e.g., connecting stainless steel to quartz glass as in our case would require to form a gradient-index transition,17 i.e., to use several different glasses between the two end materials to gradually match the expansion coefficients. However, gradient-index transitions are typically long (10 to 20 cm) and mechanically weak. Footnote 17: Larson Electronic Glass LLC, CA/USA; private communication. In our design process, two concerns disfavoured a gradient-index transition. First, the prolongation of the atom transport distance; second, the risk of breaking when Figure 5: _The microscope cell._ Manufactured from quartz glass (body) and fused silica (windows). Polished faces on left and bottom tube are for indium sealing. The viewports feature a broadband, broad-angle, gradient-index nanostructure coating on the inside. For scale, the hole distance in the breadboard below is 25 mm. Figure 6: _Vacuum connection from main chamber_ (left) _via steel cross_ (centre) _to quartz glass cell_ (right). The optical transport axis for the atoms is indicated in green. The magnetic shielding around the quartz glass cell is also shown. the cell and microscope are supported solely by a long, fragile gradient-index transition. We therefore decided to produce a direct quartz-glass-to-steel connection via indium sealing [93, 94, 95]. We also used this indium sealing procedure to form a glass-to-glass connection between our glass cell bottom tube and the bottom window once the objective was inserted. For us, indium sealing offered two main advantages. First, compared to adhesive glues, indium yields superior vacuum quality. Second, compared to, e.g., heat-diffusion bonding, it can be formed at room temperature - which is crucial since our objective may not be heated to above 90\(\,{}^{\circ}\)C.18 Footnote 18: Special Optics, Inc., NJ/USA; private communication. Indium is a soft metal with a low melting point (about 156\(\,{}^{\circ}\)C [96]), low permeability, and low outgassing rates [97]. Exposed to air it is covered by a thin, passivating oxide layer. When mechanically deformed, however, like when pressed onto a glass surface, this oxide layer breaks and fresh, reactive metal is exposed. Such activated indium wets and reacts with glass, forming a tight seal. In the following, we outline some cornerstones of our indium sealing procedure. _Indium material._ We used round indium wire19, activated in hydrochloric acid (37\(\,\)%) for around 1\(\,\)min straight before use, and connected two freshly cut, angled ends to an O-ring. Footnote 19: \(\mathcal{O}\) 0.05” \(\approx\) 1.3\(\,\)mm, In 99.995\(\,\)%, Indium Corporation of America Co., MD/USA _Surface finish._ Our metal contact surfaces were lathed (not milled), with the stroke direction concentric with the indium O-ring; our quartz glass contact surfaces were polished (optically clear, not matt) and remained uncoated in the sealing area. _Surface cleaning._ Polluted surfaces underwent usual UHV cleaning: (i) cleansed in water and detergent, (ii) rinsed with distilled water, (iii) blow dried, (iv) acetone-cleaned in ultrasonic bath, (v) rinsed with fresh acetone, (vi) air-dried. _Seal formation._ We gently and evenly squashed each indium O-ring between the two respective, clean contact surfaces by alternate tightening of screws around the clamping ring (visible on right and bottom of Fig. 7); a feeler gauge can help to monitor this process. We tightened the screws up to a few Nm torque, accompanied by careful visual inspection. After tightening of the screws, the indium seals typically had a thickness of around 0.3\(\,\)mm. Thereafter we performed helium leak tests; small leaks may be closed by (i) stronger squashing, (ii) warming up of the seal region, or (iii) simply leaving the indium flowing for some hours [98]. ### Baking and attachment When no more helium leaks were detected, our microscope assembly (still detached from the main experiment) underwent a gentle bake-out. For this, the quartz glass cell and the connection to the steel cross were enclosed in a clean metal container with resistive heating pads on the inside and insulation foam mats on the outside. The less sensitive parts of the steel cross were wrapped in layers of aluminium foil, heating wire, and insulating foam mats. Several thermocouples were used for temperature monitoring and to avoid gradients larger than a few degrees celis over the whole assembly. We linearly increased the temperature by a few degrees celis per hour up to 90\(\,{}^{\circ}\)C, then kept this temperature while monitoring the vacuum on a residual gas analyser.20 After about two weeks, all relevant gas traces (in particular, H\({}_{2}\)O) had fallen by several orders of magnitude and levelled off. We then linearly ramped down to room temperature by a few degrees celisus per hour. Note that the disk springs in our clamping rings serve a double purpose during the bake-out: first, they can take up force peaks when parts expand; second, they maintain the force when the indium softens and flows. After the bake-out, clamping screws were re-tigttened to about the same torque as before; later, no more tightening was necessary. Footnote 20: Stanford Research Systems, Inc., CA/USA After the bake-out was completed, the microscope assembly was flooded with dry argon gas, moved on its breadboard to our experiment table, and connected to the Er-Dy main chamber through a CF flange. After evacuation, no further baking was required. When the vacuum level in the microscope section had dropped to a level of around 10\({}^{-11}\)\(\,\)millibar, the UHV gate valve between microscope section and Er-Dy main chamber was opened; lifetimes of quantum gas samples measured in the main chamber remained unaffected by this. Figure 7 shows a photograph of the quartz glass cell with objective, after attachment to the Er-Dy apparatus. Figure 7: _The quartz glass cell with objective under UHV after attachment to the experiment._ ## III Magnetic environment In all ultracold-atom experiments the ability to set the magnetic field in a precise manner is absolutely essential, to define quantisation axes, to control level splittings, or to tune s-wave interactions at Feshbach resonances. For magnetic atoms such as erbium and dysprosium, where magnitude as well as direction of the magnetic field are of decisive importance for the stability and behaviour of a sample, this is even more the case. In the following, we describe our coil system for shaping magnetic fields in the quartz glass cell (Section III.1), as well as a ferromagnetic enclosure we have designed to shield the atoms from external magnetic fields and noise (Section III.2). ### Microscope coils The trade-off in the design process of the microscope cell coils was between achieving a maximum flexibility in the shaping and switching of magnetic fields, blocking a minimum of optical access, as well as constraints posed by our magnetic shield (spatially and in terms of material magnetisation). Our coil system consists of four pairs of coils which are held in place by a mechanical support structure. The individual pieces of the support structure are CNC-milled from a high-performance polymer,21 assembled around the quartz glass cell, and mounted directly to the cell flange. Two pairs of coils (slow vs fast) are along the vertical (i.e., gravity) direction (\(z\)), close to Helmholtz configuration; see Fig. 8. In contrast to the slow (high-field) pair of vertical bias coils, the fast vertical coil pair has only few windings and is intended for fast jumps in magnetic fields and for generation of RF radiation. Two identical, mutually orthogonal horizontal coil pairs are arranged along the diagonals of the transport direction (cf. Fig. 8) and allow to set the field in the \((x,y)\)-direction. All coils except the two bottom vertical coils could be wound prior to assembly. Footnote 21: PAS-PEEK GF30, glass-fibre reinforced polyether ether ketone, Faigle GmbH, Austria Even though FEM simulations (Section III.3) indicate that the magnetisation threshold of the innermost layer of our magnetic shielding will not be reached up to fields corresponding to more than a hundred Gauss in the cell centre, in order to avoid magnetisation effects it will be advisable to restrict the fields at the atom location to the few- or low tens-of-Gauss level. Table 4 summarises the characteristics of the coils, Fig. 8 shows the parts of the coil design as well as the corresponding calculated fields, gradients and curvatures. ### Magnetic shielding guidelines For very field-sensitive measurements it becomes necessary to protect the atomic sample from magnetic stray fields in the environment, such as the earth magnetic field or fields created by electric instrumentation. The protection strategy depends on the noise frequency. Low-frequency (tens of Hz down to DC) noise can be reduced either by active compensation22 or by enclosing \begin{table} \begin{tabular}{l l l l} \hline & vert. slow & vert. fast & horz. \\ \hline \(r_{0}\) (mm) & 59 & 64 & 22.5 \\ coil separation (mm) & 54 & 78 & 98 \\ windings & 56 & 4 & 16 \\ centre field (G/A) & 8.0 & 0.5 & 0.6 \\ inductancea (F) & 3\(\times 10^{-5}\) & 2\(\times 10^{-6}\) & 3\(\times 10^{-6}\) \\ resistanceb (m\(\Omega\)) & 700 & 55 & 80 \\ \hline \end{tabular} Footnote a: Analytical approximation for ideal Helmholtz pairs. b: For 1-mm\({}^{2}\) copper wire. \end{table} Table 4: _Coil specifications for the microscope chamber._ Figure 8: _Magnetic field coil system for the microscope chamber._ The fields are calculated by direct integration of the Biot–Savart law for 1 A of current, respectively. Note that in this figure we label the respective _local coil symmetry axis_ by \(z\). Slow vertical bias coils (_left column_), fast vertical bias or RF coils (_middle column_) and horizontal coils (_right column_). _First row_: Coil geometry. _Second row_: Field on axis (note that we plot the offset with respect to the atom position). _Third row_: Field gradient on axis. _Fourth row_: Field curvature on axis. the experimental chamber in a passive shielding. There are two types of passive magnetic shields, (i) superconducting shields at cryogenic temperature, and (ii) soft-ferromagnetic shields. Since ferromagnetic shields work at room temperature, they are much easier to integrate into quantum gas experiments [99, 100, 101]. The working principle of ferromagnetic shields is flux shunting, i.e., the shield has a high relative permeability and thus 'guides' the field lines around the protected volume. Among the most commonly used soft ferromagnetic materials are Mu-metal and Supra-50. Of these two, Mu-metal has the higher relative permeability and Supra-50 has the higher saturation flux density (see Table 5). For fast oscillating fields (tens of Hz and higher), eddy current cancellation ('skin effect') becomes the dominant shielding process. This effect is the strongest for good conductors such as copper, but in practice also ferromagnetic DC shields typically provide a sufficient AC shielding [102]. The crucial task is therefore to find a good shielding for slowly-varying fields. For the Er-Dy experiment, we designed a passive multilayer ferromagnetic shield which will be surrounded by an additional, active field stabilisation system. The performance of a magnetic shield is characterised by the shielding factor \[S=B^{\prime}/B, \tag{1}\] where \(B\) (\(B^{\prime}\)) is the field at the centre point in presence (absence) of the shield. Estimated requirements for our future microscope experiments suggested to target a shielding factor \(S\sim 10^{3}\). Basic analytical estimates (see Appendix) can already provide important guidelines for designing a shield, some of which we summarise below: _Geometry._ The best shielding performance is obtained for a spherical shell, followed by a cylinder (intermediate) and box (inferior) [102]. Shields that 'approximate a sphere' (i.e., have a similar characteristic length along all three spatial directions) show better performance. _Size._ At fixed wall thickness, smaller shields are superior (Eq. 5). _Multilayer shields._ Nesting multiple shields with thin walls is better than a single shield with a thick wall - 'it helps to shield the shielding' (Eq. 8). _Discontinuities._ Avoid discontinuities (cuts, improper welds, etc.) to ensure unhindered guiding of magnetic flux. If discontinuities are unavoidable (e.g., for an assembly of multiple parts), the parts should have sufficient overlap and mechanical contact. \begin{table} \begin{tabular}{l l l} \hline \hline material & \(\mu_{r}\) & \(B_{s}\) (G) \\ \hline Mu-metal & \(4.7\times 10^{5}\) & \(0.75\times 10^{4}\) \\ Supra-50 & \(2\times 10^{5}\) & \(1.5\times 10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 5: _Relative permeability, \(\mu_{r}\), and saturation flux density, \(B_{s}\), for the Er–Dy shield materials [103]._ Figure 9: _FEM simulation of shielding efficiency._ (a–c) Vector fields (magenta arrows) of \(\vec{B}\) when an external homogeneous magnetic fields is applied along the one of the three spatial directions, respectively. \(\vec{e}_{x}\) is along the transport axis, \(\vec{e}_{z}\) is along the cylinder axis. The strength of magnetic flux inside the metal is colour-coded, increasing from blue to red. (d–f) Calculated magnetic flux density along the \(x\)-, \(y\)-, and \(z\)-axis, plotted for the same external fields as in the top row (note that axes pass through shield openings). _Holes._ Avoid openings - they lead to flux leakage. If an opening is unavoidable, it can help to add a collar (exponential vs cubic suppression; Eqs. 3-4). ### Microscope shield design Our guidelines discussed above led us to favour a compact multilayer cylindrical shield. In particular, 'compact' means that the shield fits only the cell and magnetic coils, such that, e.g., all optomechanical components need to be placed outside. In our design process, we first drafted a prototype design for a four-shell shielding that complied with all spatial constraints posed by our experiment. Each shell consists of a bottom and a top half, allowing to assemble the shield around the microscope cell. Second, we meshed the detailed prototype CAD model - including holes for optical access, cables, screws, etc. - and numerically analysed it using a finite-element method (FEM). The simulations allowed us to study and refine our design with respect to two major factors: passive shielding performance and magnetic saturation caused by the microscope coils inside. _Shielding efficiency._ To study of the shielding efficiency, we placed our model in an external, static, homogeneous magnetic field pointing along the three spatial directions.23 In our case, the two most important improvements upon this study were (i) to maximally downsize the hole for the vacuum connection and to give it a collar, and (ii) to conically reduce the hole diameters for the side windows from outside to inside (at constant NA). Shielding efficiency simulation data for the revised design is shown in Fig. 9. Footnote 23: Note that for a more conservative simulation of the shielding performance it could be helpful to artificially add small air gaps between shield pieces. In this way one takes into account that – due to manufacturing tolerances – pieces are typically not perfectly flush, which reduces the flux guiding. _Avoiding saturation._ We simulated the effects of the microscope coils inside our magnetic shielding. In our case, the two most important improvements following this study were (iii) to change the material of the innermost shell from Mu-metal to Supra-50 and, wherever needed, (iv) rounding of edges as well as adjustment of hole patterns and hole diameters to reduce local flux focusing. Some results of the saturation analysis for the revised design are shown in Fig. 9. Figure 10: _FEM simulation of microscope coil effects._ Left column (\(\mathrm{a},\mathrm{c}\)) shows \(B\) on the innermost shield layer (colour code); small magenta arrows indicate flux direction. Right column (\(\mathrm{b},\mathrm{d}\)) shows the flux density \(B\) along spatial axes (note that all axes pass through shield openings). Top row (\(\mathrm{a},\mathrm{b}\)) is for \(10\,\mathrm{A}\) of current in the slow vertical coil pair (along \(\vec{e}_{z}\)). Bottom row (\(\mathrm{c},\mathrm{d}\)) is for \(10\,\mathrm{A}\) in one of the (diagonal) horizontal coil pairs (direction \(\vec{e}_{+}\), where \(\vec{e}_{\pm}=(\vec{e}_{x}\pm\vec{e}_{y})/\sqrt{2}\)). In (d), the blue, green, and red line correspond to the flux density \(B\) along the coil axis (\(\vec{e}_{+}\)), along the perpendicular axis (\(\vec{e}_{-}\)), and along \(z\), respectively. Our revised shield design was manufactured by Magnetic Shields Ltd, UK (Fig. 11). All shells are held in position by nylon spacers. After fabrication, the shells underwent a heat treatment (4 hours at 1150 \({}^{\circ}\)C) for magnetic annealing. The experimentally measured shielding factors are \(>10^{3}\) in both axial and transverse direction, meeting our initial target. ## Summary & Outlook To conclude, we have designed and constructed a quantum gas microscopy apparatus for the highly magnetic lanthanoid elements erbium and dysprosium. Our microscope objective is non-magnetic, non-conducting, and features a high NA as well as a millimetre-scale working distance. The objective is mounted in vacuum, inside a glue-free, nanotexture-coated quartz glass cell. This combination offers a unique flexibility in terms of laser wavelengths that can be delivered onto the atoms, from \(<380\,\mathrm{nm}\) to over 1700 nm. We have developed and tested procedures and mechanical part designs for the formation of compact glass-glass and glass-metal (in particular: quartz glass to stainless steel) UHV seals using indium wire, which might inspire new routes in UHV apparatus design. In terms of magnetic field control, we have designed, simulated and manufactured a versatile coil system which allows an accurate tuning of magnetic field in all spatial directions but impacts the optical access as little as possible. Further, we have designed and optimised a compact, four-layer ferromagnetic magnetic shield which can be assembled around quartz glass cell and coils. The shield suppresses external magnetic field noise by a factor of \(>10^{3}\) in all spatial directions and will facilitate conducting high-precision measurements. Work towards getting our setup ready for dipolar lattice experiments is currently underway. Important next steps include (i) the implementation of optical lattices, (ii) lattice loading from the transport ODT, (iii) implementation of the fluorescence excitation and detection systems as well as (iv) the development of - possibly lattice- and/or cooling-free - imaging procedures. Once fully operational, we believe that our microscope will make important contributions to the understanding of the complex physics of dipolar quantum many-body systems. ## Conflict of Interest Statement The authors declare that their research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ## Author Contributions This work is based on a chapter of MS's dissertation, presented to the University of Innsbruck in July 2021 [104]. MS designed, simulated, assembled, and tested the microscope objective, UHV setup, and magnetic shield, with help from all team members. MG offered practical advice and general expertise during the design process of the microscope. MJM and FF lead the Er-Dy experiment and supervised the development of the microscope apparatus. MS wrote the manuscript with valuable input from all other authors. ## Funding This study received support from the European Research Council through the Advanced Grant DyMETEr (No. 101054500) and the QuantERA grant MAQS by the Austrian Science Fund (FWF, No. I4391-N). M.S. acknowledges funding from the FWF via DK-ALM (No. W1259-N27). ## Acknowledgements We would like to thank all past and present members of the Er-Dy team, especially Philipp Ilzhofer, Gianmaria Durastante, Arno Trautmann, Claudia Politi, Matthew A. Norcia, Lauritz Klaus, and Eva Casotti. We thank our group's Erbium, Tweezer, and Theory team, as well as the whole Innsbruck Ultracold Atoms community. We are particularly indebted to Emil Kirilov, Innsbruck, for valuable discussions during the development process of indium handling and sealing procedures. We thank Aaron Krahn, Anne Hebert, Gregory A. Phelps, Sepehr Ebadi, and Susannah Dickerson from the Harvard Erbium team for fruitful exchange. MS is particularly grateful for an insightful research stay at Harvard. We acknowledge Dimitrios Trypogeorgos, INO-CNR and Universita di Trento, for helpful discussions during the early design process of Figure 11: _Four-shell ferromagnetic shield._ (a) Bottom (_left_) and top half (_right_, flipped upside-down) of the magnetic shield. (b) The fully assembled shield. The collar on the side increases the shielding efficiency over the opening for the vacuum connection along the transport direction. our magnetic shielding. We would further like to thank the IQOQI mechanical workshop team for their expert advice and manufacturing of many pieces necessary for the construction of our microscope setup. ## Data Availability Statement The data sets presented in this work are willingly available upon reasonable request.
2302.10298
Quantum Machine Learning hyperparameter search
This paper presents a quantum-based Fourier-regression approach for machine learning hyperparameter optimization applied to a benchmark of models trained on a dataset related to a forecast problem in the airline industry. Our approach utilizes the Fourier series method to represent the hyperparameter search space, which is then optimized using quantum algorithms to find the optimal set of hyperparameters for a given machine learning model. Our study evaluates the proposed method on a benchmark of models trained to predict a forecast problem in the airline industry using a standard HyperParameter Optimizer (HPO). The results show that our approach outperforms traditional hyperparameter optimization methods in terms of accuracy and convergence speed for the given search space. Our study provides a new direction for future research in quantum-based machine learning hyperparameter optimization.
S. Consul-Pacareu, R. Montaño, Kevin Rodriguez-Fernandez, Àlex Corretgé, Esteve Vilella-Moreno, Daniel Casado-Faulí, Parfait Atchade-Adelomou
2023-02-20T20:41:31Z
http://arxiv.org/abs/2302.10298v1
# Quantum Machine Learning hyperparameter search ###### Abstract This paper presents a quantum-based Fourier-regression approach for machine learning hyperparameter optimization applied to a benchmark of models trained on a dataset related to a forecast problem in the airline industry. Our approach utilizes the Fourier series method to represent the hyperparameter search space, which is then optimized using quantum algorithms to find the optimal set of hyperparameters for a given machine learning model. Our study evaluates the proposed method on a benchmark of models trained to predict a forecast problem in the airline industry using a standard HyperParameter Optimizer (HPO). The results show that our approach outperforms traditional hyperparameter optimization methods in terms of accuracy and convergence speed for the given search space. Our study provides a new direction for future research in quantum-based machine learning hyperparameter optimization. **KeyWords:** Quantum Computing, Quantum Machine Learning, Fourier Series, QML, Hyperparameters, Interpolation, Regression model, quantum search problem ## I Introduction In recent years, _Machine Learning_ (ML) algorithms have successfully solved various tasks, reaching state-of-the-art in multiple areas [1, 2, 3, 4, 5, 6, 7]. This is not only due to the development of new algorithms (more powerful and prominent), but also the selection of good hyperparameters contributed to this advance. Performing machine learning on large datasets is a resource-intensive task, but _hyperparameter tuning problem_[8, 9] increases those resource requirements by orders of magnitude. Despite advances in hyperparameter optimization, the precise selection of hyperparameters remains a challenge regarding computational complexity and finding the best approach. Therefore, the scientific community has been working to discover efficient techniques to solve this challenge [10, 11, 12, 13, 14, 10]. Due to its stochastic nature and great computational capacity, _quantum computing_ is a great bet and approach to take into account in the efficient search problems of hyperparameters. In this article, we propose a hybrid (quantum + classical) approach algorithm to facilitate this task by fitting and executing Fourier-regression models on a large scale on any type of hyperparameters in the most efficient way possible. Our proposed quantum algorithm is based on this work [15], which allows us to find the best hyperparameters using a quantum approach, given a good enough representation of the search space (or hyperparameter space) represented by the model hyperparameters related to a metric. The model was trained based on the results offered by multiple search methods, such as _Grid search_, _Random Search_ and _Bayes-Based Search_ for a given training set from a Vueling forecast problem dataset. Three-way cross-validation considers each search algorithm's average scores during the training process. The proposed quantum method provides the best score in time compared to classically, assuming that the hyperparameter/score input search space is highly nonlinear and might not be continuous. The trade-off between speed and precision depends on the number of features to evaluate. We use _Pennylane_ framework and _AWS Braket_ to validate our algorithm. The document is organized as follows. Section (II) presents our primary motivation behind this work. Section (III) shows previous work on hyperparameters tuning. Section (IV) illustrates the quantum machine learning framework and its connection to the hyperparameters tuning problems. In the section (V), we propose the scenarios and the models we will implement to tackle the hyperparameters tuning problems. Section (V.3) proposes our model, considering our primary reference. Section (VI) delivers the obtained results. Section (VI.4) discusses practically relevant results and their implications. Finally, this paper ends with conclusions and future work in Section (VII). ## II Motivation and Problem Statement The aviation industry is highly competitive, and one of the key factors for airlines to remain profitable is to maximize revenues while minimizing costs. To that effect, many of them are adopting advance analytics solutions to transform the company towards that goal. Solutions range from classical machine learning problems such as a predictive maintenance [16] or dynamic pricing [17], to optimization problems such as network optimization [18; 19]. Nonetheless, optimization problems are computationally expensive and, in most cases, classical computing falls short in yielding a reasonable processing time. This also applies to machine learning hyperparameter optimization. Vueling has been implementing and using production state-of-the-art hyperparameter tuning algorithms to achieve high-accuracy results. However, the company is aware of the potential advancements in the field of quantum computing and wishes to stay ahead of the curve. With this in mind, Vueling is proactively exploring ways to incorporate quantum technology into its technology stack to remain at the forefront of its industry and ensure long-term success. One of the critical taks in the airline industry is managing passenger no-shows. A no-show occurs when a passenger who has purchased a ticket fails to show up for the light. The data used in this work is a proprietary Vueling no-show dataset, that contains crucial information for understanding and predicting passenger behavior. The dataset contains as target data the number of _no-shows_ per flight and contains 252 183 datapoints with 42 features each, such as flight information (origin, destination, time of flight, etc.), seat reservation status, \(mean\) no-shows for different time windows on a given route, number of tickets at different price points as well as other proprietary information. The dataset and the predicting models created allow the company to track performance over time, evaluate the effectiveness of different strategies, and make data-driven decisions that can improve overall performance and profitability. The scope of this work is not to solve the specific no-show prediction problem but to use real-life data to implement a Quantum Fourier hyperparameter tuning algorithm that rivals traditional techniques. In short, showing an efficient and useful alternative for such optimization problems. ## III Work context Machine learning is used in various fields and areas, allowing computers, among other uses, to identify patterns in large data and make predictions. Such a process involves, after all, determining the appropriate algorithm based on a sample space and obtaining an optimal model architecture by adjusting some control variables from its learning process known as _Hyperparameters_ (HP). Thus, these hyperparameters must be tuned to adapt a machine learning model to different problems and datasets. Selecting the best hyperparameters for machine learning models directly impacts model performance. It often requires a thorough understanding of machine learning algorithms and appropriate hyperparameter optimization techniques. Although there are several automatic optimization techniques, they have different advantages and disadvantages. In contrast, _parameters_ are internal to the model. They are learned or estimated solely from the data during training since the algorithm attempts to understand the mapping between input features and target. Model training usually starts with initializing the parameters to random values. As training/learning progresses, the initial values are updated using an optimization algorithm (e.g., gradient descent). The learning algorithm continually updates the parameter values as training continues, but hyperparameter values remain unchanged. At the end of the learning process, the model parameters constitute the model itself. These steps inspired the scientific community to develop a research field known as _Hyperparameters Optimization_ (HPO) [12; 14]. The primary aim of this field is to automate the hyperparameter tuning process and enable users to apply machine learning models to practical problems (efficiently, reducing computation time and improving performance). In our previous works [20; 21; 22], we proposed optimization algorithms that can be used to solve HPO problems with continuous functions, discrete functions, categorical variables, convex or non-convex functions, etc. Next, we review some of them to highlight their limitations to find solutions. _Grid search_ (GS) [23; 24; 25] is a _decision-theoretic approach_[26] that exhaustively searches for a fixed domain of hyperparameter values. GS is one of the most used strategies due to its simplicity of implementation. This algorithm discretizes the search space to generate a set of possible hyperparameter configurations. Then, it evaluates each of these configurations and selects the one with the highest performance. GS's main limitation is that it takes time and is affected by the dimensionality curse [24]. Therefore, it is not suitable for a large number of hyperparameters. Moreover, GS often needs help finding the global optimum of continuous parameters because it requires a predefined and finite set of hyperparameter values. It is also unrealistic for GS to identify continuous integer hyperparameter optima with limited time and resources. Therefore, compared to other techniques, GS is only effective for a small number of _categorical hyperparameters_[24]. _Random search_ (RS) [24; 27; 28] is a variant of Grid Search and attempts to solve the above problem by randomly sampling configurations of the search space. As it does not have an implicit end condition, the number of sampled structures to be evaluated will be chosen. RS samples the search space and evaluates sets from specified probability distributions. In short, it is a technique in which the hyperparameters' random combinations are used to find the best solution for the model under consideration. RS is more efficient than GS and supports all domains of hyperparameters. In practical applications, using RS to estimate randomly chosen hyperpa rameter values helps analysts explore an ample search space. However, since RS does not consider the results of previous tests, it may include many unnecessary evaluations that reduce its performance. _Hyperband_[29] is considered an improved version of Random Search [9]. Hyperband balances model performance and resource usage to be more efficient than RS, especially with limited time and resources [30]. However, GS, RS, and Hyperband have a significant limitation: they treat each hyperparameter independently and do not consider hyperparameter correlations [13]. Therefore, they will be inefficient for ML algorithms with conditional hyperparameters, such as _Support Vector Machine_ (SVM) [31], _Density-Based Spatial Clustering of Applications with Noise_ (DBSCAN) [32; 33], and _logistic regression_[34; 35]. _Gradient-based algorithms_[36; 37] are not a predominant choice for hyperparameter optimization because they only support continuous hyperparameters and can only find a local, not global, optimum for non-convex HPO problems. Therefore, gradient-based algorithms can only optimize specific hyperparameters, such as the learning rate in _Deep Learning_ (DL) models [38]. Based on their surrogate models, the _Bayesian optimization_ (BO) [39] models, BO based on _Gaussian Process_ (GP) [40; 41] and its derivatives, are divided into three different models. BO algorithms determine the next hyperparameter value based on previously evaluated results to reduce unnecessary evaluations and improve efficiency. BO-GP mainly supports continuous and discrete hyperparameters but does not support conditional hyperparameters [30]. At the same time, _Sequential Model Algorithm Configuration_ (SMAC) [30] and _Tree-Structured Parzen Estimator_ (BO-TPE) [42] can handle categorical, discrete, continuous, and dependent hyperparameters. SMAC performs best using many categorical and conditional parameters or cross-validation, while BO-GP performs best with only a few continuous parameters. BO-TPE preserves some dependent relationships, so one of its advantages over BO-GP is its native support for some conditional hyperparameters [30]. The _Metaheuristic algorithms_[43], including _Genetic Algorithm_ (GA) [44] and _Particle Swarm Algorithm_ (PSO) [45], are more complex than other HPO algorithms but they often work well for complex optimization problems. They support all hyperparameters and are particularly efficient for large configuration spaces because they can obtain near-optimal solutions in several iterations. However, GA and PSO have their advantages and disadvantages in practice. The main advantage of PSO is that it can support large-scale parallelization and is exceptionally suitable for continuous and conditional HPO problems. At the same time, GA runs sequentially, which makes parallelization difficult. Thus, PSO often runs faster than GA, especially for large configuration spaces and data sets. However, good population initialization is essential for PSO; otherwise, it may converge slowly and only identify a local optimum rather than a global one. Regardless, the impact of a good population initialization is less significant for GA than for PSO [45]. Another limitation of GA is that it introduces additional hyperparameters, such as its population size and mutation rate [44]. _Quantum computing_[46; 20; 47] is a field of computation that uses quantum theory principles. A quantum computer is a stochastic machine that uses the laws of quantum mechanics to do computation. Due to the characteristics of HPO problems, quantum computing is an excellent ally to seek a paradigm shift and accelerate or find an efficient strategy to apply to them. There are focus on using quantum computing in optimizing hyperparameters. In [48], the authors employed a quantum genetic algorithm to address the hyperparameter optimization problem. The algorithm is based on qudits instead of qubits, allowing more available states. Experiments were performed on two _MNIST_ and _CIFAR10_ datasets, and results were compared against classic genetic algorithms. In [11], the authors presented a quantum-inspired hyperparameter optimization technique and a hybrid quantum-classical machine learning model for supervised learning. They compared their hyperparameter optimization method to standard black box objective functions. They observed performance improvements in the form of reduced expected execution times and suitability in response to growth in the search space size. They tested their approaches in a car image classification task and demonstrated a large-scale implementation of the hybrid quantum neural network model with tensor train hyperparameter optimization. Their tests showed a qualitative and quantitative advantage over the corresponding standard classical tabular grid search approach used with a _ResNet34_ deep neural network. The hybrid model achieved a classification accuracy of 0.97 after 18 iterations, while the classical model achieved an accuracy of 0.92 after 75 iterations. This last work had an exciting approach that only contemplates discrete hyperparameters. We have also found some exciting work dealing with HPO [49; 50; 48; 10]. In the latter [50], the authors took the first steps toward _Automated Quantum Machine Learning_ (AutoQML). They proposed a concrete problem description and then developed a classical-quantum hybrid cloud architecture that allows for parallelized hyperparameter exploration and model training. As an application use-case, they train a _quantum Generative Adversarial neural Network_ (qGAN) to generate energy prices that follow a known historic data distribution. Such a QML model can be used for various applications in the energy economics sector. The SWOT of the hyperparameter optimization is summarized in Table 1. After exploring the state of the art of classical and quantum hyperparameter tuning, we have yet to find a generic model that solves the domain's types of hyperparameters and reduces the search time for said hyperparameters in this quantum era. ## IV Quantum Machine Learning Quantum machine learning (QML) [51, 52, 53] explores the interplay and takes advantage of quantum computing and machine learning ideas and techniques. Therefore, quantum machine learning is a hybrid system involving both classical and quantum processing, where computationally complex subroutines are given to quantum devices. QML tries to take advantage of the classical machine learning does best and what it costs, such as distance calculation (inner product), passing it onto a quantum computer that can compute it natively in the Hilbert vector space. In this era of large classical data and few qubits, the most common use is to design machine learning algorithms for classical data analysis running on a quantum computer, i.e., quantum-enhanced machine learning [53, 54, 55, 56, 57, 58, 59]. _Quantum circuits_ are mathematically defined as operations on an initial quantum state. Quantum computing generally makes use of quantum states built from qubits, that is, binary states represented as \(\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\). Their number of qubits \(n\) commonly defines the states of a quantum circuit, and, in general, the circuit's initial state \(\ket{\psi}_{0}\) is the zero state \(\ket{0}\). In general, a quantum circuit implements an internal unit operation \(U\) to the initial state \(\ket{\psi}_{0}\) to transform it into the final output state \(\ket{\psi}_{f}\). This gate \(U\) is wholly fixed and known for some algorithms or problems. In contrast, others define its internal functioning through a fixed structure, called Ansatz[60] (_Parameterized Quantum Circuit_ (PQC)), and adjustable parameters \(\theta\)[61]. Parameterized circuits are beneficial and have interesting properties in this quantum age since they broadly define the definition of ML and provide flexibility and feasibility of unit operations with arbitrary precision [62, 63, 64]. Figure (1) depicts the concept of hybrid computing (quantum + classical), which characterizes the NISQ era. This takes advantage of quantum computing's capacity to solve complex problems and the experience of classical optimization algorithms (COBYLA[65], SPSA[66], BFGS[67], etc.) to train variational circuits. Classical algorithms are generally an iterative scheme that searches for better candidates for the parameters \(\theta\) at each step. The value of the hybrid computing idea in the NISQ era is necessary because it allows the scientific community to exploit both capacities and reaps the benefits of the constant acceleration of the oncoming quantum-computer development. Furthermore, learning techniques can be improved by embedding information (data) into the variational circuit through the quantum gate \(U\)[68, 53]. The _Variational Quantum Circuit_ (VQC) [69, 70] consists of a quantum circuit that defines the base structure similar to neural network architecture (Ansatz), while the variational procedure can optimize the types of gates (one or two-qubit parametric gates) and their free parameters. The usual supervised learning processes within quantum machine learning can be defined as follows: * _Quantum Feature Map_: It is the data preparation. In the literature, this stage is recognized as State preparation. * _The Quantum model_: It is the model creation. In the literature, it is recognized as unitary evolution. * _The classical error computation_ : It is the stage of Computing the error where the model best approximates the input set; in machine learning, this stage is known as the prediction. ## V Implementation As aforementioned, our proposal is based on HPO. Math \begin{table} \begin{tabular}{|c|c|c|c|} \hline **HPO Methods** & **Strengths** & **Limitations** & **Time Complexity** \\ \hline \hline \multirow{2}{*}{GS} & simple & Time-consuming & \multirow{2}{*}{\(O(n^{k})\)} \\ & & Only efficient with categorical & \\ \hline \multirow{2}{*}{RS} & More efficient than GS & Not consider previous result. & \\ & Enable parallelination & Not efficient with conditional & \\ \hline \multirow{2}{*}{Gradient-based} & \multirow{2}{*}{Fast convergence for continuous HPs} & Only support continuous HPs, & \multirow{2}{*}{\(O(n^{k})\)} \\ & & May only detect local optimums. & \\ \hline \multirow{2}{*}{BO-GP} & \multirow{2}{*}{Fast convergence for continuous HPs} & Poor capacity for parallelination. & \multirow{2}{*}{\(O(n^{3})\)} \\ & & Not efficient with conditional HPs. & \\ \hline \multirow{2}{*}{Hyperband} & \multirow{2}{*}{Enable parallelination} & Not efficient with conditional HPs, & \multirow{2}{*}{\(O(nleqn)\)} \\ & & Require subsets with small budgets & \\ \hline \multirow{2}{*}{GA} & Efficient with all types of HPs & Poor capacity for parallelination. & \\ & Not require good initialization & & \\ \hline \multirow{2}{*}{PSO} & Efficient with all types of HPs & \multirow{2}{*}{Require proper initialization.} & \multirow{2}{*}{\(O(n^{2})\)} \\ & Enable parallelination & & \\ \hline \end{tabular} \end{table} Table 1: The Benchmark of the standard HPO algorithms (\(n\) is the number of hyperparameter values and \(k\) is the number of hyperparameters) ematically, we can formulate it as follows: Since a model's performance on a validation set can be modeled as a function \(f:X\rightarrow\mathbb{R}\) of its hyperparameters \(\vec{x}\in X\). With \(X\) the hyperparameters' space and where \(f\) can be any error function, such as the _RMSE_ in a regression problem or the _AUC Score_ for a classification problem. The problem that the HPO must solve is to find \(\vec{x}\) such that \(\vec{x}\in\operatorname*{argmin}_{\vec{x}\in X}f(\vec{x})\). Formally we can define our problem as follows. Let \(f(\vec{x})\) be our objective function with \(\vec{x}\) as the vector of all the classical input hyperparameters, and we are willing to find the best combination by writing it down as: \[\vec{x}^{*}\approx\arg\min_{\vec{x}\in X}f(\vec{x}), \tag{1}\] In the optimization context, \(f(x)\) is the objective function to be minimized, where \(\vec{x}^{*}\) is the hyperparameter configuration that produces the optimum value of \(f(\vec{x})\), and a hyperparameter \(\vec{x}\) can take any value in the search space \(X\). The first step of our algorithm will be to find the function \(f(\vec{x})\) that generalizes our data. So, let us define our quantum model as follows: \[f(\vec{x}):=\bra{0}U^{\dagger}(\vec{x},\beta,\vec{\theta})\sigma_{x}U(\vec{x},\beta,\vec{\theta})\ket{0}, \tag{2}\] Where \(\sigma_{z}\) is our observable, \(U(\vec{x},\beta,\vec{\theta})\) the parameterized circuit with the input data \(\vec{x}\), \(\beta\) our scaling factor and \(\vec{\theta}\), the parameterized variable. From this point and considering equation (1), we will find the minimum of (2). In the next step, according to figure (4), let \(\beta\) and \(\vec{\theta}\) be the parameters that define our function \(f(\vec{x})\) from equation (2). In this stage, we consider these parameters as constant inputs (we do not vary them). Now let \(V_{\vec{\theta},\beta}(\vec{x})\) be the new circuit that can be an instantiation of the previous circuit (\(U(\vec{x},\beta,\vec{\theta})\)), and let us execute the gradient of the said quantum circuit (\(f(\vec{x})\)). The outcome of these operations will yield the optimal value of \(f(\vec{x})\), and the arguments for this operation will be the best hyperparameters we look for. The variable \(\vec{x}\) will have the same dimension as the best hyperparameters we are finding. From the _variational principle_[73], the following equation \(\bra{V_{\beta,\vec{\theta}}(\vec{x})}_{\psi(\overrightarrow{\gamma})}\geq \lambda_{i}\) can be reached. With \(\lambda_{i}\) as eigenvector and \(\bra{V_{\beta,\vec{\theta}}(\vec{x})}_{\psi(\overrightarrow{\gamma})}\) as the expected value. In this way, the VQC (Figure (1)) finds (3) such an optimal choice of parameters \(\overrightarrow{\gamma}\), that the expected value is minimized and that a lower eigenvalue is located. \[\bra{V_{\vec{\theta},\beta}(\vec{x})}=\bra{\psi\left(\gamma\right)}\ket{V_{ \vec{\theta},\beta}(\vec{x})}\ket{\psi\left(\gamma\right)} \tag{3}\] Where \(V_{\vec{\theta},\beta}(\vec{x})\approx f(\vec{x})\). ### Dataset Generation The proposed quantum hyperparameter process is summarized in three stages. The first stage is the dataset generation, the second is finding the function that represents quantum-based Fourier regression, and the third is finding the minimum of the regression function. This section deals with dataset generation and criteria--the following subsection details all the experiment steps. To implement the proposed algorithm, we generate one dataset per machine learning model regarding the original Vueling dataset considering its hyperparameters evaluating different search methods, as well as _Grid search_, _Random Search_ and _Bayes-Based Search_. We train the \(N\) ML models with the Vueling dataset to obtain a new reduced dataset. Three-way cross-validation is used during the training process. Therefore, for each set of hyperparameters, the model is trained three times and keeps the average of the three as the value to be predicted by the quantum system. If more precision is needed, the generated dataset should have more elements. Instead of precision, whether the speed is prioritized at the expense of its precision, the number of features of the dataset should be considerably less than the number of features of the original dataset. The resulting dataset size is reduced so as not to end up doing a quantum Grid Search. The new databases are stored in _dataframes_ using Pandas. Figure 1: _Variational Quantum Algorithm_ (VQC) working principle based on the quantum variational circuit. The quantum circuit computes the objective function, and the classical computer computes the circuit parameters. We can use this model to find the minima or the maxima of a given parameterized function that is our quantum circuit. Our quantum circuit will be separated into two large blocks: the _Feature Map_ and the _Variational Quantum Circuit_[71]. ### Experiments Steps The scenario is given in Figure (4), and the proposed process to validate our experimentation is summarized as follows: 1. From the database that contains \(252,183\) datapoints with \(42\) features each, we generate a set (\([n]\)) of randomly chosen hyperparameters. 2. We train \(n\) models with these hyperparameters. Get \([n]\)_scores (accuracy, \(r^{2}\), MSE, etc.)_ and record the dataset. 3. We transform the categorical hyperparameters into linear variables according to figure (2). 4. We transform this dataset into a Quantum Space for the native quantum algorithm. 5. We find the best \(U(\vec{x},\beta,\vec{\theta})\) according to figure (3). 6. We train a Quantum-Coding System plus the best \(U(\vec{x},\beta,\vec{\theta})\) found to the predict best hyperparameters. 7. We get the best set of hyperparameters by using Quantum Computer. 8. We go back to the classical model and re-train it with the best hyperparameters. Classically there is a process \(Classical\_process\) with its executing time \(T_{c}\), and from quantum computing, there is a process \(Quantum\_process\) with its executing time \(T_{q}(translation)+T_{q}(solution)\). We will only get a time advantage if: \(T_{c}>[T_{q}(translation)+T_{q}(solution)]\). Why \(T_{q}(translation)\)? because quantum computers only execute quantum data, so we must translate all the data we got from the classical domain into the quantum one. To test the proper functioning of our algorithm, we design five cases (\(A\), \(B\), \(C\), \(D\), and \(E\)), respectively, equivalent to the layer number of the variational algorithm (1, 2, 3, 4, and 5). ### Our Quantum Model Based on [15], we propose the hybrid model and strategy from figures (2), (3) and (4) to tackle the generic HPO and reduce the search and analyzing time. ## VI Results Figures (5) to (7) show the results of the process we follow to achieve our goal. It can be observed in Figure (6) how the quantum model is molded into the shape of the data, defining a mesh that represents the continuous function, which is used to find the best hyperparameters. We have tested the proposed algorithm and steps with various models and techniques and generated several databases and experiments. The detailed results of each experiment can be seen in the tables (III), (IV), (V), and Figure 3: We use this model based on _Fourier series_ in quantum machine learning [15]. Depending on some scaling \(\beta\) parameters and the input data \(\vec{x}\), the Feature Map will be in charge of coding our data. We rely heavily on the fact that the _Feature Map_ (\(F(\vec{x},\beta)\)) must be variational, that the data’s loading is repeated in all the layers, and that the variational circuit (\(V(\vec{\theta})\)) searches with the help of the input parameters for the best function within the space of functions that defines the capacity of the variational circuit (\(U(\vec{x},\beta,\vec{\theta})=F(\vec{x},\beta)V(\vec{\theta})\)). This configuration will efficiently approximate our given dataset to a continuous function \(f(\vec{x})\). Figure 2: This block diagram presents our approach of a generic hyperparameter tuning with quantum computing-based gradient descent or _Adam optimizer_[72]. The first block allows us to normalize all inputs to \(0\) and \(\pi\). Then for the categorical variables, to save on the number of qubits, we encode the categorical variables in binary and map them over the number of qubits that our model has. The next block allows us to pack all the input variables into a single vector. Each variable is equivalent to a dimension of our vector. If the categorical variable is coded with \(m\) dimensions, these \(m\) dimensions will be directly mapped to the vector. Having the input vector, we now embed the data in the quantum computer thanks to our _Feature Map_ function. Said function will map our input variable continuously, with the help of our parameterized gates, \(RX\), \(RY\), and \(RZ\). From here, we apply our quantum circuit. After the measurement, we must undo the previous operations in reverse order to recover our input hyperparameters. (VI). In addition, we have carried out some comparative studies that can be seen in Figures (8) to (12). ### 2 Hyperparameters From the database that contains \(252,183\) datapoints with \(42\) features each, we generate the database for the _Histogram Gradient Boosting Regression_. The result of our algorithm by using _Histogram Gradient Boosting Regression_, precisely a _lightGBM_ implementation and executing the following tests will be shown in three hyperparameters' configurations (2, 3, and 4). In the case of _2 Hyperparameters_: _learning_rate and max_iteration_, we obtain the following outcomes: * Testing classically with the full database with \(240\) training samples took \(6\) minutes \(6\) seconds, with \(R^{2}\) result for the \(R^{2}_{\rm train}=0.464\) and \(R^{2}_{\rm test}=0.388\). * By creating a subset with only \(70\) samples, that took \(1\) minute \(38\) seconds to be created and having the \(R^{2}=0.382\). * Now, executing our quantum algorithm for finding the best hyperparameters using the prepared dataset (\(70\) samples) took only \(4\) minutes and \(43\) seconds to find the best hyperparameters with \(R^{2}\) score \(=0.444\). ### 3 Hyperparameters In the case of _3 Hyperparameters_: _learning_rate, max_iteration, and loss_, we obtain the following outcomes: * Testing classically with the full database with \(480\) training samples took \(11\) minutes \(36\) seconds, with \(R^{2}\) result for the \(R^{2}_{\rm train}=0.464\) and \(R^{2}_{\rm test}=0.388\). * By creating a subset with only \(35\) samples, that took \(53\) seconds to be created and having the \(R^{2}=0.382\). * Now, executing our quantum algorithm for finding the best hyperparameters using the prepared dataset (\(35\) samples) took only \(1\) minute and \(56\) seconds to find the best hyperparameters with \(R^{2}\) score \(=0.444\). \begin{table} \begin{tabular}{l|l|l|l} \hline \hline **ML Model** & **Hyperparameter** & **Type** & **Search Space** \\ \hline \multirow{8}{*}{Random Forest Classifier} & \multirow{8}{*}{ \begin{tabular}{l} **n\_estimators** \\ \end{tabular} } & Discrete & \([5-250]\) \\ \cline{2-3} & & & \([1-50]\) \\ \cline{2-3} & & \(\min\_sample\_split\) & Discrete & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-10]\) \\ \cline{2-3} & & & \([1-100]\) \\ \cline{2-3} & & & \([1-10]\) \\ \ seconds to find the best hyperparameters with \(R^{2}\) score \(=0.4377\). ### 4 Hyperparameters In the case of _4 Hyperparameters_: _learning_rate, max_iteration, loss and max_bins_, we have the following outcomes: * Testing classically with the full database with 1980 training samples took 41 minutes 45 seconds, with Figure 4: This graph shows the steps followed to achieve our goal. First, we generate a set (\([n]\)) of randomly chosen hyperparameters and then train \(n\) models with these hyperparameters. From the latter process, we get \([n]\)_scores (accuracy, \(r^{2}\), MSE, etc.)_ and record the dataset. Next, we transform the categorical hyperparameter into linear variables according to figure (2). From this point, we transform this dataset into a Quantum Space for the native quantum algorithm. Having the dataset into the quantum domain, we now find the best \(U(\vec{x},\vec{\theta},\vec{\theta})\) according to to figure (3). This means having our generic continuous function given by equation (2). In this stage, we only need to find the minimum, according to equation (1). So, we train a Quantum-Coding System plus the best \(U(\vec{x},\beta,\vec{\theta})\) found to predict the best hyperparameters. Now, we get the best set of hyperparameters by using a quantum computer (or a quantum-inspired one); we go back to the classical model and re-train it with the best hyperparameters. Figure 5: We can observe the result of applying a quantum-based Fourier regression approach. We can observe how the model allows interpolating the data from the dataset better. Figure 6: We can see the space covered by the new function \(f(\vec{x})\) resulting from applying the gate \(U(\vec{x},\vec{\theta},\vec{\theta})\). In this case, the number of layers is 4. The axes on the graph represent hyperparameters from _HistGradBoost_ model. Here, the score is normalized on \(\pi\). Please refer to table (II) for more details. Figure 7: We can observe how our algorithm finds the local maximum of our multivariate function. In this case, the number of layers is 5. The axes on the graph represent hyperparameters from _HistGradBoost_ model. Please refer to table (II) for more details. \(R^{2}\) result for the \(R^{2}_{\text{train}}=0.464\) and \(R^{2}_{\text{test}}=0.388\). * By creating a subset with only 280 samples, that took 5 minutes 54 seconds to be created and having the \(R^{2}=0.388\). * Now, executing our quantum algorithm for finding the best hyperparameters using the prepared dataset (280 samples) took only 15 minutes and 2 seconds to find the best hyperparameters with \(R^{2}\) score \(=0.415\). ### Discussions In this Section, we discuss the obtained results while elaborating on which cases our proposed model works best. We compared the classical and hybrid proposed models from figure (8). Specifically, we compared the classically trained data model with the _HistGradBoost_ model using _Random Search_ and _Cross-Validation_ with our model. The ratio between the graphs is from saving time, with only a single layer in the quantum circuit. In the case of figure (8), we notice that we saved 56% of total time. The VQA took about 100 seconds, the search algorithm for the optimal hyperparameters took about 33 seconds, and the model retraining took about 50 seconds compared to 2747 seconds to get the same hyperparameters. Using Random search, the experiment was done with a 1500-element database over 3 columns. With the hyperparameters _param_learning_rate_\([0.01-1]\), _param_iter_\([1-1000]\), and _param_loss_ as categorical [squared_error, absolute_error] (for more details (II)). Figure (9) shows the experiment with the same model (_HisGradBoost_) but this time with _Grid Search_ and _Cross-Validation_. In this case, the VQA took about 425 seconds, the search algorithm for the optimal hyperparameters took about 36 seconds, and the model retraining took about 8 seconds compared to 2505 seconds to get the same parameters. Although the other times are negligible compared to the magnitudes we are dealing with, we have wanted to plot them on this graph ((9)). The experiment was done with a 1920-element database over 4 columns. With the hyperparameters _param_learning_rate_\([0.01-1]\), _param_iter_\([1-1000]\), _param_max_bins_\([31-255]\) and _param_loss_ as categorical [squared_error, absolute_error] (for more details (II)). The results of the experiments with the _RandomForest_ model are shown by the figure (10). The experiment compares the classically trained data model with the _RandomForest_ model using _Cross-Validation_. The time saving is 63%. For that, the VQA took about 121 seconds, the search algorithm for the optimal hyperparameters took about 68 seconds, and the model retraining took about 600 seconds compared to 2116 seconds to get the same parameters. The experiment was done \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline **Model** & \multicolumn{4}{c|}{**HistGradBoost with Random CV**} \\ \hline **Cases** & A & B & C & D & E \\ \hline **Classical HistGradBoost performance time (a)** & 2747.00 & 2747.00 & 2747.00 & 2747.00 \\ \hline **Total proposed model performance time (a)** & 146.86 & 206.09 & 313.46 & 374.91 & 555.11 \\ \hline **Time saving (a)** & 2598.14 & 2538.91 & 2431.54 & 2370.09 & 2189.89 \\ \hline **Time saving (\%)** & 94.65 & 92.49 & 89.50 & 86.35 & 79.79 \\ \hline **Dev Score** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **Dev Score (\%)** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **\# HP\({}_{a}\)** & 3.00 & 3.00 & 3.00 & 3.00 & 5.00 \\ \hline **\# Layers** & 1.00 & 2.00 & 3.00 & 4.00 & 5.00 \\ \hline **Lond Data(ms)** & 1.30 & 2.40 & 4.10 & 4.81 & 0.00 \\ \hline **VQA (\(\mathbf{x}\))** & 84.70 & 150.05 & 239.06 & 282.38 & 397.03 \\ \hline **Finding Quantum best HP\({}_{a}\) (a)** & 28.63 & 44.63 & 68.01 & 85.49 & 140.42 \\ \hline **Data mapping from Quantum to Classic Space (\(\mathbf{x}\))** & 30.35 & 41.76 & 51.51 & 70.66 & 73.78 \\ \hline **Landing Original dataset time (a)** & 1.44 & 1.07 & 1.10 & 1.07 & 1.09 \\ \hline **Model training (a)** & 32.08 & 10.34 & 5.29 & 5.97 & 16.56 \\ \hline **Proposed model Test score** & 0.3879 & 0.39 & 0.3879 & 0.3879 & 0.3879 \\ \hline **Original Train score** & 0.4578 & 0.4598 & 0.4578 & 0.4578 & 0.4598 \\ \hline **Original Test Score** & 0.3879 & 0.3879 & 0.3879 & 0.3879 & 0.3879 \\ \hline \end{tabular} \end{table} Table 3: This table shows the experiments for five cases (A, B, C, D, and E) with the input database trained with both the test and train for the _HistGradBoost_ model. The time required to generate said database classically and have the optimal hyperparameters applying _Random Search_ with _Cross-Validation_ is 2747 seconds. The configuration parameters of our hybrid model to obtain the data from the tables are the following: _lrVQA_ = 0.15, _maxEpochVQA_ = 70, _lrBH_ = 0.0005, _maxEpochBH_ = 1500, _loadOptBH_ = \(False\) and for the different quantum layers (_qLayer_\(=1,2,3,4\), and 5). Our algorithm already finds the hyperparameters for the defined target score, considerably reducing the experimentation time using minimum layers. In this case, from 94% for one layer to 79% for five layers. All the tests were done locally on _Mac Book Pro_ with _8-Core Intel_[74]. with an 88-element database with over 2 columns. With the hyperparameters _param_n_estimators_\([5-250]\) and _param_max_depth_\([1-15]\) (for more details (II)). The experimentation with _Ridge_ model is presented in Figure (11), which compares the classically trained data model with the _Ridge_ model using _Grid Search_ and _Cross-Validation_ with our model. The ratio between the graphs represents the savings that we achieve. Here we are saving 73% of the experimentation time. In this case, the VQA took about 178 seconds, the search algorithm for the optimal hyperparameters took about 53 seconds, and the model retraining took about 5 seconds compared to 2075 seconds to get the same parameters. The experiment was done with a 560-element database with over 3 columns. With the hyperparameters _param_alpha_\([0.0001-1]\), _param_max_iter_\([1000-500000]\) and _param_solver_ as categorical [svd, cholesky, lsqr, sparse_cq, sag ] (for more details (II)). We have realized that for databases classically generated for less than one minute, it is not worth using our algorithm since the setup time in the case of a single hyperparameter is 50 seconds, and for two hyperparameters is one minute and 20 seconds. Moreover, to verify the proper functioning of our algorithm and how it shares part of its philosophy with _BayesSearch_, we wanted to compare it with _BayesSearch_ with cross-validation. As quantum computing is today, the result is quite comparable. The comparison has been simulated on a classical computer. That is, as an example, having the limitations of RAM without the ability to take advantage of quantum parallelization. Our algorithm has been equated by 89% time to the classical_BayesSearch_. Still, we have also experienced cases where _BayesSearch_ has exceeded our algorithm by about 100 seconds as the highest observed \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline **Model** & \multicolumn{4}{c|}{**HistGradBoot with Grid CV**} \\ \hline **Cases** & A & B & C & D & E \\ \hline **Classical HistGradBoot performance time (a)** & 2505.00 & 2505.00 & 2505.00 & 2505.00 & 2505.00 \\ \hline **Total proposed model performance time (a)** & 368.30 & 774.53 & 944.76 & 1470.23 & 1757.00 \\ \hline **Time saving (a)** & 2136.69 & 1730.46 & 1560.24 & 1034.76 & 747.9930 \\ \hline **Time saving (\%)** & 85.29 & 69.08 & 62.28 & 41.31 & 29.86 \\ \hline **Dev Score** & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 \\ \hline **Dev Score (\%)** & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 \\ \hline **\# HP\({}_{a}\)** & 4.00 & 4.00 & 4.00 & 4.00 & 4.00 \\ \hline **\# Layers** & 1.00 & 2.00 & 3.00 & 4.00 & 5.00 \\ \hline **Load Data(\(a\))** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **VQA (\(a\))** & 308.31 & 700.37 & 855.79 & 1302.36 & 1585.81 \\ \hline **Finding Quantum best HP\({}_{a}\) (a)** & 45.68 & 65.80 & 81.75 & 146.56 & 160.60 \\ \hline **Data mapping from Quantum to Classic Space (a)** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **Landing Original dataset time (a)** & 1.60 & 1.11 & 1.00 & 1.60 & 1.23 \\ \hline **Model training (a)** & 12.72 & 7.25 & 6.21 & 19.72 & 9.27 \\ \hline **Proposed model Test score** & 0.3785 & 0.3785 & 0.3785 & 0.3785 & 0.3785 \\ \hline **Original Train score** & 0.4583 & 0.4583 & 0.4583 & 0.4583 & 0.4583 \\ \hline **Original Test Score** & 0.3879 & 0.3879 & 0.3879 & 0.3879 & 0.3879 \\ \hline \end{tabular} \end{table} Table 4: This table shows the experiments for five cases (A, B, C, D, and E) with the input database trained with both the test and train for the _HistGradBoost_ model. The time required to generate said database classically and have the optimal hyperparameters applying _Grid Search_ with _Cross-Validation_ is 2505 seconds. The configuration parameters of our hybrid model to obtain the data from the tables are the following: _lrVQA_\(=0.15\), _maxEpochVQA_\(=70\), _lrBH_\(=0.0005\), _maxEpochBH_\(=1500\), _loadOptBH_\(=False\) and for the different quantum layers (_qLayer_\(=1,2,3,4\), and \(5\)). Our algorithm already finds the hyperparameters for the defined target score, considerably reducing the experimentation time using minimum layers. In this case, from 85% for one layer to 29% for five layers. All the tests were done locally on _MacBookPro_ with _8-Core Intel_[74]. Figure 8: This graph compares the classical model and the hybrid proposed model—the classic model on the right of the image and the proposed model on the left. We are comparing the classically trained data model with the _HistGradBoost_ model using _Random Search_ with _Cross-Validation_. The ratio between the graphs results from saving time in this case, with only a single layer in the quantum circuit. Table (III) analyzes in detail considering the number of layers. In this case, we save 56% of time compared to the classical counterpart. A 1500-element database with 3 columns considering the following the _hyperparameters param_learning_rate_\([0.01-1]\), _param_iter_\([1-1000]\), and _param_loss_ as categorical [squared_error, absolute_error] (for more details (II)). value. In all cases, our algorithm has found the values of the same hyperparameters as _BayesSearch_. We only compared the algorithm with a single layer. It can be seen in figure (12) the operation at the time and process level of the proposed algorithm. Note how the quantum part goes very fast, and we consume the most time in the retraining of _Bayes Search_. Since the _Bayes Search_ with _Cross-Validation_ goes back to performing some steps already contemplated in our proposal. \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline **Model** & \multicolumn{4}{|c|}{**RandomForest with Grid CV**} \\ \hline **Cross** & A & B & C & D & E \\ \hline **Classical Random Forest performance time (a)** & 2116.00 & 2116.00 & 2116.00 & 2116.00 & 2116.00 \\ \hline **Total proposed model performance time (a)** & 660.22 & 1018.24 & 932.06 & 955.51 & 1021.88 \\ \hline **Time saving (a)** & 1446.77 & 1097.76 & 1183.93 & 1160.48 & 1094.12 \\ \hline **Time saving (5k)** & 68.37 & 51.88 & 55.95 & 54.84 & 51.71 \\ \hline **Dev Score** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **Dev Score (\%)** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **\# HPs** & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 \\ \hline **\# Layers** & 1.00 & 2.00 & 3.00 & 4.00 & 5.00 \\ \hline **Load Data(a)** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **VQA (a)** & 80.04 & 101.80 & 148.99 & 218.43 & 245.46 \\ \hline **Finding Quantum best HPs (a)** & 23.35 & 37.88 & 53.02 & 70.10 & 76.26 \\ \hline **Data mapping from Quantum to Classic Space (a)** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **Leading Original dataset time (a)** & 1.29 & 1.49 & 1.18 & 1.30 & 1.02 \\ \hline **Model training (a)** & 564.55 & 877.07 & 728.88 & 665.00 & 699.14 \\ \hline **Proposed model Test score** & 0.3478 & 3478 & 0.3478 & 0.3478 & 0.3478 \\ \hline **Original Train score** & 0.6273 & 0.6273 & 0.6273 & 0.6273 & 0.6273 \\ \hline **Original Test Score** & 0.3478 & 0.3478 & 0.3478 & 0.3478 & 0.3478 \\ \hline \end{tabular} \end{table} Table 5: This table shows the experiments for five cases (A, B, C, D, and E) with the input database trained with both the test and train for the _RandomForest_ model. The time required to generate said database classically and have the optimal hyperparameters applying _Grid Search_ with _Cross-Validation_ is 2116 seconds. The configuration parameters of our hybrid model to obtain the data from the tables are the following: _lrVQA_ = 0.15, _maxEpochVQA_ = 70, _lrBH_ = 0.0005, _maxEpochBH_ = 1500, _loadOptBH_ = \(False\) and for the different quantum layers (_qLayer_= \(1,2,3,4\), and \(5\)). Our algorithm already finds the hyperparameters for the defined target score, considerably reducing the experimentation time using minimum layers. In this case, from 68% for one layer to 51% for five layers. All the tests were done locally on _MacBookPro_ with _8-Core Intel_[74]. Figure 10: This graph compares the classical model and the hybrid proposed model—the classic model on the right of the image and the proposed model on the left. We are comparing the classically trained data model with the _RandomForest_ model using _Grid Search_ and _Cross-Validation_ with our model. The ratio between the graphs results from saving time in this case, with only a single layer in the quantum circuit. Table (4) analyzes in detail considering the number of layers. In this case, we save 81% of time compared to the classical counterpart. A 1920-element database over 4 columns considering the following hyperparameters _param_learning_rate_\([0.01-1]\), _param_iter_\([1-1000]\), _param_max_bins_\([31-255]\) and _param_loss_ as categorical [squared_error, absolute_error] (for more details (II)). Figure 9: This graph compares the classical model and our hybrid proposal—the classic model on the right of the image and the proposed model on the left. Specifically, we are comparing the classically trained data model with the _HisGradBoost_ model using _Grid Search_ and _Cross-Validation_ with the model. The ratio between the graphs results from saving time in this case, with only a single layer in the quantum circuit. Table (4) analyzes in detail considering the number of layers. In this case, we save 81% of time compared to the classical counterpart. A 1920-element database over 4 columns considering the following hyperparameters _param_learning_rate_\([0.01-1]\), _param_iter_\([1-1000]\), _param_max_bins_\([31-255]\) and _param_loss_ as categorical [squared_error, absolute_error] (for more details (II)). ## VII Conclusion _Hyperparameter tuning_ is a research area with great interest in this big-data era. In this article, we have studied using classical and quantum hybrid algorithms to offer a generic solution. We have investigated several scenarios and experiments to propose an efficient model for solving the hyperparameter optimization problem. We have designed an algorithm that fits the current status of quantum computing. Our algorithm and processes can be used in all _quantum-inspired_ machines solving real cases in society, waiting for a quantum computer or a system that allows us to have quality service to run it on a commercial quantum computer. Our algorithm has proven robust in all tested scenarios and has given outstanding results. For this reason, we firmly believe that, beyond the hardware limitations and beyond achieving an effi \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Model** & \multicolumn{4}{c|}{**Ridge with Grid Search CV**} \\ \hline **Cases** & A & B & C & D & E \\ \hline **Classical Ridge performance time (\(\mathbf{z}\))** & 2075.00 & 2078.00 & 2075.00 & 2075.00 & 2078.00 \\ \hline **Total proposed model time (\(\mathbf{z}\))** & 163.13 & 314.93 & 538.35 & 632.64 & 871.59 \\ \hline **Time saving (\(\mathbf{z}\))** & 1911.87 & 1760.07 & 1536.65 & 1442.36 & 1203.41 \\ \hline **Time saving (\(\mathbf{\%}\))** & 92.13 & 84.82 & 74.05 & 69.51 & 57.99 \\ \hline **Dav Score** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **Dav Score (\%)** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **\# HP\(\mathbf{z}\)** & 5.00 & 5.00 & 5.00 & 5.00 & 5.00 \\ \hline **\# Layers** & 1.00 & 2.00 & 3.00 & 4.00 & 5.00 \\ \hline **Load Data(\(\mathbf{z}\))** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **VQA (\(\mathbf{z}\))** & 118.77 & 243.11 & 403.12 & 490.68 & 078.17 \\ \hline **Finding Quantum best HP\(\mathbf{z}\) (\(\mathbf{z}\))** & 39.87 & 67.79 & 130.66 & 137.81 & 188.86 \\ \hline **Data mapping from Quantum to Classic Space (\(\mathbf{z}\))** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline **Loading Original dataset time (\(\mathbf{z}\))** & 1.15 & 0.98 & 1.10 & 1.07 & 1.13 \\ \hline **Model training (\(\mathbf{z}\))** & 3.34 & 3.04 & 3.46 & 3.07 & 3.44 \\ \hline **Proposed model Test score** & 0.3012 & 0.3012 & 0.3012 & 0.3012 & 0.3012 \\ \hline **Original Train score** & 0.3134 & 0.3134 & 0.3134 & 0.3134 & 0.3134 \\ \hline **Original Test Score** & 0.3012 & 0.3012 & 0.3012 & 0.3012 & 0.3012 \\ \hline \end{tabular} \end{table} Table 6: This table shows the experiments for five cases (A, B, C, D, and E) with the input database trained with both the test and train for the _Ridge_ model. The time required to generate said database classically and have the optimal hyperparameters applying _Grid Search_ with _Cross-Validation_ is 2116 seconds. The configuration parameters of our hybrid model to obtain the data from the tables are the following: _lrVQA_ = 0.15, _maxEpochVQA_ = 70, _lrBH_ = 0.0005, _maxEpochBH_ = 1500, _loadOptBH_ = _False_ and for the different quantum layers (_qLayer_= \(1,2,3,4,\) and \(5\)). Our algorithm already finds the hyperparameters for the defined target score, considerably reducing the experimentation time using minimum layers. In this case, from 92% for one layer to 57% for five layers. All the tests were done locally on _MacBookPro_ with _8-Core Intel_[74]. Figure 11: This graph compares the classical model and the hybrid proposed model—the classic model on the right of the image and the proposed model on the left. We are comparing the classically trained data model with the _Ridge_ model using _Grid Search_ and _Cross-Validation_ with the model. The ratio between the graphs results from saving time in this case, with only a single layer in the quantum circuit. Table (VI) analyzes in detail considering the number of layers. In this case, we can see that we save 73% of time compared to the classical counterpart. A 560-element database with 3 columns considering the following hyperparameters _param_alpha_ \([0.0001-1]\), _param_max_iter_\([1000-50000]\) and _param_solver_ as categorical [svd, cholesky, lsqr, sparse_cq, sag ] (for more details (II)). Figure 12: This graph presents the performance of the proposed algorithm compared to its classical counterpart using the _BayesSearch_ with _Cross-Validation_. In this case, the proposed algorithm represented 92.5% of the classical model as a reference which means a 7.5% time improvement. A 30-element database with 2 columns were used, considering the following hyperparameters _param_n_estimators_\([5-250]\) and _param_max_depth_\([1-15]\). cient _qRAM_, if there is a lot of classical data and few functional qubits, algorithms such as the one proposed in this work are the most suitable for the real solutions today. ## Acknowledgements The authors want to thank Guillermo Alonso de Linaje for the discussions and consideration during the experiments. Also, the authors wish to thank the _Vueling Airlines SA_ for allowing the use of their proprietary dataset to perform this work. **Compliance with Ethics Guidelines** Funding: This research received no external funding. Institutional review: This article does not contain any studies with human or animal subjects. Informed consent: Informed consent was obtained from all individual participants included in the study. Data availability: Data sharing is not applicable. No new data were created or analyzed in this study. Data sharing does not apply to this article.
2304.14370
Heisenberg Limit beyond Quantum Fisher Information
The Heisenberg limit provides a fundamental bound on the achievable estimation precision with a limited number of $N$ resources used (e.g., atoms, photons, etc.). Using entangled quantum states makes it possible to scale the precision with $N$ better than when resources would be used independently. Consequently, the optimal use of all resources involves accumulating them in a single execution of the experiment. Unfortunately, that implies that the most common theoretical tool used to analyze metrological protocols - quantum Fisher information (QFI) - does not allow for a reliable description of this problem, as it becomes operationally meaningful only with multiple repetitions of the experiment. In this thesis, using the formalism of Bayesian estimation and the minimax estimator, I derive asymptotically saturable bounds on the precision of the estimation for the case of noiseless unitary evolution. For the case where the number of resources $N$ is strictly constrained, I show that the final measurement uncertainty is $\pi$ times larger than would be implied by a naive use of QFI. I also analyze the case where a constraint is imposed only on the average amount of resources, the exact value of which may fluctuate (in which case QFI does not provide any universal bound for precision). In both cases, I study the asymptotic saturability and the rate of convergence of these bounds. In the following part, I analyze the problem of the Heisenberg limit when multiple parameters are measured simultaneously on the same physical system. In particular, I investigate the existence of a gain from measuring all parameters simultaneously compared to distributing the same amount of resources to measure them independently. I focus on two examples - the measurement of multiple phase shifts in a multi-arm interferometer and the measurement of three magnetic field components.
Wojciech Górecki
2023-04-27T17:43:45Z
http://arxiv.org/abs/2304.14370v1
# Dectoral Thesis ###### Abstract The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information Information is a quantum Fisher Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information is a quantum Fisher Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information Information. The quantum Fisher Information Information Information is a quantum Fisher Information Information Information Information Information. The quantum Fisher Information Information Information Information Information is a quantum Fisher Information Information Information Information Information. The quantum Fisher Information Information Information Information Information is a quantum Fisher Information Information Information Information Information Information. The quantum Fisher Information Information Information Information is a quantum Fisher Information Information Information Information Information Information Information. The quantum Fisher Information Information Information Information Information is a quantum Fisher Information Information Information Information Information. The quantum Fisher Information Information Information Information Information Information is a quantum Fisher Information Information Information Information Information Information Information. The quantum Fisher Information Information Information Information Information Information is a quantum Fisher Information Information Information Information Information Information. The quantum Fisher Information Information Information Information Information Information Information Information is a quantum Fisher Information Information Information Information Information Information Information Information. The quantum Fisher Information Information Information Information Information Information Information Information Information is a quantum Information Information Information Information Information Information Information Information Information Information Information Information Information Information. The quantum Fisher Information
2301.09299
Self-Supervised Image Representation Learning: Transcending Masking with Paired Image Overlay
Self-supervised learning has become a popular approach in recent years for its ability to learn meaningful representations without the need for data annotation. This paper proposes a novel image augmentation technique, overlaying images, which has not been widely applied in self-supervised learning. This method is designed to provide better guidance for the model to understand underlying information, resulting in more useful representations. The proposed method is evaluated using contrastive learning, a widely used self-supervised learning method that has shown solid performance in downstream tasks. The results demonstrate the effectiveness of the proposed augmentation technique in improving the performance of self-supervised models.
Yinheng Li, Han Ding, Shaofei Wang
2023-01-23T07:00:04Z
http://arxiv.org/abs/2301.09299v1
# Self-Supervised Image Representation Learning: Transcending Masking with Paired Image Overlay ###### Abstract Self-supervised learning has become a popular approach in recent years for its ability to learn meaningful representations without the need for data annotation. This paper proposes a novel image augmentation technique, overlaying images, which has not been widely applied in self-supervised learning. This method is designed to provide better guidance for the model to understand underlying information, resulting in more useful representations. The proposed method is evaluated using contrastive learning, a widely used self-supervised learning method that has shown solid performance in downstream tasks. The results demonstrate the effectiveness of the proposed augmentation technique in improving the performance of self-supervised models. Self-supervised Learning, Autoencoder, Data Augmentation, Computer Vision, Masked Modeling, ## 1 Introduction Self-supervised learning has seen significant success in recent years in both natural language processing and computer vision domains, particularly with the advent of attention mechanisms and large language models such as BERT and GPT. While transformer-based models are the focus in the NLP community for self-supervised learning, computer vision research has employed a variety of approaches. One trend is the use of contrastive learning with data augmentation, while another trend is to adapt the idea of masked training from NLP to computer vision. However, applying masked training to images poses challenges. Unlike text, images have high dimensions and spatial relationships, making it difficult to mask a single pixel and resulting in large attention matrices. As a result, researchers have divided images into patches and treated each patch as the smallest unit, as seen in works such as [7, 8]. In this paper, we propose a new method for image corruption, which is closer to the idea of "masking" and aims to better leverage masked training in computer vision tasks. Our method is to overlay one image onto another, a technique that has been implemented by [10] and shown to be effective. To the best of our knowledge, this technique has not been used in self-supervised learning. We demonstrate the effectiveness of this method in comparison to existing augmentation techniques through experiments on self-supervised models. ## 2 Related Work **Autoencoders** are a traditional self-supervised learning algorithm that trains to reconstruct the input from a learned low-dimension representation. Variants of autoencoders include denoising autoencoders [11], where noise is added to the input during training. Our proposed method can be viewed as a special type of denoising autoencoder, where we use image overlap to create a corrupted image. Recently, self-supervised learning using autoencoders for computer vision tasks has also achieved great success [8]. **Data augmentation** is a widely used technique in computer vision research, with numerous techniques such as flipping, cropping, and coloring ([12]). Image overlay (also known as image mixture) is also a type of augmentation method, but has been less explored compared to others. ## 3 Approach The proposed method is based on the idea of image overlay, which is a form of image corruption, similar to image masking. There are several reasons why image overlay is a suitable method for self-supervised representation learning in computer vision. First, image overlay is conceptually similar to image masking. When a patch of an image is masked, it can be viewed as overlaying an image mask onto the image, which is a special case of overlaying one image on top of another. Second, image overlay provides better guidance for self-supervised representation learning compared to standard masking. In masked image training, the model is asked to reconstruct the missing part of the image, which is a difficult task in the domain of vision due to the high dimension of an image and the uncertainty of the real world. For example, predicting what is behind a window is a challenging task even for humans, due to the infinite possibilities. On the other hand, distinguishing between two films overlapped with one another is a relatively easy task as long as one has certain knowledge of the world. Therefore, training a model to distinguish and recover the overlapped image demands the model to pick up knowledge about the object and the world without worrying about uncertainty. Third, the method of image overlay is also similar to the idea of constructive learning, where the model needs to distinguish between positive and negative pairs. In this method, the model is also trained to distinguish between two samples, except they are in the same image. Lastly, this method is straightforward to implement as it does not require constructing any complex transformations on the image or sampling positive and negative pairs. The only requirement is to change the transparency of an image and overlay it onto another image. ### Pretraining A detailed approach is shown in Figure2. Denote input image \(x_{i}\), current batch \(X_{b}\), encoder \(f_{e}(x)\), decoder \(f_{d}(x)\) **Step1**: random sample image \(x_{j}\) from \(X_{b}\) **Step2**: generate an augmented image \(\hat{x_{i}}=\alpha x_{j}+(1-\alpha)x_{i}\). \(\alpha\) is a hyperparameter which controls the transparency of the image overlap. It should be less than 0.5 as the input image \(x_{i}\) should dominates the information in the picture. **Step3**: train the model to minimize the loss \(L(x_{i})=MSE(f_{d}(f_{e}(\hat{x_{i}})),x_{i})\). MSE is short for mean squared error. Once the model is trained, we use \(f_{e}\) as the encoder to extract meaningful representation from the image. The performance will be evaluated in downstream tasks such as image classification. Figure 1: proposed transformation ### Autoencoder We use a Resnet50 architecture ([13]) as the backbone for the autoencoder, following the implementation in [14]. The encoder is composed of Resnet50 and the decoder is a reversed version of Resnet50. ### Downstream Evaluation To evaluate the performance of the pretrained model, the encoder is fine-tuned on a downstream task. In our experiment, we use image classification as the downstream task. A linear head is added to the encoder for classification purposes. During downstream evaluation and testing, no additional augmentation is applied to the input. ## 4 Object256 Experiment We conduct our experiments on the Object256 dataset ([15]), which consists of 29,780 images covering 256 objects. The dataset is split into 23,824 images for the training set and 5,956 images for the testing set. In the pretraining phase, we train the autoencoder with image overlay \(f^{\alpha}\), where \(\alpha\in{0,0.1,0.2,0.3,0.4}\) and \(f^{\alpha}=f_{d}(f_{e}(\hat{X}))\). Additional configuration details can be found in Table a. Our data loader's input transformation includes (1) image overlay with \(\alpha\) (\(\alpha=0\) means no overlay) (2) random resized crop (to ensure the input image is of size \(244\times 244\)). To compare our augmentation method with other commonly used methods, we also trained two sets of models with random mask and AutoAugment (auto-policy) transformation: \(f^{randommask}\) and \(f^{autopolicy}\). The random mask transformation randomly sets \(m\%\) (where \(m\in\{1,2,3,4\}\)) of pixels in an image to 0 (blank), which is equivalent to adding white noise to the input, as shown in Figure 3. AutoAugment ([16]) is a set of augmentations that were optimized on the Imagenet dataset. Figure 4 shows the reconstruction results on 64 random samples from the testing set. All \(\alpha\) configurations show very similar reconstruction results, indicating that \(f^{\alpha}\) is sufficiently trained. We then use the pretrained encoder \(f^{\alpha}_{e}\) and fine-tune it on the downstream image classification task using the obj256 training set for fine-tuning and the test set for evaluation. In the fine-tuning phase, each encoder is trained for 20 epochs without any additional transformation. Configuration details can be found in Table b. The best accuracy on the testing set over 20 epochs is reported. We also train a Resnet model without pretraining for 20 epochs as a baseline for comparison. The first experiment is to compare the model's performance on downstream classification task with different \(\alpha\)s. According to table 1, the accuracy across different \(\alpha\) is around 37%. This accuracy looks very low given a resnet50 Figure 2: training pipeline model can easily achieve 80% on imagenet with one thousand class. However, we looked up other people's experiment and confirmed that this is a reasonable result given our architecture. More results can be found from this website [(17)]. According to their experiment, the top performer without transferred learning is only 39.06%. The reason that all models have a low accuracy on this dataset is that this is a small dataset with a large number of classes. The best model in table3 under finetuning is a pre-trained encoder with \(\alpha=0.3\). We also did linear probing where we freeze all layers except the last layer for classification and trained for 5 epochs. But we found all models have very similar performance during linear probing. Lastly, we compared the performance of using image overlap, using random masking, supervised learning without pretraining and auto-policy augmented pretraining. Results are shown in table4. We found the model with image overlap augmentation still performs the best among all the other models. ## 5 Discussion and Conclusion In this paper, we proposed a new method of image "masking" using image overlay for self-supervised learning in computer vision. We showed that this method is simple and easy to implement, and it outperforms other commonly used self-supervised methods on the Object256 dataset. However, it is important to note that the Object256 dataset is relatively small and the classification accuracy is generally low across all models. Therefore, more extensive Figure 4: This is the reconstruction result using \(f^{\alpha=0.2}\). We only show one result because results from different \(\alpha\)’s are visually similar to this picture. Figure 3: an example of random mask transformation experimification on larger datasets such as Imagenet is needed to fully evaluate the effectiveness of our proposed method. Additionally, we only used a Resnet architecture in our experiments due to computational constraints. Therefore, another direction for future work is to test our method using transformer-based neural networks. Overall, our proposed method of image overlay for self-supervised learning in computer vision provides a promising approach to learn meaningful representations without the need for data annotation. It is simple to implement, and we hope that future research will further explore its potential.
2307.02042
Huygens'Principle Reveals Dispersion in Inhomogeneous Media
Dispersion is an important factor of optical materials. Due to the effect of techniques and equipment in the manufacturing process of optical materials, the inhomogeneity of the material may be caused. In this paper, microsphere optical media are used to replace the inhomogeneous zones, and Huygens'principle is used to study the dispersion caused by the material inhomogeneity. First, we study the effect of a single inhomogeneous zone, and then the effect of a thin medium with a large number of inhomogeneous zones. It is deduced that the dispersion law of a macro-optical medium is also consistent with Cauchy formula. Finally, it is pointed out that Huygens'principle is suitable for studying the interaction between light and particles.
Li Mingcong, Zhao Zhenming
2023-07-05T05:56:35Z
http://arxiv.org/abs/2307.02042v1
# Huygens' Principle Reveals Dispersion in Inhomogeneous Media ###### Abstract Dispersion is an important factor of optical materials. Due to the effect of techniques and equipment in the manufacturing process of optical materials, the inhomogeneity of the material may be caused. In this paper, microsphere optical media are used to replace the inhomogeneous zones, and Huygens' principle is used to study the dispersion caused by the material inhomogeneity. First, we study the effect of a single inhomogeneous zone, and then the effect of a thin medium with a large number of inhomogeneous zones. It is deduced that the dispersion law of a macro-optical medium is also consistent with Cauchy formula. Finally, it is pointed out that Huygens' principle is suitable for studying the interaction between light and particles. Key words: Dispersion, Cauchy dispersion formula, Light-Matter interaction, Huygens' principle Introduction Dispersion is a common optical phenomenon. As early as 1666, Newton studied dispersion using prisms[1] In 1837, Cauchy gave his dispersion formula[2]. In 1872, Sellmeier developed Cauchy dispersion formula and put forward the Sellmeier equation, which is also an empirical formula[3]. Dispersion belongs to light-matter interaction. According to current theory, the dispersive properties of an optical medium depend on the interaction between the incident light and the electrons in the medium, which is a typical interaction between light and microscopic particles. From Huygens' principle, where there are waves, there are wavefronts and wavelets[4]. With the understanding about the essence of Huygens' principle[5], it was found that Huygens' principle is feasible for studying the interaction between light and microscopic particles, and has been applied to the research of Rayleigh scattering[6], atomic stimulated radiation[7] and photon absorption[8]. In this work, we use Huygens' principle to analyze the microscopic interaction between light and a medium and the propagation of light in inhomogeneous media. It is found that the inhomogeneity of optical media can lead to dispersion. ## 2 The influence of a single inhomogeneous zone on light propagation Optical uniformity is an important index for optical materials. In common optical media, there are local inhomogeneity in density or composition, which leads to fluctuations in refractive index of materials. It is important to study the effect of refractive index fluctuations on light propagation. We use a microsphere with refractive index n and radius R to describe the local refractive index fluctuations in a single inhomogeneous zone. In Fig.1, the wavefront of the plane wave propagating along the \(z\) axis is changed by the microsphere, and the deformation degree of the wavefront is related to the wavelength. When the light travels along the \(c_{1}c_{2}\) path, the influence of the microsphere on the optical path and phase is, \[\delta\left(r,\varphi\right)=k\left(n-1\right)\cdot 2R\cos\alpha \tag{1}\] where \(k\) is the incident light wave number. The wavelength of the incident light is \(\lambda\) and the amplitude is \(A\) Figure 1: Influence of a microsphere on the wavefront so complex amplitude distribution in the plane is given by \[U\left(r,\varphi,0\right)=At\left(r,\varphi\right) \tag{2}\] where \(t(r,\varphi)\) is the transmittance function, which is \[t(r,\delta)=\left\{\begin{matrix}1&r>R\\ e^{i\delta}&r\leq R\end{matrix}\right. \tag{3}\] In Fig.2, according to the parameters of the incident light and the microsphere, the complex amplitude of the light at \(z=0\) can be obtained. Using Huygens' principle, the amplitude of the light at point Q is \[U_{Q}=\frac{1}{i\lambda}\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}U(r, \varphi,0)\frac{e^{ik\rho}}{\rho}K(\theta)r\mathrm{d}r\mathrm{d}\varphi=\frac{ A}{i\lambda}\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}t(r,\varphi)\frac{e^{ik \rho}}{\rho}K(\theta)r\mathrm{d}r\mathrm{d}\varphi \tag{4}\] where, \(r\mathrm{d}r\mathrm{d}\varphi\) is the area element, \(\rho\) is the distance between the area element and point \(Q\), and \(K(\theta)\) is the inclination factor. Expanding the transmittance function: \[t(r,\varphi)=\left\{e^{i\rho}=1+i\delta-\frac{\delta^{2}}{2}-\frac{i\delta^{ 3}}{6}+\frac{\delta^{4}}{24}+\frac{i\delta^{5}}{120}+\varepsilon(\delta) \right.\quad r\leq R \tag{5}\] Substituting Eq.(5) into Eq.(4) and retaining the first six terms gives: \[U_{Q}\approx Ae^{ikz}+\frac{A}{i\lambda}\int\limits_{0}^{R}\int\limits_{0}^{2 \pi}[i\delta-\frac{\delta^{2}}{2}-\frac{i\delta^{3}}{6}+\frac{\delta^{4}}{24} +\frac{i\delta^{5}}{120}]\frac{e^{ik\rho}}{\rho}K(\theta)r\mathrm{d}r\mathrm{ d}\varphi \tag{6}\] The range of the wave front affected by the particle is very small. Therefore \(K(\theta)e^{ik\rho}/\rho\) can be regarded as a constant, and Eq.6 can be written as: \[\begin{split} U_{Q}&=Ae^{ikz}+\frac{A}{i\lambda}K( \theta)\frac{e^{ik\rho}}{\rho}\int\limits_{0}^{R}\int\limits_{0}^{2\pi}[i \delta-\frac{\delta^{2}}{2}-\frac{i\delta^{3}}{6}+\frac{\delta^{4}}{24}+ \frac{i\delta^{5}}{120}]r\mathrm{d}r\mathrm{d}\varphi\\ &=U_{0}+\frac{Ake^{ik\rho}}{i\rho}K(\theta)\int\limits_{0}^{R}[i \delta-\frac{\delta^{2}}{2}-\frac{i\delta^{3}}{6}+\frac{\delta^{4}}{24}+ \frac{i\delta^{5}}{120}]r\mathrm{d}r\end{split} \tag{7}\] In Eq.(7), the first term \(Ae^{ikz}\) is the complex amplitude of the incident light at point \(Q\), and the following five terms are the complex amplitudes of lights diffracted by the microsphere. We denote these five terms as \(U_{1}\;U_{5}\) in turn. \[U_{1}=\frac{Ake^{ik\rho}}{i\rho}K(\theta)\int\limits_{0}^{R}[i\delta]r\mathrm{d}r \tag{8}\] \[U_{2}=\frac{Ake^{ik\rho}}{i\rho}K(\theta)\int\limits_{0}^{R}[-\frac{\delta^{2} }{2}]r\mathrm{d}r \tag{9}\] \[U_{3}=\frac{Ake^{ik\rho}}{i\rho}K(\theta)\int\limits_{0}^{R}[-\frac{i\delta^{3 }}{6}]r\mathrm{d}r \tag{10}\] \[U_{4}=\frac{Ake^{ik\rho}}{i\rho}K(\theta)\int\limits_{0}^{R}[\frac{\delta^{4} }{24}]r\mathrm{d}r \tag{11}\] \[U_{5}=\frac{Ake^{ik\rho}}{i\rho}K(\theta)\int\limits_{0}^{R}[\frac{i\delta^{5 }}{120}]r\mathrm{d}r \tag{12}\] Using the phase shift expression given in Eq.(1) and according to \(r=R\sin\alpha\), \(\mathrm{d}r=R\cos\alpha\mathrm{d}\alpha\), and then integrating we can get \[U_{1}=\frac{2(n-1)R^{3}k^{2}}{3}\frac{Ake^{ik\rho}}{i\rho} \tag{13}\] \[U_{2}=\frac{i(n-1)^{2}R^{4}k^{3}}{2}\frac{Ake^{ik\rho}}{i\rho} \tag{14}\] \[U_{3}=-\frac{4(n-1)^{3}R^{5}k^{4}}{15}\frac{Ake^{ik\rho}}{i\rho} \tag{15}\] \[U_{4}=-\frac{i(n-1)^{4}R^{6}k^{5}}{9}\frac{Ake^{ik\rho}}{i\rho} \tag{16}\] \[U_{5}=\frac{4(n-1)^{5}R^{7}k^{6}}{105}\frac{Ake^{ik\rho}}{i\rho} \tag{17}\] The phases of \(U_{1}\), \(U_{3}\), \(U_{5}\) and \(U_{0}\) are the same. \(i=e^{i\pi/2}\) in \(U_{2}\) indicates that the phase of \(U_{2}\) is \(\pi/2\) behind that of \(U_{0}\), and the \(-i=e^{-i\pi/2}\) in \(U_{4}\) shows that its phase is \(\pi/2\) ahead of that of \(U_{0}\). ## 3 Property of inhomogeneous thin medium Usually, there are a lot of non-uniform refractive index zones in an optical medium. They are very tiny in dimension, but large in number, and evenly distributed in a fine optical medium. To study the dispersion of such a medium, we start with a layer of the medium. In Fig.3, the thickness of the monolayer medium is \(l\), and the incident light propagates along the \(Z\)-axis. The light is diffracted by the microsphere located at point \(P\). According to equations (13) - (17), the amplitude of the diffraction light at point \(O_{l}(0,0,l)\) is: \[\begin{split} U_{O_{l}}=&\frac{AK(\theta)e^{ik_{PP} }}{i\rho_{{}_{P}}}[\frac{2k^{2}(n-1)R^{3}}{3}+\frac{i3k^{3}(n-1)^{2}R^{4}}{2} \\ &-\frac{4k^{4}(n-1)^{3}R^{5}}{15}-\frac{ik^{5}(n-1)^{4}R^{6}}{9}+ \frac{4k^{6}(n-1)^{5}R^{7}}{105}]\end{split} \tag{18}\] where \(\rho_{{}_{P}}\) represents the distance of \(\overline{PO_{l}}\). When a large number of microspheres exist in the medium, the incident light will be diffracted by each microsphere. Then the total amplitude at point \(O_{l}\) is \[\begin{split} U_{O_{l}}=&\sum_{\rho_{P}}\frac{AK( \theta)e^{ik\rho_{P}}}{i\rho_{{}_{P}}}[\frac{2k^{2}(n-1)R^{3}}{3}+\frac{i3k^{3 }(n-1)^{2}R^{4}}{2}\\ &-\frac{4k^{4}(n-1)^{3}R^{5}}{15}-\frac{ik^{5}(n-1)^{4}R^{6}}{9} +\frac{4k^{6}(n-1)^{5}R^{7}}{105}]\end{split} \tag{19}\] In Eq. (19), the particles are thought to be concentrated on the \(xy\) plane for simplicity. Point \(P(r_{{}_{P}},\varphi_{{}_{P}},z_{{}_{P}})\) is equivalent to point \(P^{\prime}(r_{{}_{P}},\varphi_{{}_{P}},0)\), then \[\begin{split} U_{O_{l}}=&\sum_{\rho_{P}}\frac{AK( \theta)e^{ik\rho_{{}_{P^{\prime}}}}}{i\rho_{{}_{P^{\prime}}}}[\frac{2k^{2}(n- 1)R^{3}}{3}+\frac{i3k^{3}(n-1)^{2}R^{4}}{2}\\ &-\frac{4k^{4}(n-1)^{3}R^{5}}{15}-\frac{ik^{5}(n-1)^{4}R^{6}}{9} +\frac{4k^{6}(n-1)^{5}R^{7}}{105}]\end{split} \tag{20}\] where, \(\rho_{{}_{P^{\prime}}}\) represents the length of \(P^{\prime}O_{l}\). Let the density of the microspheres be \(N\). Then the number of the microspheres on per unit area of the \(xy\) plane is \(Nl\), and the sum in Eq. (20) can be transformed into an Figure 3: Microsphere diffraction and equivalent substitution in a thin monolayer medium integral: \[\begin{split} U_{O_{l}}=&[\int\limits_{0}^{\infty} \int\limits_{0}^{2\pi}\frac{NIAK(\theta)e^{ik\rho_{r^{\prime}}}}{i\rho_{r^{ \prime}}}r\mathrm{d}r\mathrm{d}\varphi][\frac{2k^{2}(n-1)R^{3}}{3}+\frac{i3k^{3 }(n-1)^{2}R^{4}}{2}\\ &-\frac{4k^{4}(n-1)^{3}R^{5}}{15}-\frac{ik^{5}(n-1)^{4}R^{6}}{9} +\frac{4k^{6}(n-1)^{5}R^{7}}{105}]\end{split} \tag{21}\] According to Fresnel wave zone method[9], we can have \(\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}\frac{AK(\theta)e^{ik\rho_{r^{ \prime}}}}{i\rho_{r^{\prime}}}r\mathrm{d}r\mathrm{d}\varphi=i\lambda Ae^{ikl}\). Then Eq.(21) can be rewritten as \[\begin{split} U_{O_{l}}=&\frac{i4\pi k(n-1)R^{3} ANl}{3}-\frac{6\pi k^{2}(n-1)^{2}R^{4}ANl}{2}-\frac{i8\pi k^{3}(n-1)^{3}R^{5}ANl}{15} \\ &+\frac{2\pi k^{4}(n-1)^{4}R^{6}ANl}{9}+\frac{i8\pi k^{5}(n-1)^{ 5}R^{7}ANl}{105}\end{split} \tag{22}\] Let \[a=\frac{4\pi k(n-1)R^{3}AN}{3} \tag{23}\] \[b=\frac{6\pi k^{2}(n-1)^{2}R^{4}AN}{2} \tag{24}\] \[c=\frac{8\pi k^{3}(n-1)^{3}R^{5}AN}{15} \tag{25}\] \[d=\frac{2\pi k^{4}(n-1)^{4}R^{6}AN}{9} \tag{26}\] \[e=\frac{8\pi k^{5}(n-1)^{5}R^{7}AN}{105} \tag{27}\] Substitute equations (23)-(27) into (22), we have \[u_{O_{l}}=Ae^{ikl}(ial-bl-icl+dl+iel) \tag{28}\] Adding the amplitude \(Ae^{ikl}\) of the incident light at point \(O_{l}\) and Eq. (28), the total amplitude at point \(O_{l}\) can be obtained as \[U_{O_{l}}=Ae^{ikl}+u_{O_{l}}=Ae^{ikl}(1+ial-bl-icl+dl+iel) \tag{29}\] The optical medium in Fig.3 is very thin, that is, \(l\) in Equation (29) is very small, then: \[1+ial-bl-icl+dl+iel=e^{ial-bl-icl+dl+iel}=e^{i(a-c+e)l-(b-d)l} \tag{30}\] Substitute (30) into (29): \[U_{O_{l}}=Ae^{ikl}e^{i(a-c+e)l-(b-d)l}=Ae^{-(b-d)l}e^{i(k+a-c+e)l} \tag{31}\] The influence of a layer of medium with a thickness of \(l\) on an incident light can be expressed as the transmittance \(T\): \[U_{O_{l}}=Ae^{-(b-d)l}e^{i(k+a-c+e)l}=AT \tag{32}\] where \[T=e^{-(b-d)l}e^{i(k+a-c+e)l} \tag{33}\] ## 4 Thick medium dispersion properties A macro medium with a length of \(L\), as shown in Fig.4, can be equivalent to \(m\) layers of thin medium with a thickness of \(l\). Then we have \[U_{L} = U_{0}T^{m}=U_{O}[e^{-(b-d)l}e^{i(k+a-c+e)l}]^{m}=U_{0}e^{-(b-d)ml}e^{ i(k+a-c+e)ml}\] \[= U_{0}e^{-(b-d)L}e^{i(k+a-c+e)L}=U_{0}e^{ik^{\prime}L}\] where \[k^{\prime}=i(b-d)+(k+a-c+e) \tag{35}\] It can be seen from Eq. (34) that when light propagates in a non-uniform medium, the real part \(k+a-c+e\) of the wave number \(k^{\prime}\) determines the wavelength \(\lambda=2\pi/(k+a-c+e)\) of the light, and the imaginary part \(i(b-d)\) determines the attenuation of the light. Assume that the optical wavelength in a uniform medium is \(\lambda\), and its frequency is \(\nu=v_{0}/\lambda\), then the propagation velocity of light in a dispersive medium can be obtained from the real part of wave number as: \[v=\nu\lambda^{\prime}=\frac{v_{0}}{\lambda}\frac{2\pi}{k+a-c+e}=v_{0}\frac{k} {k+a-c+e} \tag{36}\] According to Equation (36), the refractive index of the medium is: \[n(\lambda)=\frac{v_{0}}{v}=\frac{k+a-c+e}{k}=1+\frac{a}{k}-\frac{c}{k}+\frac{e }{k} \tag{37}\] Substitute (23), (25), and (27) into (37), it can be obtained \[n(\lambda) = 1+\frac{4\pi(n-1)R^{3}AN}{3}-\frac{8\pi k^{2}(n-1)^{3}R^{5}AN}{1 5}+\frac{8\pi k^{4}(n-1)^{5}R^{7}AN}{105} \tag{38}\] \[= 1+\frac{4\pi(n-1)R^{3}AN}{3}-\frac{1}{\lambda^{2}}\frac{32\pi^{3 }(n-1)^{3}R^{5}AN}{15}+\frac{1}{\lambda^{4}}\frac{128\pi k^{4}(n-1)^{5}R^{7}AN }{105}\] \[= n_{0}-\frac{a^{\prime}}{\lambda^{2}}+\frac{b^{\prime}}{\lambda^ {4}}\] where \[n_{0}=1+\frac{4\pi(n-1)R^{3}AN}{3}\] \[a^{\prime}=\frac{32\pi^{3}(n-1)^{3}R^{5}AN}{15} \tag{39}\] \[b^{\prime}=\frac{128\pi k^{4}(n-1)^{5}R^{7}AN}{105}\] It is obvious that equation (38) is the dispersion law for inhomogeneous media, which is consistent with the Cauchy dispersion equation. Figure 4: Replace medium of length \(L\) by \(m\) layers of thin medium Conclusion Using Huygens wavelet as the fundamental element of light field, the interaction between light and particles can be described correctly, and the dispersion caused by medium inhomogeneity can be found. The method using Huygens' principle to study dispersion can also be extended to the atomic scale, as well as being used to study the relationship between optical materials and dispersion.
2310.19913
Stochastic resetting with refractory periods: pathway formulation and exact results
We look into the problem of stochastic resetting with refractory periods. The model dynamics comprises diffusive and motionless phases. The diffusive phase ends at random time instants, at which the system is reset to a given position -- where the system remains at rest for a random time interval, termed the refractory period. A pathway formulation is introduced to derive exact analytical results for the relevant observables in a broad framework, with the resetting time and the refractory period following arbitrary distributions. For the paradigmatic case of Poissonian distributions of the resetting and refractory times, in general with different characteristic rates, closed-form expressions are obtained that successfully describe the relaxation to the steady state. Finally, we focus on the single-target search problem, in which the survival probability and the mean first passage time to the target can be exactly computed. Therein, we also discuss optimal strategies, which show a non-trivial dependence on the refractory period.
Gregorio García-Valladares, Deepak Gupta, Antonio Prados, Carlos A. Plata
2023-10-30T18:25:45Z
http://arxiv.org/abs/2310.19913v2
# Stochastic resetting with refractory periods: pathway formulation and exact results ###### Abstract We look into the problem of stochastic resetting with refractory periods. The model dynamics comprises diffusive and motionless phases. The diffusive phase ends at random time instants, at which the system is reset to a given position--where the system remains at rest for a random time interval, termed the refractory period. A pathway formulation is introduced to derive exact analytical results for the relevant observables in a broad framework, with the resetting time and the refractory period following arbitrary distributions. For the paradigmatic case of Poissonian distributions of the resetting and refractory times, in general with different characteristic rates, results are obtained in closed-form. Finally, we focus on the single-target search problem, in which the survival probability and the mean first passage time to the target can be exactly computed. Therein, we also discuss optimal strategies, which show a non-trivial dependence on the refractory period. * October 2023 _Keywords_: stochastic resetting, first passage time, optimal search process ## 1 Introduction Stochastic resetting [1, 2, 3] has become a very prolific topic within the field of non-equilibrium statistical mechanics. Stochastic resetting or restart can be thought of as one of the most elementary examples of an intermittent search strategy [4, 5, 6, 7], simple enough to analytically address the study of many physical quantities of interest. On the one hand, it has been successfully used in many different applications, ranging from economics [8, 9, 10, 11] to biochemical reactions [12, 13, 14, 15, 16] or ecology [17, 18, 19, 20], mostly motivated by the beneficial effect of restart for lowering the first passage time [1, 2, 21, 22, 23, 24, 25, 26]. On the other hand, it constitutes an excellent test bench for performing non-equilibrium research, providing comprehensive models to study non-equilibrium steady states (NESS) [27, 28, 29, 30, 31, 32, 33, 34], stochastic thermodynamics and fluctuation theorems [35, 36, 37, 38, 39, 14], large deviations [40, 41, 42, 43, 44, 45], or quantum restart [46, 47, 48, 49, 50, 51], to name just a few. Originally, stochastic resetting was introduced as instantaneous events that restart a given dynamics [1, 2]. Nevertheless, this instantaneous cannot represent a real physical situation, since the resets are cost-free--and any actual, physical, implementation thereof must involve some cost. This has led to investigate more refined and realistic models, where dynamical phases of the natural dynamics alternate with return phases [34, 39, 52, 53, 54, 55, 56]. Therein, an explicit mechanism, either deterministic or stochastic, is switched on to drive the system towards its reset state. A simple--possibly the simplest--way for taking into account the cost of resets is adding a refractory period after the "classical" instantaneous reset [57, 58]. This refractory or residence phase can be understood as a random recovery time payed by the system after performing the reset. The study of stochastic resetting with refractory periods has been shown to be useful in the context of enzymatic reactions following the Michaelis-Menten scheme [12, 13, 14, 15]. Therein, an enzyme binds to a substrate in a reversible binding-unbinding reaction, which, in a second step from the bound state, release a certain product. Here, the unbinding step may facilitate the production of products, i.e. interruption of a task may improve its accomplishment--which is the essence of optimal restart strategies. This work focuses on the detailed analysis of stochastic resetting with refractory periods. Specifically, we provide a pathway formulation based on the statistics of any possible reset history of the system. Such a formulation is related to renewal theory [31, 59, 60, 61], being inspired by similar techniques in different resetting setups [39, 59]. We prove the validity of our pathway approach in a very broad framework, which allows us to obtain general results for the case of stochastic resetting with refractory periods.1 Moreover, exact results for the case of Poissonian resets with Poissonian refractory periods are derived. For that relevant situation, the evolution of the probability density distribution (PDF) of a resetting Brownian particles in an infinite domain with refractory periods is explicitly worked out. Also, we obtain the mean first passage time (MFPT) as a function of the rates governing the exponential distributions for both dynamical phases. Interestingly, the minimization of the MFPT that we carry out reveals that the optimal restart rate depends on the typical duration of the refractory periods after the interruption. Footnote 1: Our general framework reproduces the specific results derived for some particular cases already considered in the literature [57, 58]. The rest of the article is organised as follows. The fundamental ingredients of the model are described in section 2. Section 3 is devoted to the detailed analysis of the PDF of the system through our pathway formulation. We explicitly obtain the whole evolution of the system, which reaches a NESS in the long-time limit. Section 4 deals with the MFPT. In addition to reobtaining a general expression for the MFPT wih refractory periods within our framework, an explicit formula in the case of both Poissonian resets and refractory periods is derived. Additionally, the optimal resetting rate is obtained as a function of the rate governing the duration of the refractory periods. The conclusions of our work are summarised in section 5. Finally, extensions of our results and some technicalities are discussed in the appendices. ## 2 Stochastic model We consider a quite general stochastic resetting process. Let the system be represented by a particle that, in absence of resetting, stochastically propagates following a distribution \(p^{\rm(f)}\)--governed by a Fokker-Planck equation, which we call "natural" or "propagation" dynamics. On top of this natural dynamics, random resets to a certain position \(x_{r}\) occur. Time events at which the particle instantaneously goes back to \(x_{r}\) are named resetting events, and denoted by \(t_{i}\), where the subscript \(i=1,2,\ldots\), stands for the order of occurrence. The probability that a resetting event takes place in the time interval \((t,t+\mathrm{d}t)\), is \(\mathrm{d}t\,f(t)\), so the integral \[F(t)=\int_{t}^{\infty}\mathrm{d}t^{\prime}\,f(t^{\prime}), \tag{1}\] is the probability that no resetting events have occurred up to time \(t\). In other words, \(F(t)\) is the probability of having an uninterrupted propagation phase lasting \(t\) at least. In the simplest resetting process, the particle is instantaneously reset to \(x_{r}\) and carries on its natural dynamics--described by \(p^{\rm(f)}\)--right after. Instantaneous resets are difficult to motivate within the context of a realistic dynamics, since they involve an infinite energetic payment which is followed by no recuperation phase. With this problem in mind, we thoroughly analyse herein the effect of _refractory periods_--random resting times after the reset [57, 58]. Specifically, the particle is assumed to be at rest at \(x_{r}\) after the \(i\)-th resetting event up to time \(\tau_{i}\), for an independent random time \(\sigma_{i}=\tau_{i}-t_{i}\). The refractory period duration \(\sigma\) is characterised by the PDF \(w(\sigma)\), and the integral \[W(\sigma)=\int_{\sigma}^{\infty}\mathrm{d}\sigma^{\prime}\,w(\sigma^{\prime}), \tag{2}\] is the probability of having a refractory period longer than \(\sigma\). In other words, \(W(\sigma)\) is the probability of having a minimum refractory period equal to \(\sigma\). An illustrative portrayal of this resetting dynamics for a one-dimensional model is shown in figure 1, where blue and red stand for the propagation and refractory phases, respectively. Taking the renewal structure of the resetting mechanism into account, the PDF \(p(x,t|x_{0})\) of finding the particle in \(x\), starting from \(x_{0}\), after a time evolution of duration \(t\) can be built as \[p(x,t|x_{0}) = F(t)p^{\rm(f)}(x,t|x_{0}) \tag{3}\] \[+\int_{0}^{t}\mathrm{d}t_{1}\,f(t_{1})W(t-t_{1})\delta(x-x_{r})\] \[+\int_{0}^{t}\mathrm{d}t_{1}\,f(t_{1})\int_{t_{1}}^{t}\mathrm{d} \tau_{1}\,w(\tau_{1}-t_{1})\,p(x,t-\tau_{1}|x_{r}).\] where \(p^{(\rm f)}(x,t|x_{0})\) is the free propagator of the natural dynamics in absence of resetting and \(\delta(x)\) is the Dirac delta distribution. The first term on the right hand side in equation (3) is the contribution from trajectories where there has been no resetting up to time \(t\), thus weighted by the probability \(F(t)\). The second term stems from paths for which there has been a resetting event in the interval \((0,t)\) and the subsequent refractory phase has not ended at time \(t\); therefore it contributes with \(\delta(x-x_{r})\). The last term comes from trajectories where the particle has been reset at \(t_{1}\in(0,t)\), has had a subsequent refractory period finishing at \(\tau_{1}\in(t_{1},t)\), and has reached \(x\) at time \(t\) following its renewed dynamics in the interval \((\tau_{1},t)\). For the sake of simplicity, we are going to take \(x_{r}=x_{0}\) in the remainder of the paper, i.e. the particle starts at the very beginning of the process from the resetting location. Figure 1: Single trajectory for stochastic resetting with refractory periods. The labels \(t_{i}\), \(i=1,2,\ldots\), mark a new reset to \(x_{0}\) where the propagation phase (blue line) ends with an instantaneous reset (dashed black line) and the refractory period begins (red line). Analogously, \(\tau_{i}\) marks the end of the refractory phase after \(t_{i}\), \(i=1,2,\ldots\),. The duration \(\sigma\) of each propagation (refractory) phase comes from the PDF \(f(\sigma)\) (\(w(\sigma)\)). For the sake of simplicity, the initial condition is equal to the resetting position \(x_{0}=x_{r}\). ## 3 Pathway formulation for the probability density function ### General framework We aim at working out an expression for \(p(x,t|x_{0})\) for all times in closed form, thus going beyond the solution of the PDF in the Laplace domain that can be found in the literature [57, 58]. First, the density probability can be split into different pathways, based on how many full renewals of the dynamics have occurred from the beginning, \[p(x,t|x_{0})=\sum_{n=0}^{\infty}\left[p_{n}^{\rm(p)}(x,t|x_{0})+p_{n}^{\rm(r)} (x,t|x_{0})\right]. \tag{4}\] On the right-hand side (rhs) of equation (4), we have distinguished between propagation (\(p_{n}^{\rm(p)}\)) and refractory period (\(p_{n}^{\rm(r)}\)) contributions, depending on the final stage of the evolution up to time \(t\). The particular case \(n=0\) corresponds to the no-renewed evolution, \[p_{0}^{\rm(p)}(x,t|x_{0})=F(t)p^{\rm(f)}(x,t|x_{0}),\quad p_{0}^{\rm(r)}(x,t|x_ {0})=\delta(x-x_{0})\int_{0}^{t_{1}}dt_{1}f(t_{1})W(t-t_{1}). \tag{5}\] For generic \(n\geq 0\), both \(p_{n}^{\rm(p)}\) and \(p_{n}^{\rm(r)}\) can be built in a systematic way, \[p_{n}^{\rm(p)}(x,t|x_{0}) = \prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{t}\mathrm{d}t_{i}\,f(t_{i }-\tau_{i-1})\int_{t_{i}}^{t}\mathrm{d}\tau_{i}\,w(\tau_{i}-t_{i})\right] \tag{6a}\] \[\times F(t-\tau_{n})p^{\rm(f)}(x,t-\tau_{n}|x_{0}),\] \[p_{n}^{\rm(r)}(x,t|x_{0}) = \delta(x-x_{0})\prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{t}\mathrm{ d}t_{i}\,f(t_{i}-\tau_{i-1})\int_{t_{i}}^{t}\mathrm{d}\tau_{i}\,w(\tau_{i}-t_{i})\right]\] (6b) \[\times\int_{\tau_{n}}^{t}\mathrm{d}t_{n+1}\,f(t_{n+1}-\tau_{n})W( t-t_{n+1}),\] where \(\tau_{0}=0\) has been introduced for the sake of a compact notation. The above equations can be simplified if we rewrite them as convolutions, \[p_{n}^{\rm(p)}(x,t|x_{0}) = \left\{\left[f*w\right]^{*n}*p_{0}^{\rm(p)}\right\}(x,t|x_{0}), \tag{7a}\] \[p_{n}^{\rm(r)}(x,t|x_{0}) = \delta(x-x_{0})\left\{\left[f*w\right]^{*n}*f*W\right\}(t), \tag{7b}\] where \(p_{0}^{\rm(p)}\) is defined in equation (5). Note that we have introduced the asterisk notation for the convolution product \[[A*B](t)=\int_{0}^{t}dt^{\prime}A(t^{\prime})B(t-t^{\prime}) \tag{8}\] and the convolution power \(A^{*2}=A*A\). Taking advantage of our expressing \(p_{n}^{\rm(p)}\) and \(p_{n}^{\rm(r)}\) as convolutions, their Laplace transforms are written in a straightforward way: \[\stackrel{{\curvearrowleft}}{{p_{n}^{\rm(p)}(x,s|x_{0})= \left(\stackrel{{\curvearrowleft}}{{f}}(s)\widetilde{w}(s) \right)^{n}\stackrel{{\curvearrowleft}}{{p_{0}^{\rm(p)}}}(x,s|x_ {0}),}\right.} \tag{9a}\] \[\stackrel{{\curvearrowleft}}{{p_{n}^{\rm(r)}(x,s|x_ {0})=\left(\stackrel{{\curvearrowleft}}{{f}}(s)\widetilde{w}(s) \right)^{n}\stackrel{{\curvearrowleft}}{{f}}(s)\widetilde{W}(s) \delta(x-x_{0}).}\right.} \tag{9b}\] The sum over \(n\) gives us the total contribution, \[\overbrace{p^{\rm(p)}(x,s|x_{0})} =\sum_{n=0}^{\infty}\overbrace{p_{n}^{\rm(p)}(x,s|x_{0})}^{\overbrace {p_{0}^{\rm(p)}(x,s|x_{0})}^{\overbrace{p_{0}^{\rm(p)}(x,s|x_{0})}^{\overbrace{ p_{0}^{\rm(p)}(x,s|x_{0} and refractory contributions given now by \[p_{n}^{(\mathrm{p})}(x,t|x_{0}) = \prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{t}\mathrm{d}t_{i}\int_{t_{i }}^{t}\mathrm{d}\tau_{i}\,h(t_{i}-\tau_{i-1},\tau_{i}-t_{i})\right] \tag{15a}\] \[\times F(t-\tau_{n})p^{(\mathrm{f})}(x,t-\tau_{n}|x_{0}),\] \[p_{n}^{(\mathrm{r})}(x,t|x_{0}) = \delta(x-x_{0})\prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{t}\mathrm{ d}t_{i}\int_{t_{i}}^{t}\mathrm{d}\tau_{i}\,h(t_{i}-\tau_{i-1},\tau_{i}-t_{i})\right]\] (15b) \[\times\int_{\tau_{n}}^{t}\mathrm{d}t_{n+1}\int_{t}^{\infty} \mathrm{d}\tau_{n+1}h(t_{n+1}-\tau_{n},\tau_{n+1}-t_{n+1}).\] In this case, we do not have a clear convolution structure because of the correlation. Notwithstanding, it is possible to simplify these expressions going to the Laplace domain, as shown below. Let us start with the propagation phase, \[\overbrace{p_{n}^{(\mathrm{p})}}^{(\mathrm{p})}(x,s|x_{0}) = \int_{0}^{\infty}\mathrm{d}t\,e^{-st}\prod_{i=1}^{n}\left[\int_{ \tau_{i-1}}^{t}\mathrm{d}t_{i}\int_{t_{i}}^{t}\mathrm{d}\tau_{i}\,h(t_{i}- \tau_{i-1},\tau_{i}-t_{i})\right] \tag{16}\] \[\times F(t-\tau_{n})p^{(\mathrm{f})}(x,t-\tau_{n}|x_{0}),\] \[= \prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{\infty}\mathrm{d}t_{i} \int_{t_{i}}^{\infty}\mathrm{d}\tau_{i}\,h(t_{i}-\tau_{i-1},\tau_{i}-t_{i})\right]\] \[\times\int_{\tau_{n}}^{\infty}\mathrm{d}t\,e^{-st}F(t-\tau_{n})p^ {(\mathrm{f})}(x,t-\tau_{n}|x_{0}).\] Introducing a new integration variable \(t^{\prime}=t-\tau_{n}\), the last integral turns to \[\int_{0}^{\infty}\mathrm{d}t^{\prime}\,e^{-s(t^{\prime}+\tau_{n})}F(t^{\prime })p^{(\mathrm{f})}(x,t^{\prime}|x_{0})=e^{-s\tau_{n}}\overbrace{Fp^{(\mathrm{ f})}}^{(\mathrm{f})}(x,s). \tag{17}\] Afterwards, introducing \(\tau_{n}^{\prime}=\tau_{n}-t_{n}\) and \(t_{n}^{\prime}=t_{n}-\tau_{n-1}\), \[\int_{\tau_{n-1}}^{\infty}\mathrm{d}t_{n}\int_{t_{n}}^{\infty}\mathrm{d}\tau_ {n}\,h(t_{n}-\tau_{n-1},\tau_{n}-t_{n})e^{-s\tau}\overbrace{Fp^{(\mathrm{f})}} ^{(\mathrm{f})}(x,s)=e^{-s\tau_{n-1}}\overbrace{h(s,s)Fp^{(\mathrm{f})}}^{( \mathrm{f})}(x,s), \tag{18}\] where we have employed the notation \[\overbrace{h}^{\sim}(s,m)=\int_{0}^{\infty}\mathrm{d}t\,e^{-st}\int_{0}^{ \infty}\mathrm{d}\tau\,e^{-m\tau}h(t,\tau). \tag{19}\] for the bivariate Laplace transform of the joint probability \(h(t,\tau)\). Note that \(\overbrace{f}^{\sim}(s)=\overbrace{h}^{\sim}(s,0)\) and \(\widehat{w}(s)=\overbrace{h}^{\sim}(0,s)\). Iterating this procedure \(n\) times, we obtain \[\overbrace{p_{n}^{(\mathrm{p})}}^{(\mathrm{p})}(x,s)=\overbrace{h}^{n}(s,s) \overbrace{Fp^{(\mathrm{f})}}^{(\mathrm{f})}(x,s), \tag{20}\] and the Laplace transform of \(p^{(\mathrm{p})}(x,t)\) is given by \[\overbrace{p^{(\mathrm{p})}}^{(\mathrm{p})}(x,s)=\frac{1}{1-\over h}^{\sim} \overbrace{Fp^{(\mathrm{f})}}^{(\mathrm{f})}(x,s). \tag{21}\] A similar procedure can be carried out for the refractory contribution. The Laplace transform of equation (15b) is \[\overbrace{p_{n}^{\rm(r)}(x,s|x_{0})=}^{\sim} \delta(x-x_{0})\int_{0}^{\infty}\mathrm{d}te^{-st}\prod_{i=1}^{n} \left[\int_{\tau_{i-1}}^{t}\mathrm{d}t_{i}\int_{t_{i}}^{t}\mathrm{d}\tau_{i}\,h (t_{i}-\tau_{i-1},\tau_{i}-t_{i})\right]\] \[\times\int_{\tau_{n}}^{t}\mathrm{d}t_{n+1}\int_{t}^{\infty} \mathrm{d}\tau_{n+1}h(t_{n+1}-\tau_{n},\tau_{n+1}-t_{n+1})\] \[= \delta(x-x_{0})\prod_{i=1}^{n+1}\left[\int_{\tau_{i-1}}^{\infty} \mathrm{d}t_{i}\int_{t_{i}}^{\infty}\mathrm{d}\tau_{i}\,h(t_{i}-\tau_{i-1}, \tau_{i}-t_{i})\right]\int_{t_{n+1}}^{\tau_{n+1}}\mathrm{d}t\,e^{-st}\] \[= \delta(x-x_{0})\prod_{i=1}^{n+1}\left[\int_{\tau_{i-1}}^{\infty} \mathrm{d}t_{i}\int_{t_{i}}^{\infty}\mathrm{d}\tau_{i}\,h(t_{i}-\tau_{i-1}, \tau_{i}-t_{i})\right]\frac{e^{-st_{n+1}}-e^{-s\tau_{n+1}}}{s}. \tag{22}\] Integrating in an iterative way as before, one gets \[\overbrace{p_{n}^{\rm(r)}(x,s)=\delta(x-x_{0})\frac{\sim}{h(s,0)-\sim}\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! out to be \[\overleftrightarrow{p^{\rm(p)}}(x,s|x_{0}) =\overleftrightarrow{p^{\rm(f)}}(x,s+r_{1}|x_{0})\] \[\quad+\frac{r_{1}r_{2}}{r_{1}+r_{2}}\left(\frac{1}{s}-\frac{1}{s+r _{1}+r_{2}}\right)\overleftrightarrow{p^{\rm(f)}}(x,s+r_{1}|x_{0}), \tag{26a}\] \[\overleftrightarrow{p^{\rm(r)}}(x,s|x_{0}) =\frac{r_{1}}{r_{1}+r_{2}}\left(\frac{1}{s}-\frac{1}{s+r_{1}+r_{2 }}\right)\delta(x-x_{0}), \tag{26b}\] which can be readily inverted, \[p^{\rm(p)}(x,t|x_{0}) =e^{-r_{1}t}p^{\rm(f)}(x,t|x_{0})\] \[\quad+\frac{r_{2}}{r_{1}+r_{2}}r_{1}e^{-r_{1}t}\int_{0}^{t}\, \mathrm{d}\tau\,\left(e^{r_{1}\tau}-e^{-r_{2}\tau}\right)p^{\rm(f)}(x,t-\tau| x_{0}), \tag{27a}\] \[p^{\rm(r)}(x,t|x_{0}) =\frac{r_{1}}{r_{1}+r_{2}}\left(1-e^{-(r_{1}+r_{2})t}\right) \delta(x-x_{0}). \tag{27b}\] Therefore, we have obtained the exact evolution of the system in the time domain--although still in terms of an integral term. Later on, specifically in section 3.2.3, an approximation leading to a closed-form expression is provided. Note that the "standard" expressions--i.e. those corresponding to the case without refractory period--are reobtained by taking the limit \(r_{2}\to\infty\) in equation (27). #### 3.2.1 The case \(r_{1}=r_{2}\). When the Poissonian rates for resetting and refractory periods are equal, \(r_{1}=r_{2}=r\), the above expressions become especially simple, \[p^{\rm(p)}(x,t|x_{0}) =e^{-rt}p^{\rm(f)}(x,t|x_{0})+re^{-rt}\int_{0}^{t}\mathrm{d}\tau \sinh\left(r\tau\right)p^{\rm(f)}(x,t-\tau|x_{0}), \tag{28a}\] \[p^{\rm(r)}(x,t|x_{0}) =e^{-rt}\sinh\left(rt\right)\delta(x-x_{0}). \tag{28b}\] We would like to highlight that these expressions may be directly obtained in the time domain, without resorting to the Laplace transform. Let us go back to equation (6) and substitute the exponential distributions therein, \[p_{0}^{\rm(p)}(x,t|x_{0}) =e^{-rt}p^{\rm(f)}(x,t|x_{0}) \tag{29a}\] \[p_{n}^{\rm(p)}(x,t|x_{0}) =r^{2n}e^{-rt}\int_{0}^{t}\mathrm{d}t_{1}\int_{t_{1}}^{t}\mathrm{ d}\tau_{1}\int_{\tau_{1}}^{t}\mathrm{d}t_{2}\ldots\int_{t_{n}}^{t}\mathrm{d} \tau_{n}\,p^{\rm(f)}(x,t-\tau_{n}|x_{0}),\] \[=r^{2n}e^{-rt}\int_{0}^{t}d\tau_{n}\frac{\tau_{n}^{2n-1}}{(2n-1)! }p^{\rm(f)}(x,t-\tau_{n}|x_{0}),\quad n\geq 1,\] (29b) \[p_{n}^{\rm(r)}(x,t|x_{0}) =e^{-rt}\frac{(rt)^{2n+1}}{(2n+1)!}\delta(x-x_{0}). \tag{29c}\] Summing over all \(n\) yields equation (28). #### 3.2.2 Non-equilibrium steady state. Now, the asymptotic long-time behaviour is derived. For doing so, we assume a specific functional form of the free propagator \(p^{\rm(f)}(x,t|x_{0})\). Specifically, we consider the most usual case, which is pure diffusion. Therein, \(p^{\rm(f)}\) is the Green function for the diffusion equation, \[p^{\rm(f)}(x,t|x_{0})=\frac{1}{\sqrt{4\pi Dt}}\exp\left[-\frac{(x-x_{0})^{2}}{4 Dt}\right], \tag{30}\] with \(D\) being the diffusion coefficient. The long-time behaviour of \(p(x,t|x_{0})\) can be found by making use of the final value theorem \[\lim_{t\to\infty}z(t)=\lim_{s\to 0}s\,\widehat{z}(s). \tag{31}\] Hence, taking into account equation (26), the long-time behaviour for arbitrary \((r_{1},r_{2})\) is achieved, \[\lim_{t\to\infty}p^{\rm(p)}(x,t|x_{0}) =\lim_{s\to 0}\overbrace{s\,p^{\rm(p)}}^{\rm(p)}(x,s|x_{0})=\frac{1} {2}\frac{r_{2}}{r_{1}+r_{2}}\sqrt{\frac{r_{1}}{D}}\exp\left[-\sqrt{\frac{r_{1 }}{D}}|x-x_{0}|\right], \tag{32a}\] \[\lim_{t\to\infty}p^{\rm(r)}(x,t|x_{0}) =\lim_{s\to 0}s\,p^{\rm(r)}(x,s|x_{0})=\frac{r_{1}}{r_{1}+r_{2}} \delta(x-x_{0}). \tag{32b}\] Of course, these results are consistent with those obtained by taking the infinite time limit in (27), as well as with the results found in [57, 58]. Note that the normalization of propagation and refractory phases in the stationary are given by the fraction of the average time spent in the corresponding phase, as physically expected. In figure 2, the convergence of the integral expression (27a) of \(p^{\rm(p)}\) to its NESS (32a) is shown. #### 3.2.3 Relaxation to the steady state. We have already obtained exact expressions for the PDFs of each phase in the time domain, equations (27a) and (27b), as well as their long-time behaviour, equations (32a) and (32b). Still, equation (27a) is not particularly illuminating, since one cannot infer how the relaxation to the NESS occurs in the propagation phase in a transparent way. Figure 2 provides a hint on this issue, it can be observed that \(p^{\rm(p)}(x,t|x_{0})\) reaches the steady state gradually in a central region \(|x-x_{0}|<\hat{x}(t)\). The typical length around the reset point in which the NESS has been already reached, \(\hat{x}(t)\), grows as \(t\) increases, establishing a dynamic separation between a transient outer region, \(|x-x_{0}|>\hat{x}(t)\), and the aforementioned relaxed inner region, \(|x-x_{0}|<\hat{x}(t)\). Similar phenomena have already been observed in other resetting setups [27, 32]. In order to obtain the long-time behaviour of (27a), we start by rewriting it as \[p^{\rm(p)}(x,t|x_{0}) =\frac{1}{\sqrt{4\pi Dt}}\exp\left[-t\,\Phi_{1}\left(1;\frac{x-x_ {0}}{t}\right)\right]\] \[\quad+\frac{r_{1}r_{2}}{r_{1}+r_{2}}\sqrt{\frac{t}{4\pi D}}\int_ {0}^{1}\,\frac{\rmd\omega}{\sqrt{\omega}}\,\exp\left[-t\,\Phi_{1}\left(\omega ;\frac{x-x_{0}}{t}\right)\right]\] \[\quad-\frac{r_{1}r_{2}}{r_{1}+r_{2}}\sqrt{\frac{t}{4\pi D}}e^{-(r _{1}+r_{2})t}\int_{0}^{1}\,\frac{\rmd\omega}{\sqrt{\omega}}\,\exp\left[t\, \Phi_{2}\left(\omega;\frac{x-x_{0}}{t}\right)\right] \tag{33}\] where we have introduced the change of variable \(\omega=1-\tau/t\), and defined \[\Phi_{1}(\omega;y)\equiv r_{1}\omega+\frac{y^{2}}{4D\omega},\quad\Phi_{2}(\omega ;y)\equiv r_{2}\omega-\frac{y^{2}}{4D\omega}. \tag{34}\] For long times, taking constant \((x-x_{0})/t\), the main contribution to the integrals over \(\omega\) stem from the maximum of the exponents, i.e. the minimum for \(\Phi_{1}(\omega;(x-x_{0})/t)\) and the maximum for \(\Phi_{2}(\omega;(x-x_{0})/t)\)--as given by the so-called Laplace method for the asymptotic evaluation of integrals [62]. In the following, we give a simplified picture of the derivation of the dominant behaviour of (27a)--or (33)--stemming from the Laplace method, emphasising the intuitive ideas. A rigorous derivation of a more complex, but still explicit, formula for the long-time behaviour, which properly takes into account all the terms involved in equation (33), as well as the subtleties stemming from the correct application of Laplace's method when the maximum is close to the boundaries of the integration interval, is relegated to A. For long times, taking constant \((x-x_{0})/t\), the main contribution to the integrals over \(\omega\) in (33) arises from the maximum of the exponents, i.e. the minimum for \(\Phi_{1}(\omega;(x-x_{0})/t)\) and the maximum for \(\Phi_{2}(\omega;(x-x_{0})/t)\). On the one hand, \(\Phi_{2}\) is a monotonically increasing function of \(\omega\) and its maximum is always at the upper limit of integration. The corresponding contribution is thus always subdominant against the first, non-integral, term in (33), i.e. the one involving trajectories with no resetting events, as shown in A. On the other hand, \(\Phi_{1}(\omega;y)\) is not monotonic and Figure 2: PDF of the propagation phase. Numerical integration of \(p^{\rm(p)}(x,t|x_{0})\) (colourful dashed lines), given by (27a), at different times and the infinite time NESS (32a) (solid black line) are shown. All the results are shown using \(x_{0}=0\), \(D=1\), \(r_{1}=r_{2}=1\) as parameters. has a single absolute minimum at \(\omega_{0}=|y|/\sqrt{4Dr_{1}}\), since \(r_{1}\) and \(D\) are strictly positive. The minimum of \(\Phi_{1}\) within the integration interval \((0,1)\) is \(\omega_{0}\) if \(\omega_{0}<1\), and direct application of the Laplace method gives \[\frac{r_{1}r_{2}}{r_{1}+r_{2}}\sqrt{\frac{t}{4\pi D}}\int_{0}^{1}\,\frac{ \mathrm{d}\omega}{\sqrt{\omega}}\,\exp\left[-t\,\Phi_{1}\left(\omega;\frac{x- x_{0}}{t}\right)\right]\sim\frac{1}{2}\frac{r_{2}}{r_{1}+r_{2}}\sqrt{\frac{r_{1}}{D}} \exp\left[-\sqrt{\frac{r_{1}}{D}}|x-x_{0}|\right], \tag{35}\] which corresponds to the NESS (32a). However, if \(\omega_{0}>1\), the minimum of \(\Phi_{1}\) within the integration interval is reached at the boundaries. Similarly to the situation with \(\Phi_{2}\), this entails that the corresponding contribution is subdominant against the first term in (33). Summing up, the PDF of the propagation phase can be estimated as \[p^{(\mathrm{p})}(x,t|x_{0})\sim\begin{cases}\frac{1}{2}\frac{r_ {2}}{r_{1}+r_{2}}\sqrt{\frac{r_{1}}{D}}\exp\left[-\sqrt{\frac{r_{1}}{D}}|x-x_{ 0}|\right],&\frac{|x-x_{0}|}{t}<\sqrt{4Dr_{1}},\\ \frac{1}{\sqrt{4\pi Dt}}\exp\left[-r_{1}t-\frac{(x-x_{0})^{2}}{4Dt} \right],&\frac{|x-x_{0}|}{t}>\sqrt{4Dr_{1}}.\end{cases} \tag{36}\] The above rough discussion explains the observed separation into two regimes: the system has relaxed to the NESS within a certain spatial region, which increases linearly with \(t\), as given by the condition \(|x-x_{0}|/t<\sqrt{4Dr_{1}}\), whereas the transient behaviour is observed outside, i.e. for \(|x-x_{0}|/t>\sqrt{4Dr_{1}}\). The comparison between simulations and the analytical approach in figure 3 shows an excellent agreement. ## 4 First passage time with refractory periods In this section, we consider the first passage time problem to a target point \(x_{t}\). In the absence of refractory period (\(r_{2}\to\infty\)), it is known that there appears an optimal resetting rate \(r_{1}^{\mathrm{opt}}\) that minimises the MFPT [1, 2]. Here, we are interested in the analysis of the effect of the refractory period on the MFPT. On a physical basis, it is clear that the MFPT will increase as \(r_{2}\) decreases, i.e. as the time spent at rest increases. Still, since the probability distributions of resetting events and refractory periods are independent, one might naively think that the optimal resetting rate \(r_{1}^{\mathrm{opt}}\) would remain unaffected by \(r_{2}\). The analysis below shows that this expectation is not fulfilled: in fact, the optimal resetting rate \(r_{1}^{\mathrm{opt}}\) presents a non-trivial dependence on \(r_{2}\). ### General formulation Including an absorbing boundary at \(x_{t}\) in our system makes the former expressions invalid: normalisation to unity is not preserved anymore. For the sake of clarity, we do not explicitly add the parametric dependence on \(x_{t}\), but it is important to keep in mind that all functions depend on it from now on, e.g. \(p^{(\mathrm{f})}(x,t|x_{0})\) refers to the propagator in absence of resetting of a particle starting at \(x_{0}\) with an absorbing boundary at \(x_{t}\). For every propagation phase prior to a resetting event, we have to consider that the particle has not reached the target point \(x_{t}\). Let \(Q^{(\mathrm{f})}(x_{0},t)\) denote the free survival probability, i.e. the probability of not being absorbed by the target for a time interval \(t\) in the absence of resetting, provided that the particle started propagating from \(x_{0}\) at \(t=0\). The probability of finding the particle in \(x\) at time \(t\) fulfills the renewal equation \[p(x,t|x_{0}) = F(t)p^{(\mathrm{f})}(x,t|x_{0}) \tag{37}\] \[+\int_{0}^{t}\mathrm{d}t_{1}\,f(t_{1})Q^{(\mathrm{f})}(x_{0},t_{ 1})W(t-t_{1})\delta(x-x_{0})\] \[+\int_{0}^{t}\mathrm{d}t_{1}\,f(t_{1})Q^{(\mathrm{f})}(x_{0},t_{ 1})\int_{t_{1}}^{t}\mathrm{d}\tau_{1}\,w(\tau_{1}-t_{1})\,p(x,t-\tau_{1}|x_{0})\] Its integral over \(x\) gives us the renewal structure of the survival probability with resetting \(Q_{r}(x_{0},t)\), \[Q_{r}(x_{0},t) = F(t)Q^{(\mathrm{f})}(x_{0},t) \tag{38}\] \[+\int_{0}^{t}\mathrm{d}t_{1}\,f(t_{1})Q^{(\mathrm{f})}(x_{0},t_{ 1})W(t-t_{1})\] \[+\int_{0}^{t}\mathrm{d}t_{1}\,f(t_{1})Q^{(\mathrm{f})}(x_{0},t_{ 1})\int_{t_{1}}^{t}\mathrm{d}\tau_{1}\,w(\tau_{1}-t_{1})\,Q_{r}(x_{0},t-\tau_{ 1})\] \[= F(t)Q^{(\mathrm{f})}(x_{0},t)+\left[\left(fQ^{(\mathrm{f})} \right)*w*Q_{r}\right](x_{0},t)+\left[\left(fQ^{(\mathrm{f})}\right)*W\right]( x_{0},t)\] Figure 3: Comparison of the numerical and analytical PDFs of the propagation phase. Parameter values are \(D=1\), \(r_{1}=1\), \(r_{2}=2\) and \(x_{0}=0\). Symbols stand for numerical simulations for \(t=1.5\) (light blue circles) and \(t=3\) (red squares), while solid black lines stand for the analytical approximation given by equation (36). Vertical dashed lines at \(|x|/t=\sqrt{4Dr_{1}}\) indicate the separation between the inner region, where the NESS has already been reached, and the outer region, where the transient behaviour is observed. Note that now the resetting distribution is weighted with the survival probability when compared to the expressions obtained before. To emphasise the power of our pathway formulation, we carry out an analysis completely similar to that in Section 3. That is, we expand the PDF in a series of terms \(p_{n}^{(\mathrm{p}),(\mathrm{r})}\) corresponding to a given number of renewals \(n\), \[p_{n}^{(\mathrm{p})}(x,t|x_{0}) =\prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{t}\mathrm{d}t_{i}\,f(t_{ i}-\tau_{i-1})Q^{(\mathrm{f})}(x_{0},t_{i}-\tau_{i-1})\int_{t_{i}}^{t}\mathrm{d} \tau_{i}\,w(\tau_{i}-t_{i})\right]\] \[\qquad\times F(t-\tau_{n})p^{(\mathrm{f})}(x,t-\tau_{n}|x_{0})\] \[=\left\{\left[\left(fQ^{(\mathrm{f})}\right)*w\right]^{*n}* \left(Fp^{(\mathrm{f})}\right)\right\}(x,t), \tag{39a}\] \[p_{n}^{(\mathrm{r})}(x,t|x_{0}) =\prod_{i=1}^{n}\left[\int_{\tau_{i-1}}^{t}\mathrm{d}t_{i}\,f(t_{ i}-\tau_{i-1})Q^{(\mathrm{f})}(x_{0},t_{i}-\tau_{i-1})\int_{t_{i}}^{t} \mathrm{d}\tau_{i}\,w(\tau_{i}-t_{i})\right]\] \[\qquad\times\int_{\tau_{n}}^{t}\mathrm{d}t_{n+1}\,f(t_{n+1}-\tau_ {n})Q^{(\mathrm{f})}(x_{0},t_{n+1}-\tau_{n})W(t-t_{n+1})\delta(x-x_{0})\] \[=\left\{\left[\left(fQ^{(\mathrm{f})}\right)*w\right]^{*n}*\left( fQ^{(\mathrm{f})}\right)*W\right\}(t)\delta(x-x_{0}). \tag{39b}\] With this approach, we get \[\widehat{Q}_{r}^{\prime}(x_{0},s)=\frac{1}{1-fQ^{(\mathrm{f})}(x_{0},s) \widehat{w}(s)}\left[\overbrace{FQ^{(\mathrm{f})}(x_{0},s)+\overbrace{fQ^{( \mathrm{f})}(x_{0},s)}^{\prime}}\widehat{W}(s)\right]. \tag{40}\] Of course, this general result for arbitrary \(f\) and \(w\) is consistent with the results in the literature [57, 58]. ### Poissonian resetting and refractory period Now we consider the same model as in subsection 3.2, the resetting time and refractory period distributions are both Poissonian and given by (25a) and (25b). The propagator in the presence of the absorbing boundary is \[p^{(\mathrm{f})}(x,t|x_{0})=\frac{1}{\sqrt{4\pi Dt}}\left\{\exp\left[-\frac{( x-x_{0})^{2}}{4Dt}\right]-\exp\left[-\frac{(x+x_{0}-2x_{t})^{2}}{4Dt}\right] \right\}, \tag{41}\] which is valid for all \(x_{t}\in\mathbb{R}\). The free survival probability and its Laplace transform are \[Q^{(\mathrm{f})}(x_{0},t)=\mathrm{erf}\left(\frac{1}{2}\sqrt{\frac{\tau_{d}}{t} }\right),\quad\overbrace{Q^{(\mathrm{f})}(x_{0},s)}^{\prime}=\frac{1-e^{-\sqrt {\tau_{d}s}}}{s}, \tag{42}\] where we have defined the characteristic diffusion time between the initial position and the target as \[\tau_{d}=\frac{(x_{t}-x_{0})^{2}}{D}\geq 0. \tag{43}\] Note that \(\tau_{d}\), and thus \(Q^{\rm(f)}\) and \(\widehat{Q^{\rm(f)}}\), only depends on the distance \(|x_{t}-x_{0}|\) between the initial position and the target. As a consequence, equation (40) becomes \[\widehat{Q_{r}^{\prime}}(x_{0},s) =(s+r_{1}+r_{2})\frac{\widehat{Q^{\rm(f)}}(x_{0},s+r_{1})}{s+r_{2 }-r_{1}r_{2}\widehat{Q^{\rm(f)}}(x_{0},s+r_{1})} \tag{44}\] \[=(s+r_{1}+r_{2})\frac{1-e^{-\sqrt{\tau_{d}(s+r_{1})}}}{s(s+r_{1}+ r_{2})+r_{1}r_{2}e^{-\sqrt{\tau_{d}(s+r_{1})}}}. \tag{45}\] #### 4.2.1 Mean first passage time. Equation (45) for \(\widehat{Q_{r}}(x_{0},s)\) cannot be easily inverted to time domain, but it represents an excellent resource to compute some relevant physical properties in an exact way. Let us introduce the first passage density of the particle being absorbed by the target: \[f_{\rm FPT}(t;\tau_{d})=-\frac{\partial Q_{r}(x_{0},t)}{\partial t}, \tag{46}\] where we have introduced explicitly in the notation the parametric dependence on \(\tau_{d}\). Therefrom, we derive the MFPT as the mean absorbing time, \[T(r_{1},r_{2};\tau_{d})=\int_{0}^{\infty}\mathrm{d}t\,t\,f_{\rm FPT}(t;\tau_{ d})=\lim_{s\to 0}\widehat{Q_{r}^{\prime}}(x_{0},s), \tag{47}\] which is thus easily computable from (45), \[T(r_{1},r_{2};\tau_{d})=\left(e^{\sqrt{\tau_{d}\tau_{1}}}-1\right)\left(\frac {1}{r_{1}}+\frac{1}{r_{2}}\right). \tag{48}\] Since we are interested in the dependence of the MFPT on the parameters controlling the typical duration of the reset events, \(r_{1}\), and the refractory periods, \(r_{2}\), we have introduced them explicitly in the notation. As expected on a physical basis, two contributions appear in the MFPT coming from the two summands in the second parenthesis. The first one, which depends exclusively on \(r_{1}\), corresponds to the instantaneous resetting without refractory period [1, 2]. The second one stems from the refractory period that we have introduced after each resetting event. Despite its simple functional form, equation (48) exhibits atypical and rich behaviour. We are interested in how the MFPT changes as a function of \(r_{1}\) and \(r_{2}\)-- clearly, it monotonically increases with \(\tau_{d}\), that is, it increases with the distance \(|x_{t}-x_{0}|\) and the inverse of diffusion coefficient \(D\). With this knowledge, we are able to write equation (48) as a function which only depends on \(r_{1}\) and \(r_{2}\) introducing dimensionless parameters \(r_{1}=r_{1}^{*}/\tau_{d}\), \(r_{2}=r_{2}^{*}/\tau_{d}\) and \(T=T^{*}\tau_{d}\). Therefore, the MFPT is \[T(r_{1},r_{2})=\left(e^{\sqrt{r_{1}}}-1\right)\left(\frac{1}{r_{1}}+\frac{1}{r _{2}}\right), \tag{49}\] where we have dropped the asterisk in order not to clutter the formulae. #### 4.2.2 Optimal resetting rate \(r_{1}^{\text{opt}}(r_{2})\). Let us first look for the minimum of \(T(r_{1},r_{2})\) as a function of \(r_{2}\) for fixed \(r_{1}\). We clearly get that \(r_{2}\to\infty\) for the best choice, reobtaining the MFPT in the absence of refractory period [1, 2], \[\lim_{r_{2}\to\infty}T(r_{1},r_{2})=\frac{e^{\sqrt{r_{1}}}-1}{r_{1}}. \tag{50}\] As a function of \(r_{1}\), the minimum of \(T(r_{1},r_{2}\to\infty)\) is reached at \(r_{1}=r_{1}^{(0)}=\left(2+W(-2e^{-2})\right)^{2}\), where \(W(x)\) corresponds to the Lambert \(W\) function, which is the inverse function of equation \(f(x)=xe^{x}\), i.e. the solution of \(W(x)\exp[W(x)]=x\)[1, 2]. Now we focus on looking for the minimum of \(T(r_{1},r_{2})\) as a function of \(r_{1}\) for fixed \(r_{2}\). In figure 4, we show a couple of instances of \(T(r_{1},r_{2})\) as a function of \(r_{1}\) for \(r_{2}\in\{1,5\}\), finding an excellent agreement between theory and simulations. The figure highlights the existence of a certain optimal curve \(r_{1}^{\text{opt}}=r_{1}^{\text{opt}}(r_{2})\), which is obtained by solving \[0=\left.\frac{\partial T(r_{1},r_{2})}{\partial r_{1}}\right|_{r_{1}^{\text{ opt}}}=\left.\frac{2r_{2}\left(1-e^{\sqrt{r_{1}}}\right)+\sqrt{r_{1}}e^{\sqrt{r_{1}}} \left(r_{1}+r_{2}\right)}{2r_{1}^{2}r_{2}}\right|_{r_{1}^{\text{opt}}}. \tag{51}\] Figure 4: Mean first passage time \(T(r_{1},r_{2})\) as a function of the resetting rate \(r_{1}\) for fixed refractory rate \(r_{2}\). An excellent agreement between simulations (symbols) and theory (solid lines) is found. The optimal resetting rate \(r_{1}^{\text{opt}}\), represented by five-pointed stars, monotonically increases with \(r_{2}\)—in the limit \(r_{2}\to\infty\), the value \(r_{1}^{(0)}=\left(2-W(-2e^{-2})\right)^{2}=2.53964\) is reached. Inset: Optimal resetting rate as a function of the refractory period rate. We show the numerical solution of the implicit equation for \(r_{1}^{\text{opt}}\), as given by equation (52) (solid line), and the analytical approximation for small \(r_{2}\), as given by equation (54) (dashed line). Then, \(r_{1}^{\rm opt}\) is implicitly given as a function of \(r_{2}\) by \[2r_{2}\left(1-e^{\sqrt{r_{1}^{\rm opt}}}\right)+\sqrt{r_{1}^{\rm opt}}e^{\sqrt{r _{1}^{\rm opt}}}\left(r_{1}^{\rm opt}+r_{2}\right)=0. \tag{52}\] Now we analyse the limiting behaviour of \(r_{1}^{\rm opt}\) for both \(r_{2}\to\infty\) (no refractory period) and \(r_{2}\to 0\) (infinite refractory period). On the one hand, for \(r_{2}\to\infty\), we obtain that the limiting value \(\lim_{r_{2}\to\infty}r_{1}^{\rm opt}(r_{2})\) is determined by \[\lim_{r_{2}\to\infty}2\left[r_{1}^{\rm opt}(r_{2})\right]^{-1/2}\left[1-e^{- \sqrt{r_{1}^{\rm opt}(r_{2})}}\right]=1, \tag{53}\] i.e. \(\lim_{r_{2}\to\infty}r_{1}^{\rm opt}(r_{2})=r_{1}^{(0)}\)--which is consistent with the optimal resetting strategy without refractory period previously introduced. On the other hand, it is clear from equation (52) that \(\lim_{r_{2}\to 0}r_{1}^{\rm opt}(r_{2})=0\), which is logical from a physical point of view: for infinite refractory period, the best strategy for the MFPT is to avoid resetting. A dominant balance argument shows that \(r_{1}^{\rm opt}\sim r_{2}\) in this limit. In order to further investigate the dependence of \(r_{1}^{\rm opt}\) on \(r_{2}\) for long refractory periods (small \(r_{2}\)), it is handy to expand \(r_{1}^{\rm opt}\) in a power series of \(\sqrt{r_{2}}\). Substituting this expansion into equation (52), one gets after a little bit of algebra \[r_{1}^{\rm opt}=r_{2}-r_{2}^{3/2}+\frac{5}{6}r_{2}^{2}+O\left(r_{2}^{5/2} \right). \tag{54}\] A comparison between the expansion (54) and the numerical estimate for \(r_{1}^{\rm opt}\) is shown in the inset of figure 4. Our result shows that there appears a "resonance" phenomenon, which optimises the MFPT--making it minimum--for a resetting rate that is linked with the refractory period rate. When the resetting point \(x_{0}\) and the target \(x_{t}\) are close, in the sense that \(r_{2}\tau_{d}\ll 1\), \(r_{1}^{\rm opt}\simeq r_{2}\). As \(r_{2}\) is increased, \(r_{1}^{\rm opt}\) consequently increases but it asymptotically saturates for large enough values of \(r_{2}\): for \(r_{2}\tau_{d}\gg 1\), the optimum MFPT asymptotically tends to its limiting value \(r_{1}^{(0)}\) corresponding to stochastic resetting without refractory periods. ## 5 Conclusions In this work, we have carried out a thorough analysis of the effects of introducing a time cost to stochastic resets in a one-dimensional Brownian searcher. First, we have exploited a pathway formulation to derive general results. This puts forward an alternative, appealing from a physical point of view, methodology to address the study of intermittent dynamics. Second, we have particularised the results for the relevant case of Poissonian resetting events and refractory periods. Therein, not only have we obtained the non-equilibrium stationary state, but also a detailed solution of the transient dynamics in the time domain. Finally, we have studied in depth the single-target search problem with refractory period. Specifically, we have investigated the optimal strategy for resetting, in the sense of minimising the mean first passage time to the target, finding that the optimal resetting rate depends on the typical duration of the refractory phase. From a physical perspective, the final result on the optimal resetting rate to find a target is especially interesting. The dependence of the optimal resetting rate on the refractory period is somehow counterintuitive, since the duration of the time intervals between resetting events and the duration of the refractory phase are independent in our model. The optimal strategy entails a non-trivial, resonance-like behaviour in which the optimal resetting rate equals the inverse of the characteristic time of the refractory period. This phenomenon is reminiscent of resonant activation [63, 64] where, when considering the escape problem in a two-well potential mediated by a fluctuating barrier, an optimal fluctuation rate that minimises the escape time emerges. Indeed, our resetting setup with refractory periods can be thought of as a fluctuating potential that switches between a totally confining potential trap and zero. Just recently, the potential connection between optimal resetting and resonant activation has started to be explored [65], which provides an interesting perspective for further research. ## Acknowledgments C. A. Plata acknowledges the funding received from European Union's Horizon Europe-Marie Sklodowska-Curie 2021 programme through the Postdoctoral Fellowship with Ref. 101065902 (ORION). G. Garcia-Valladares, C. A. Plata and A. Prados acknowledge financial support from Grant PID2021-122588NB-I00 funded by MCIN/AEI/10.13039/501100011033/ and by "ERDF A way of making Europe", and also from Grant ProyExcel_00796 funded by Junta de Andalucia's PAIDI 2020 programme. D.G acknowledges the Nordita fellowship program. Nordita is partially supported by Nordforsk. ## Data availability The codes employed for generating the data that support the findings of this study, together with the Mathematica notebooks employed for producing the figures presented in the paper, are openly available in the GitHub page of University of Sevilla's FINE research group. ## Appendix A Asymptotic analysis of the PDF for the propagation phase in the long-time limit In this appendix, we derive an approximate explicit expression for the different integrals appearing in equation (33). For the sake of compactness, in the remaining of this appendix, we take \(x_{0}=0\). The general result can be reobtained at the end with the substitution \(x\to x-x_{0}\). We start by focusing on the first integral term in (33), \[I_{1}=\int_{0}^{1}\,\mathrm{d}\omega\,\omega^{-1/2}\,e^{-r_{1}t\,[\omega+x^{2}/(4D \omega r_{1}t^{2})]}=\int_{0}^{1}\,\mathrm{d}\omega\,\omega^{-1/2}\,e^{-\omega t ^{*}-\frac{x^{*2}}{\omega t^{*}}}, \tag{11}\] where we have introduced the following dimensionless variables \(t^{*}=r_{1}t\) and \(x^{*}=x/\sqrt{4Dr_{1}^{-1}}\). In the following, we drop the asterisks for the sake of a clearer notation. For long times, Laplace's method tells us that the integral is dominated by the maximum of the exponent, i.e. the minimum of \(\phi_{1}(\omega)=\omega t+x^{2}/(\omega t)\) inside the integration interval \((0,1)\). Since the function \(\phi_{1}(\omega)\) has a relative minimum at \(\omega_{0}=|x|/t\), how to estimate the integral depends on the value of \(\omega_{0}\), specifically on whether \(\omega_{0}\) is larger or smaller than unity. Then, it is handy to introduce the change of variable \(\omega=\omega_{0}\nu\), \[I_{1}=\omega_{0}^{1/2}\int_{0}^{1/\omega_{0}}\,\mathrm{d}\nu\,\nu^{-1/2}\,e^ {-\omega_{0}t\,\psi(\nu)}, \tag{12}\] where we we have introduced \(\psi(\nu)=\nu+1/\nu\), which attains its relative minimum at \(\nu_{0}=1\). Now we asymptoticaly estimate \(I_{1}\) in the long-time limit \(t\gg 1\), with \(\omega_{0}=\mathcal{O}(1)\), i.e. \(x=\mathcal{O}(t)\). We must discriminate between different cases: 1. The relative minimum of \(\psi(\nu)\) at \(\nu=1\) lies inside the interval \((0,1/\omega_{0})\), i.e. \(\omega_{0}<1\) or \(|x|<t\), and, in addition, it is far enough from the upper limit, in a sense that is clarified below. The idea of Laplace's method is to expand \(\psi(\nu)\) around the relative minimum at \(\nu=1\), \[\omega_{0}t\psi(\nu)\sim 2\omega_{0}t+\omega_{0}t(\nu-1)^{2},\] (13) which leads to a Gaussian centred at \(\nu=1\) and very small width, proportional to \((\omega_{0}t)^{-1/2}\ll 1\).2 Therefore, the dominant behaviour of the integral comes from a narrow interval around \(\nu=1\). Here, we consider that \(\omega_{0}\) is such that the whole Gaussian belongs in the integration interval \((0,1/\omega_{0})\), i.e. the integral is dominated by the contribution from \((1-\varepsilon,1+\varepsilon)\), with \(\varepsilon\ll 1\), because we can choose \(\varepsilon\) such that Footnote 2: Note that the following term in the Taylor series of \(\omega_{0}t\psi(\nu)\) is \(-\omega_{0}t(\nu-1)^{3}\), which is negligible against the retained quadratic term \(\omega_{0}t(\nu-1)^{2}\) where the Gaussian contributes to the integral, i.e. for \(\nu-1=O(\omega_{0}t)^{-1/2}\). \[\delta_{\mathrm{in}}\equiv 1/\omega_{0}-1>\varepsilon\gg(\omega_{0}t)^{-1/2}.\] (14) With this line of reasoning, \[I_{1}\sim I_{1}^{(i)}=\omega_{0}^{1/2}e^{-2\omega_{0}t}\int_{1-\varepsilon}^{ 1+\varepsilon}\,\mathrm{d}\nu\,e^{-\omega_{0}t\,(\nu-1)^{2}}=\frac{1}{\sqrt{t }}e^{-2\omega_{0}t}\int_{-\varepsilon\sqrt{\omega_{0}t}}^{+\varepsilon\sqrt{ \omega_{0}t}}\,\mathrm{d}z\,e^{-z^{2}}\sim\sqrt{\frac{\pi}{t}}e^{-2|x|},\] (15) where condition (14) allows for the integration limits in the last integral be extended to \(\pm\infty\). Note that the obtained expression corresponds to the first case in equation (36) if we reintroduce dimensions, i.e. the NESS behaviour (32a). 2. The relative minimum of \(\psi(\nu)\) at \(\nu=1\) lies outside the interval \((0,1/\omega_{0})\), i.e. \(\omega_{0}>1\) or \(|x|>t\) and, in addition, it is far enough from the upper limit, in a sense that is also clarified below. In this case, the minimum of \(\psi(\nu)\) within the interval occurs at the upper limit \(1/\omega_{0}\). Hence, Laplace's method tells us to expand \(\psi(\nu)\) around \(\nu=1/\omega_{0}\), \(\psi(\nu)\simeq\omega_{0}+1/\omega_{0}+(1-\omega_{0}^{2})(\nu-1/\omega_{0})+ \omega_{0}^{3}(\nu-1/\omega_{0})^{2}\) and restrict the integral to a narrow interval \((1/\omega_{0}-\epsilon,1/\omega_{0})\), with \(\varepsilon\ll 1\). The quadratic term is negligible against the linear one if \[\frac{\omega_{0}^{2}-1}{\omega_{0}^{3}}\gg\varepsilon.\] (114) Assuming this "far enough" condition holds, we have \[I_{1}\sim I_{1}^{(ii)} =\omega_{0}\,e^{-t(1+\omega_{0}^{2})}\int_{1/\omega_{0}- \varepsilon}^{1/\omega_{0}}\,\mathrm{d}\nu\,e^{\omega_{0}t\,\left(\omega_{0} ^{2}-1\right)(\nu-1/\omega_{0})}\] (115) \[=\frac{e^{-t(1+\omega_{0}^{2})}}{t\left(\omega_{0}^{2}-1\right) }\int_{-\omega_{0}t\left(\omega_{0}^{2}-1\right)\varepsilon}^{0}\mathrm{d}z \,e^{z}\sim\frac{e^{-t(1+x^{2}/t^{2})}}{t\left(\frac{x^{2}}{t^{2}}-1\right)},\] provided that the extension of the lower limit to \(-\infty\) can be justified, i.e. we can choose \(\varepsilon\) such that \[\omega_{0}t\left(\omega_{0}^{2}-1\right)\varepsilon\gg 1.\] (116) Conditions (14) and (116) can be fulfilled without problems when \(\omega_{0}-1=O(1)\), since they tell us that we have to choose \(\varepsilon\) small but much larger than \((\omega_{0}t)^{-1}\ll 1\). As \(\omega_{0}\) approaches unity, \(1/\omega_{0}=1-\delta_{\mathrm{out}}\), with \(\delta_{\mathrm{out}}\ll 1\), (114) and (116) entail that \[\delta_{\mathrm{out}}\gg\varepsilon,\quad\omega_{0}t\delta_{\mathrm{out}} \varepsilon\gg 1\implies\delta_{\mathrm{out}}\gg(\omega_{0}t)^{-1/2}.\] (117) This "far enough" condition makes sense: it is telling us that the separation of the upper limit from unity (the position of the relative minimum of \(\psi\)) must be much larger than the width of the Gaussian, analogously to (113). Equation (115) has the same dominant contribution in the exponent that the non-resetting term \(p^{(\mathrm{f})}(x,t|x_{0})\). However, the tails of the PDF are dominated by the non-resetting term, as expressed by equation (36), since the coefficient of equation (115), \((x^{2}/t)^{-1}\), is subdominant compared to that in \(p^{(\mathrm{f})}\), \(t^{-1/2}\), for \(x/t=O(1)\). 3. The relative minimum of \(\psi(\nu)\) at \(\nu=1\) lies inside the interval \((0,1/\omega_{0})\), i.e. \(\omega_{0}<1\) or \(|x|<t\), but it is close to the upper limit, \(1/\omega_{0}=1+\delta_{\mathrm{in}}\) with \(\delta_{\mathrm{in}}\) not fulfilling (113), i.e. \(\delta_{\mathrm{in}}=O(\omega_{0}t)^{-1/2}\). Herein, we have \[I_{1}\sim I_{1}^{(iii)} =\omega_{0}^{1/2}e^{-2\omega_{0}t}\int_{1-\varepsilon}^{1+\delta _{\mathrm{in}}}\,\mathrm{d}\nu\,e^{-\omega_{0}t\,(\nu-1)^{2}}=\frac{1}{\sqrt{ t}}e^{-2\omega_{0}t}\int_{-\varepsilon\sqrt{\omega_{0}t}}^{+\delta_{\mathrm{in}} \sqrt{\omega_{0}t}}\,\mathrm{d}z\,e^{-z^{2}}\] (118) \[\sim\frac{1}{2}\sqrt{\frac{\pi}{t}}\mathrm{erfc}\left(\frac{|x|-t }{\sqrt{|x|}}\right)e^{-2|x|},\] where we have considered that \(\varepsilon\) can always be choosen such that \(\varepsilon\sqrt{\omega_{0}t}\gg 1\). Note that this expression asymptotically converges to \(I_{1}^{(i)}\) in equation (114) when the limit \(\delta_{\rm in}\sqrt{\omega_{0}t}\gg 1\) is considered. Thus, this expression may be used to approximate \(I_{1}\) for \(\omega_{0}<1\), i.e. \(|x|<t\), regardless of the value of \(\delta_{\rm in}\). 4. The relative minimum of \(\psi(\nu)\) at \(\nu=1\) lies outside the interval \((0,1/\omega_{0})\), i.e. \(\omega_{0}>1\) or \(|x|>t\) but it is close to the upper limit, i.e. \(1/\omega_{0}=1-\delta_{\rm out}\) with \(\delta_{\rm out}\) not fulfilling (14), i.e. \(\delta_{\rm out}=O(\omega_{0}t)^{-1/2}\). Following the general idea of the Laplace method, we expand \(\psi(\nu)\) around \(1/\omega_{0}=1-\delta_{\rm out}\) in a narrow interval \((1-\delta_{\rm out}-\varepsilon,1-\delta_{\rm out})\). In (ii), \(\varepsilon\ll\delta_{\rm out}\), but here we consider that \(\varepsilon\) is at least \(O(\delta_{\rm out})\). Therefore, we get \[\omega_{0}t\psi(\nu)\simeq\omega_{0}t\left[2+\delta_{\rm out}^{2}+2\delta_{ \rm out}(\nu-1+\delta_{\rm out})+(\nu-1+\delta_{\rm out})^{2}\right],\] (15) neglecting \(O(\omega_{0}t\delta_{\rm out}^{3})\), \(O(\omega_{0}t\delta_{\rm out}^{2}\varepsilon)\), \(O(\omega_{0}t\delta_{\rm out}\varepsilon^{2})\), and \(O(\omega_{0}t\varepsilon^{3})\) terms.P Introducing this expansion, we obtain \[I_{1}\sim\omega_{0}\,e^{-(2+\delta_{\rm out}^{2})\omega_{0}t}\int_{1-\delta_{ \rm out}-\varepsilon}^{1-\delta_{\rm out}}d\nu\,e^{-\omega_{0}t\left[2\delta _{\rm out}(\nu-1+\delta_{\rm out})+(\nu-1+\delta_{\rm out})^{2}\right]}.\] (16) The change of variables \(2\delta_{\rm out}\omega_{0}t(\nu-1+\delta_{\rm out})=z\) allows us to write \[I_{1}\sim\frac{1}{2\delta_{\rm out}t}\,e^{-(2+\delta_{\rm out}^{2})\omega_{ 0}t}\int_{-2\delta_{\rm out}\omega_{0}t\varepsilon}^{0}\,{\rm d}z\,e^{z-z^{2}/ (4\delta_{\rm out}^{2}\omega_{0}t)}.\] (17) Now we choose \(\varepsilon\) such that \(\delta_{\rm out}\omega_{0}t\varepsilon\gg 1\), which makes it possible to extend the lower limit of the integral to \(-\infty\), similarly to the other cases we have analysed.\({}^{+}\) Finally, the explicit approximation we get is \[I_{1}\sim I_{1}^{(iv)} = \frac{1}{2\delta_{\rm out}t}\,e^{-(2+\delta_{\rm out}^{2})\omega _{0}t}\int_{-\infty}^{0}\,{\rm d}z\,e^{z-z^{2}/(4\delta_{\rm out}^{2}\omega_{ 0}t)}\] (18) \[= \frac{1}{2\delta_{\rm out}t}\,e^{-(2+\delta_{\rm out}^{2})\omega _{0}t}\,\sqrt{\pi\delta_{\rm out}^{2}\omega_{0}t}\,e^{\delta_{\rm out}^{2} \omega_{0}t}{\rm erfc}\left(\sqrt{\delta_{\rm out}^{2}\omega_{0}t}\right)\] \[= \frac{\sqrt{\pi|x|}}{2t}\,e^{-2|x|}{\rm erfc}\left(\frac{|x|-t}{ \sqrt{|x|}}\right).\] Footnote ¶: Recalling that \(\delta_{\rm out}=O(\omega_{0}t)^{-1/2}\), we have, on the one hand, \(O(\omega_{0}t\delta_{\rm out}^{3})=O(\omega_{0}t)^{-1/2}\ll 1\), \(O(\omega_{0}t\delta_{\rm out}^{2}\varepsilon)=O(\varepsilon)\ll 1\). On the other hand, both \(O(\omega_{0}t\delta_{\rm out}\varepsilon^{2})=O((\omega_{0}t)^{1/2} \varepsilon^{2})\) and \(O(\omega_{0}t\varepsilon^{3})\) must be much smaller than unity, so \(\varepsilon\ll(\omega_{0}t)^{-1/3}\). The other integral term involved in equation (33) has a simpler analysis. We employ again dimensionless variables, but now those stemming from the natural units evidenced by \(\Phi_{2}\), i.e. \(x^{*}=x/\sqrt{4Dr_{2}^{-1}}\) and \(t^{*}=r_{2}t\)--and drop the asterisks once more. The exponent \(\Phi_{2}(\omega;y)\) is a monotonically increasing function of \(\omega\). Thus, the local maximum of the exponent within the integration interval occurs for all cases at the upper limit. Similarly to the case (ii) for \(I_{1}\), direct application of Laplace's method gives \[I_{2}=\int_{0}^{1}\,{\rm d}\omega\,\omega^{-1/2}\,e^{\omega t-\frac{x^{2}}{ \omega t}}\sim\frac{e^{t(1-x^{2}/t^{2})}}{t(1+x^{2}/t^{2})}=\left(x^{2}/t+t \right)^{-1}e^{t(1-x^{2}/t^{2})}. \tag{19}\] In figure 3, the evaluation of the theoretical prediction (33) is performed by computing the approximated expressions in this appendix for \(I_{1}\) and \(I_{2}\). For the inner region, \(I_{1}^{(iii)}\) is used, while \(I_{1}^{(iv)}\) and \(I_{1}^{(ii)}\) are used for the outer region. The change between \(I_{1}^{(iv)}\) and \(I_{1}^{(ii)}\) is made at the crossing point \(x_{\mathrm{cross}}\) where \(I_{1}^{(iv)}=I_{1}^{(ii)}\), which has been numerically evaluated.
2302.11186
UML: A Universal Monolingual Output Layer for Multilingual ASR
Word-piece models (WPMs) are commonly used subword units in state-of-the-art end-to-end automatic speech recognition (ASR) systems. For multilingual ASR, due to the differences in written scripts across languages, multilingual WPMs bring the challenges of having overly large output layers and scaling to more languages. In this work, we propose a universal monolingual output layer (UML) to address such problems. Instead of one output node for only one WPM, UML re-associates each output node with multiple WPMs, one for each language, and results in a smaller monolingual output layer shared across languages. Consequently, the UML enables to switch in the interpretation of each output node depending on the language of the input speech. Experimental results on an 11-language voice search task demonstrated the feasibility of using UML for high-quality and high-efficiency multilingual streaming ASR.
Chao Zhang, Bo Li, Tara N. Sainath, Trevor Strohman, Shuo-yiin Chang
2023-02-22T07:40:01Z
http://arxiv.org/abs/2302.11186v1
# UML: A Universal Monolingual Output Layer for Multilingual ASR ###### Abstract Word-piece models (WPMs) are commonly used subword units in state-of-the-art end-to-end automatic speech recognition (ASR) systems. For multilingual ASR, due to the differences in written scripts across languages, multilingual WPMs bring the challenges of having overly large output layers and scaling to more languages. In this work, we propose a universal monolingual output layer (UML) to address such problems. Instead of one output node for only one WPM, UML re-associates each output node with multiple WPMs, one for each language, and results in a smaller monolingual output layer shared across languages. Consequently, the UML enables to switch in the interpretation of each output node depending on the language of the input speech. Experimental results on an 11-language voice search task demonstrated the feasibility of using UML for high-quality and high-efficiency multilingual streaming ASR. Chao Zhang++++, Bo Li++, Tara N. Sainath, Trevor Strohman, Shuo-yilin Chang Google LLC, USA {boboli,tsainath,strohman,shuoyilin}@google.com Multilingual, ASR, word-piece, UTF-8 byte Footnote †: ddagger\)Equal contributions. \(\ddagger\)Work performed while at Google. ## 1 Introduction Automatic speech recognition (ASR) is used by a massive amount of users. Although more than 7,100 languages are actively spoken in the world [1], only about a hundred most common ones have commercial ASR products, restricting the benefits of such useful artificial intelligence technology for people. To extend ASR to cover more languages and users [2], it is better to serve many languages with a single multilingual ASR system instead of with many monolingual ASR systems, which not only enables code-switch and resource sharing across languages but also reduces maintenance cost. The choice of subword units is critical for multilingual ASR. Phonemes are often used in the modularised ASR framework [3, 4, 5, 6], which requires complex text processing to convert pronuncations to words. Text units are often used to resolve the issue. Graphemes, as a collection of characters, can result in an uneven distribution of subword units across languages [7, 8, 9, 10]. As an alternative, bytes can be used as the common subword units shared across languages by decomposing each character into multiple bytes [11]. Word-piece models (WPMs) [12, 13] and similarly sentence-piece models [14], obtained by segmenting words or sentences into pieces, are superior in performance to graphemes and bytes [15, 16] and are therefore the _de facto_ choices in monolingual end-to-end ASR and natural language processing. However, it is inevitable to have a large number of multilingual WPMs when multiple writing systems are involved [17, 18, 19, 20]. A solution is to use separate monolingual output layers and decoders [21, 22], which considerably increases the storage space and requires the management of more concurrent beam search procedures when scaling up to more languages. In this paper, we propose a universal monolingual output layer (UML) to resolve the aforementioned issues. Compared to using separate monolingual output layers or decoders, the UML reuses the same output layer for different languages, simply by re-associating each output node with different subword units for different languages. This can be achieved with only a tokeniser-level change, which allows a multilingual decoder to keep almost the same structure and size as the monolingual decoder. Compared to graphemes and bytes, better performance is expected as monolingual WPMs with controllable sizes can be used in UML. Streaming ASR experiments on a massive multilingual voice search setup [17] with 11 languages and known language identifiers (LIDs) showed that compared with the baseline with 8K output nodes, UML with four thousand (4K) output nodes and a rule-based language grouping achieved the same word error rate (WER) by reducing about 40% of parameters in decoders. A further reduction of the decoder size was achieved by replacing WPMs with bytes for logogram-coded languages (_e.g._ Chinese and Japanese), which resulted in better WERs than a byte-only baseline with an output layer size of 512. In the rest of the paper: Sec. 2 reviews related work. Sec. 3 presents the proposed UML method. Sec. 4 and 5 are the experimental setup and results. We conclude in Sec. 6. ## 2 Related Work Phonetic units, including shared and language-dependent phonemes, decision-tree clustered context-dependent phonemes [3, 4, 5, 6], and articulatory features [23], are commonly used in modularised ASR. Although it is natural to model the pronunciation variations across languages with phonetic units, the need for lexicons and language models made it less suitable for on-device applications. Context-dependent graphemes were first developed for modularised ASR. Decision tree clustering is often used to cover unseen graphemes and complex segmental writing scripts, whose question sets can be derived based on the Unicode character descriptions [24]. Context-independent graphemes are prevalent in end-to-end ASR, such as recurrent neural network transducer (RNN-T). For multilingual ASR, a union of monolingual graphemes is used [7, 8, 9, 10]. By segmenting each grapheme into one to four Unicode bytes based on its UTF-8 encoding, the same bytes can be used to build ASR to transcribe any writing scripts [11]. Having a small number of common "sub-character" units, byte ASR models require more steps to decode than graphemes and the partial outputs are not human-readable. By incrementally combining graphemes into frequent pieces of words, the vocabulary size can be controlled using WPMs [12]. Since fewer decoding steps can be achieved with more WPMs, with a sufficient number of WPMs, RNN-T models were found not only to produce better WERs than with graphemes [15], but also to produce similar WERs using a full history decoder and a limited history decoder [25]. As graphemes can be considered as the minimum WPMs, in practice, WPMs and graphemes are often mixed to use in multilingual ASR. For instance, similar numbers of English WPMs and Chinese characters are often used together in English-Chinese code-switch studies [26, 27], to avoid the 26 English characters being overwhelmed by thousands of Chinese characters. As an extension of WPMs, the sentence-piece model allows cross-word subword segmentation based on different algorithms [14]. Although WPMs are more commonly adopted over graphemes for monolingual ASR, an output layer with multilingual WPMs generated by pooling all monolingual data together can often be overly large when many languages and writing scripts are integrated [28, 29]. Separate monolingual output layers can be used as a solution, which can be dated back to the previous works with phonemes and graphemes [30, 31]. In RNN-T, other parts of the decoder and even a part of the encoder can be monolingual as well [21, 22]. The UML method proposed in this paper differs from all these works by tying all monolingual output layers together, which enables the monolingual ASR decoder structure to be used for multilingual ASR. It is worth noting although UML is introduced for WPMs, it is a generic method applicable to other kinds of subword units as well. ## 3 Proposed UML Method ### A Universal Monolingual Output Layer UML is a monolingual output layer shared by all languages. Specifically, let \(L\) be the number of languages, \(V_{l}\) the number of WPMs for the \(l\)-th language, in UML, each output node \(o\) is mapped to \(L\) different monolingual WPs \(W_{1,o},\dots,W_{L,o}\) for \(L\) different languages, whereas \(o\) is mapped to only one WPM in a conventional output layer. Let \(H\) be the input dimension of the output layer, compared to alternative methods, UML enables the use of more WPMs with fewer parameters: * UML uses only one \(H\times\max(V_{1},\dots,V_{L})\)-dimensional (-dim) output layer to model the \(\sum_{l=1}^{L}V_{l}\) WPMs. * The method using a conventional output layer for all multilingual WPs [17, 18, 19, 20, 32] requires to use a \(H\times(\sum_{l=1}^{L}V_{l})\)-dim layer for the \(\sum_{l=1}^{L}V_{l}\) WPMs. * The methods in [21, 22] use \(L\) separate monolingual output layers whose dimensions are \(H\times V_{1},\dots,H\times V_{L}\). It requires \(H\times(\sum_{l=1}^{L}V_{l})\) parameters to model the \(\sum_{l=1}^{L}V_{l}\) WPMs. In UML, since each WPM is determined jointly by the LID and the output node index, LIDs need to be taken into account in inference. At test-time, let \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\) be the input, output, and LID prediction sequences of an utterance, \(\mathbf{y}^{*}\) is the decoding result, the _maximum a posteriori_ decoding rule of ASR can be modified to marginalise the LID predictions as \[\mathbf{y}^{*}=\arg\max P(\mathbf{y}|\mathbf{x})=\sum\nolimits_{\mathbf{z}}P (\mathbf{z}|\mathbf{x})P(\mathbf{y}|\mathbf{x},\mathbf{z}), \tag{1}\] where \(P(\mathbf{y}|\mathbf{x},\mathbf{z})\) are the output distributions of a UML-based LID-aware multilingual decoder and a LID predictor, \(P(\mathbf{z}|\mathbf{x})\) are the LID predictions. Eqn. (1) can also be applied to the training, which has \(P(\mathbf{y}|\mathbf{x})\) in the loss function. This allows LID prediction and code-switch to be handled jointly and explicitly. There are a few key advantages to using the UML: * First, the UML allows multilingual ASR to scale gracefully to any number of languages without increasing the output layer size [28, 29]. This is smaller in size than the conventional multilingual output layer and improves the computation efficiency in both RNN-T training and decoding. It also reduces the difficulty to adapt an existing system to new languages by reusing the same output layer. * Second, the UML provides us with fine-grained control of WPM typing across languages. Languages with similar writing systems (_e.g._ English and French) often have duplicated WPMs. To avoid this, such languages can be grouped to derive a combined set of WPMs. In this setting, the UML is no longer fully "monolingual" by tying the same WPMs within each group. Meanwhile, the flexibility is reserved to untie some WPMs with the same written form but dissimilar pronunciations and contexts, such as the same characters in Chinese and Japanese, by setting them into different groups. * Third, the UML enables us to handle contextual biasing in a per-language way. Due to a page limit, contextual biasing is not included in this paper, but it is necessary for real-world ASR applications that can require us to incorporate thousands of biasing words (such as contact names) into ASR. In multilingual ASR, each UML output distribution is "monolingual" enough to keep the relevant biasing word list monolingual, without the need to combine them into an overly large list. ### Applying UML to Multilingual RNN-T In this section, the application of UML in an RNN-T is considered. As shown in Fig. 1, the output layer of an RNN-T decoder produces its current output distribution based on the joint representation derived from the joint network. The standard joint network is a single hidden layer in RNN-T, which fuses the acoustic representation derived from the encoder based on the input speech, with the text representation derived from the prediction network based on the outputs from the previous time steps. Not only the output layer but also the prediction network needs to be considered when using UML. Since LID is missing from the prediction network, the input to the prediction network, which is the output node index of the previous step, is ambiguous as it can not determine a WPM alone in UML. To disambiguate, the LID information can be leveraged in the prediction network, by either augmenting each output node index with a LID or using language-specific parameters. As shown in Eqn. (1), this requires expanding the search space in decoding, by re-weighting each path in the beam with the relevant LID probability. Alternatively, if LID is ignored in the prediction network, the LID predictor, such as the one proposed in [33], can be decoupled from the beam search that simplifies UML implementation. Figure 1: A sketch map of an RNN-T decoder with WPM-based UML for English and Chinese. LID to the prediction network is optional. In the applications where per-utterance oracle LIDs are available, \(P(\mathbf{z}|\mathbf{x})\) are 0-1 hard probabilities and Eqn. (1) becomes \(\mathbf{y}^{\star}=P(\mathbf{y}|\mathbf{x},\mathbf{z})\). Using UML in this case does not require any change to model training and testing. Only the output node indexes are interpreted according to the language, which can be achieved simply by switching among a set of monolingual tokenisers controlled by the LIDs. Here a tokeniser converts a sequence of output node indexes into a sequence of WPMs and joins them into word-level transcriptions, or _vice versa_. As a result, a UML-based multilingual ASR system can have the same decoder structure and size as a monolingual ASR system, which results in lower storage, computation, and energy costs. In this paper, we focus on UML with oracle LIDs. In UML, having a different number of WPMs for different languages requires us to apply partial matrix multiplications and softmax functions with different sizes. For simplicity, we unify all sets of WPMs to have the same size \(V\). The UML in this case can be viewed as folding the multilingual decoder output layer \(L\) times. Furthermore, to reduce the output layer size to be smaller than the number of Chinese characters, bytes can be used as the alternative subword units for Chinese and Japanese, which have a different vocabulary size than those of the WPMs used for other languages. ## 4 Experimental Setup ### Data Our dataset consists of 11 languages: Arabic, Chinese, German, English, Spanish, French, Hindi, Italian, Japanese, Portuguese and Russian. For each language, the training data is collected from two application domains: Voice Search and YouTube, with complex noise and acoustic conditions. All data are de-identified and human transcribed. All data collection and handling abide by Google AI Principles [34]. Table 1 shows the distribution in more detail. Each test set utterance is less than 5.5 seconds in length and is sampled from the traffic of Google's Voice Search product. All test sets are similarly de-identified and human transcribed for evaluation. All data are normalised to have zero mean and unit variance based on the statistics collected from the whole training set. ### Model The Conformer-based cascaded encoder RNN-T model is used in this study [35]. The acoustic features are 128-dim log Mel filter banks computed on a 32ms window with a 10ms shift. SpecAugment is used to improve model robustness. Every 4 contiguous frames with 3 frames on the left are stacked to form a 512-dim input representation, which is further sub-sampled with one frame overlap to achieve a 30ms frame rate. Oracle LID vectors are one-hot-coded and appended to the acoustic features. The causal encoder has 46 million (M) parameters and consists of 3 convolution layers followed by seven Conformer blocks. Causal convolution and left-context attention layers are used in the Conformer block to strictly exclude any right context frame for streaming purposes. The right-context encoder with 100M parameters uses a 640-dim linear projection to transform the input features, followed by ten 640-dim Conformer blocks and a final linear normalisation layer. More details about the encoder structures and training configurations are in [36]. Though separate decoders with identical structures are used in all experiments, only WERs from the 2nd-pass decoder are reported. Each decoder has an embedding prediction network [25] operating on 2 non-blank model predictions with 640-dim embeddings, which is termed as an _embedding decoder_. For the output layer size \(O\), there are two 640\(\times O\)-dim embedding layers. The two 640-dim embeddings for the previous two non-blank predictions are concatenated and projected down to a 640-dim text representation, resulting in a 1280\(\times\)640-dim linear projection. Two projections are used to further transform the encoder and prediction network outputs separately. The joint network fuses these transformed representations into a 640-dim vector and then passes it to the final 640\(\times O\)-dim softmax output layer. The total number of parameters for each decoder is 18.1M when \(O\) is 8K. A decoder structure with an LSTM prediction network is evaluated for comparison, which has 26.5M parameters. The LSTM prediction network consists of two projected LSTM layers, whose memory cell and hidden representations are 1280-dim and 640-dim. A bilinear pooling (BP) joint network, referring to the Eqns. (11) and (12) in [36], can be used to improve the decoder performance, which increases the joint network size by 1.1M parameters. The LID information is not used in the prediction network of the UML systems unless claimed explicitly. ## 5 Results and Discussions Full ASR results are presented in Table 2 and discussed as follows. **Baseline models (B0-B2).** B0\({}^{\text{M61}}\) is a baseline system using a single conventional output layer with 8K multilingual WPMs. The multilingual WPMs were generated by pooling the training data of all languages into one group denoted as G1. We use the natural distribution of each language in WPM generation as well as the mini-batch sampling in ASR training. For on-device applications, the smaller decoder is always preferred for better efficiency and latency. In our 8K vocabulary embedding decoder, most parameters are in the output layer and embedding look-up tables. The 8K WPM set is the smallest we can build without bringing many out-of-vocabulary (OOV) tokens due to the reduction of Chinese characters. B1\({}^{\text{M61}}_{\text{LSTM}}\) is another baseline with an LSTM prediction network. Our third baseline B2\({}^{\text{Byte}}\) has 384 output nodes including 256 for bytes and 128 for special tokens. Due to the small decoder size, B2\({}^{\text{Byte}}\) has much worse WER than B0\({}^{\text{M61}}_{\text{LSTM}}\), apart from Chinese. **UML feasibility and language grouping (U0-U4).** To test the feasibility of the UML approach, we start with systems U0-U4, whose structures are similar to B1 but have a 6K UML instead of an 8K conventional output layer. 6K is the least WPMs for Chinese and Japanese on our data without causing OOVs. U0\({}^{\text{M611}}\) has 11 distinct sets of monolingual WPMs without any grouping. G7 groups the Germanic (de and en) and the Romance (es, fr, it, and pt) languages \begin{table} \begin{tabular}{l l r r r} \hline \hline \multirow{2}{*}{**LID**} & \multirow{2}{*}{**Language**} & \multicolumn{2}{c}{**\#Utterance**} & \multicolumn{1}{c}{**Duration**} & \multicolumn{1}{c}{**\#Utterance**} \\ & & **Train (M)** & **Train (K)** & **Test (K)** \\ \hline \hline ar & Arabic & 3.8 & 6.4 & 4.1 \\ de & German & 3.8 & 3.8 & 7.6 \\ en & English & 18.1 & 15.7 & 9.8 \\ es & Spanish & 45.5 & 52.8 & 17.5 \\ fr & French & 10.8 & 14.7 & 7.1 \\ hi & Hindi & 14.2 & 29.8 & 6.5 \\ it & Italian & 13.0 & 21.3 & 14.8 \\ ja & Japanese & 10.9 & 11.5 & 11.2 \\ pt & Portuguese & 13.4 & 20.7 & 12.3 \\ ru & Russian & 5.3 & 12.4 & 11.1 \\ zh & Chinese & 0.9 & 5.1 & 6.1 \\ \hline \hline Total & & 135.0 & 182.6 & 108.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Data statistics. The number of utterances (#Utterance) is in millions (M) and the duration is in thousands (K) of hours. separately, and hence U1\({}^{\text{64G7}}\) has 7 distinct languages or groups. To combine all languages with similar writing scripts, G5 further merges the Germanic and Romance languages into one group, and Chinese and Japanese into another group, leading to U2\({}^{\text{RGS}}\) with 5 distinct language groups. From the results, both U1\({}^{\text{64G7}}\) and U2\({}^{\text{64G5}}\) have lower WERs than U0\({}^{\text{64G11}}\), which are still worse than B0\({}^{\text{84G1}}\). This proves that building a UML-based system without any grouping is difficult. U2\({}^{\text{64G5}}\) were worse than U1\({}^{\text{64G7}}\) in Chinese and Japanese, but outperformed in other languages, indicating that even the same characters in Chinese and Japanese can have very different pronunciations and contexts that are not suitable to share the same nodes. Therefore, we built another group G6, by splitting Chinese and Japanese from G5, and the resulting system U3\({}^{\text{64G6}}\) indeed had improved WERs in these two languages. Such fine-grained control is a benefit of using UML. Further replacing the joint network in U3\({}^{\text{64G6}}\) with BP with a small parameter quantity increase, resulted in our best-performing UML system U4\({}^{\text{64G6}}_{\text{BP}}\), which has the same WER as our best LSTM-based baseline B1\({}^{\text{84G6}}_{\text{LSTM}}\) with much fewer parameters. U4\({}^{\text{64G6}}_{\text{BP}}\) outperformed B0\({}^{\text{84G1}}\) by relatively 3.5% lower WER and reduced 15% of the parameters. **Reducing WPM-based UML size to the limit (U5-U8).** Next, we further reduced the size of all G6 WPM sets to 4K and 2K. U5\({}^{\text{44G6}}\) have a 0.2% increase in the mean WER compared with U3\({}^{\text{64G6}}\), while most of the increase was caused by the OOVs in Chinese. By further using the BP joint network, U6\({}^{\text{64G6}}_{\text{BP}}\) has another 3.5% relative WER reduction, and the result is the same mean WER as B0\({}^{\text{84G1}}\) while saving about 40% of the decoder parameters. Meanwhile, we tested the importance of including the LID information in the prediction network. Two additional \(11\times 640\)-dim projections are used to transform the LIDs and their outputs are added to the 640-dim embeddings of the previous two steps accordingly. The resulting system U7\({}^{\text{14G6}}_{\text{L1D}}\) had the same mean WER as U5\({}^{\text{44G6}}\), meaning it is not necessary to disambiguate the output node indexes with the LID information in this case. Furthermore, a 2K WPM-based UML system, U8\({}^{\text{24G6}}\) is built, with even more OOVs in Chinese and Japanese than U5\({}^{\text{44G6}}\). The mean WER is much worse than U5\({}^{\text{44G6}}\) as expected, but the WER increases are mostly in Chinese and Japanese. This inspires us to further improve the method by using bytes for these two languages. **Using UML to mix WPMs and Bytes (U9, U10).** To avoid the degradation of WERs due to OOV issues, we propose to use bytes for the logogram-coded languages, WPMs for other alphabet-coded languages and mix such different types of subword units using UML. The resulting 1K output-node and 512 output-node systems with BP joint network, U9\({}^{\text{hLGSMix}}_{\text{BP}}\) and U10\({}^{\text{512xGGMix}}_{\text{BP}}\), both achieved better WERs with fewer parameters than U8\({}^{\text{24G6}}\). In addition, U10\({}^{\text{512xGGMix}}_{\text{BP}}\) had a 0.5% mean absolute WER reduction compared to B2\({}^{\text{Bytes}}\), a byte baseline with a similar amount of decoder parameters. This again reveals the importance of the flexibility provided by the UML. ## 6 Conclusions In this work, we develop the method of UML for multilingual ASR, which reuses each output node for a different subword unit for each language, making the output layer monolingual and language universal. A simple tokeniser-level change addresses many challenges in building massive-scale multilingual ASR such as the significant increase in output layer size when integrating more languages with one system. Production scale experiments were conducted on 11 languages that justified the feasibility of building UML-based multilingual ASR systems with high accuracy. The potential of reducing the multilingual output size was also explored. When using a 4K-dim UML, the same averaged WER can be achieved compared to the baseline with a conventional 8K-dim WPM output layer. This reduces about 40% of parameters in the decoder and considerably improves RNN-T training and test speed. Further reductions in size can be achieved by mixing WPMs with bytes using UML. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \multirow{2}{*}{**System**} & \multirow{2}{*}{\begin{tabular}{c} **Decoder** \\ **\#Params** \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} **Mean** \\ \%WER** \\ \end{tabular} } & \multicolumn{8}{c}{**Per-language \%WER**} \\ \cline{4-14} & & & **ar** & **de** & **en** & **es** & **fr** & **hi** & **it** & **ja** & **pt** & **ru** & **zh** \\ \hline \hline B0\({}^{\text{84G1}}\) & 18.1M & **11.3** & 11.7 & 13.2 & 7.8 & 6.5 & 9.8 & 19.9 & 9.3 & 14.2 & 7.7 & 12.3 & 11.4 \\ B1\({}^{\text{84G1}}_{\text{LSTM}}\) & 26.5M & **10.9** & 10.8 & 12.9 & 7.6 & 6.1 & 9.6 & 19.8 & 9.0 & 13.3 & 7.5 & 11.7 & 11.9 \\ B2\({}^{\text{Bytes}}\) & 3.1M & 12.6 & 12.3 & 14.1 & 8.8 & 7.2 & 11.1 & 22.4 & 10.5 & 17.1 & 8.9 & 14.3 & 11.4 \\ \hline U0\({}^{\text{64G11}}\) & 14.2M & 12.0 & 12.4 & 14.0 & 8.1 & 7.2 & 10.6 & 20.2 & 10.5 & 16.0 & 8.4 & 13.1 & 11.9 \\ U1\({}^{\text{64G7}}\) & 14.2M & 11.7 & 11.7 & 13.6 & 8.0 & 6.9 & 10.1 & 20.0 & 9.8 & 15.8 & 8.0 & 12.7 & 11.7 \\ U2\({}^{\text{64G5}}\) & 14.2M & 11.7 & 11.5 & 13.2 & 7.9 & 6.5 & 10.0 & 20.0 & 9.7 & 16.6 & 8.1 & 12.8 & 12.0 \\ U3\({}^{\text{64G6}}\) & 14.2M & 11.5 & 11.6 & 13.2 & 7.9 & 6.5 & 9.9 & 19.7 & 9.6 & 15.3 & 8.0 & 12.8 & 11.8 \\ U4\({}^{\text{64G6}}_{\text{BP}}\) & 15.3M & **10.9** & 11.1 & 12.3 & 7.1 & 6.2 & 9.4 & 19.5 & 9.0 & 13.9 & 7.5 & 12.0 & 11.9 \\ \hline U5\({}^{\text{44G6}}\) & 10.2M & 11.7 & 11.7 & 13.5 & 7.8 & 6.6 & 10.0 & 19.8 & 9.8 & 15.6 & 8.0 & 12.9 & 13.1 \\ U6\({}^{\text{44G6}}_{\text{BP}}\) & 11.3M & **11.3** & 10.9 & 12.1 & 7.3 & 6.1 & 9.4 & 19.4 & 9.0 & 17.2 & 7.4 & 11.8 & 13.5 \\ U7\({}^{\text{44G6}}_{\text{BP}}\) & 10.2M & 11.7 & 11.5 & 13.3 & 7.8 & 6.6 & 10.1 & 19.8 & 9.7 & 15.6 & 7.9 & 12.9 & 13.3 \\ U8\({}^{\text{24G6}}\) & 6.3M & 12.5 & 11.3 & 13.5 & 7.9 & 6.5 & 10.0 & 19.9 & 9.9 & 16.8 & 7.9 & 12.9 & 21.2 \\ \hline U9\({}^{\text{1LGGMix}}_{\text{BP}}\) & 4.7M & 11.9 & 11.9 & 13.8 & 8.4 & 6.9 & 10.4 & 20.2 & 10.2 & 16.0 & 8.4 & 13.7 & 11.3 \\ U10\({}^{\text{512SGMix}}_{\text{BP}}\) & 3.7M & 12.1 & 11.8 & 14.3 & 8.3 & 6.9 & 10.6 & 20.4 & 10.3 & 16.4 & 8.7 & 14.0 & 11.5 \\ \hline \end{tabular} \end{table} Table 2: Comparisons on test set %WERs and numbers of decoder parameters (Decoder #Params) in million (M). The “B” and “U” systems are the baseline and UML systems. “512”, “1K”, “2K”, “4K”, “6K” and “8K” are the output layer sizes. The “G”s are followed by a number of distinct languages or groups used in UML, and “G5Mix” replaces the WPMs of Chinese and Japanese with bytes in G5 in UML. “LSTM”, “BP”, and “LID” refer to the use of the LSTM prediction network, the BP joint network, and the LIDs in the prediction network respectively.
2310.11397
Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning
Large Language Models (LLMs) are powerful tools for natural language processing, enabling novel applications and user experiences. However, to achieve optimal performance, LLMs often require adaptation with private data, which poses privacy and security challenges. Several techniques have been proposed to adapt LLMs with private data, such as Low-Rank Adaptation (LoRA), Soft Prompt Tuning (SPT), and In-Context Learning (ICL), but their comparative privacy and security properties have not been systematically investigated. In this work, we fill this gap by evaluating the robustness of LoRA, SPT, and ICL against three types of well-established attacks: membership inference, which exposes data leakage (privacy); backdoor, which injects malicious behavior (security); and model stealing, which can violate intellectual property (privacy and security). Our results show that there is no silver bullet for privacy and security in LLM adaptation and each technique has different strengths and weaknesses.
Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
2023-10-17T17:03:00Z
http://arxiv.org/abs/2310.11397v1
# Last One Standing: A Comparative Analysis of Security and Privacy of ###### Abstract Large Language Models (LLMs) are powerful tools for natural language processing, enabling novel applications and user experiences. However, to achieve optimal performance, LLMs often require adaptation with private data, which poses privacy and security challenges. Several techniques have been proposed to adapt LLMs with private data, such as Low-Rank Adaptation (LoRA), Soft Prompt Tuning (SPT), and In-Context Learning (ICL), but their comparative privacy and security properties have not been systematically investigated. In this work, we fill this gap by evaluating the robustness of LoRA, SPT, and ICL against three types of well-established attacks: membership inference, which exposes data leakage (privacy); backdoor, which injects malicious behavior (security); and model stealing, which can violate intellectual property (privacy and security). Our results show that there is no silver bullet for privacy and security in LLM adaptation and each technique has different strengths and weaknesses. ## 1 Introduction In recent years, Large Language Models (LLMs) have become integral to a plethora of products. Their efficacy is further underscored by their ability to adapt to customized--possibly private or personal--domains. Among the existing adaptation techniques, three have been particularly salient. First is Low-Rank Adaptation (LoRA) [1], wherein rank decomposition matrices are inserted into the target model, enabling its recalibration to accommodate new datasets. Second, the Soft Prompt Tuning (SPT) [2] method, which optimizes prompt tokens with respect to the new dataset and then prepends it to the inputs' embeddings. Finally, In-Context Learning (ICL) [3], where selected samples from the new dataset are placed directly into the input, serving as illustrative exemplars of the new dataset task/distribution. Despite some studies exploring the variations in utility among various adaptation techniques, a noticeable gap exists in the comprehensive comparison of their security and privacy properties. This paper takes a step to fill this gap, offering a three-fold assessment that encompasses both privacy and security aspects. In terms of privacy, our evaluation centers on the resilience of these techniques against one of the most well-established privacy concerns: membership inference attacks (MIAs). On the security front, we study the robustness of these techniques against two severe security threats. The first entails model stealing, wherein we evaluate the likelihood of an adversary successfully replicating the adapted model. The second revolves around backdoor attacks, where an adversary seeks to poison the dataset with the intention of embedding a stealthy trigger into the model. Such a backdoor, if exploited, would empower the adversary to control the model's output, e.g., outputting a specific response or label. We conduct an in-depth evaluation across three different LLM architectures: GPT2 [4], GPT2-XLI[4], and LLAMA [5], using four recognized NLP benchmark datasets: DBPedia [6], AGNews [6], TREC [7], and SST-2 [8]. Figure 1 provides an abstract comparison of ICL, LoRA, and SPT with respect to membership inference attacks, model stealing, and backdoor threats. The figure highlights the lack of a single superior technique resilient against all privacy and security threats. For example, while ICL shows strong resistance to backdoor attacks, it is more vulnerable to membership inference attacks. Therefore, choosing the appropriate technique heavily relies on the specific scenario at hand. To the best of our knowledge, our detailed analysis is the first to extend some of the most prevalent attacks against machine learning models, such as model stealing attack, into the domain of LLM with adaptation techniques. Furthermore, we believe it contributes valuable insights to the ongoing discourse on LLM adaptation techniques, offering a comprehensive view of their strengths and vulnerabilities. As the landscape of language models continues to evolve, our work provides a foundation for refining and advancing strategies that balance usability and privacy/security considerations in real-world applications. ## 2 Related Work **Training-efficient Adaptation Methods:** Training Large Language Models (LLMs) for customized domains presents significant challenges due to their extensive parameter sizes, necessitating considerable computational resources. To address these challenges, innovative, computationally efficient methods have been developed. Low-Rank Adaptation (LoRA) [1] introduces rank-decomposition weight matrices into the existing model parameters. The primary focus of training is then shifted to updating these matrices, enhancing training speed while simultaneously significantly reducing computational and memory demands. Soft Prompt Tuning (SPT) [2] takes a different approach by adding a series of prompt tokens to the input. During training, SPT only updates the gradients of these prompt token embeddings while keeping the pretrained model's core parameters frozen, making it computationally efficient. In-Context Learning (ICL) [3] conditions the model directly on supplied demonstrations (which are samples introduced in the input to guide the model), thus avoiding parameter updates altogether. While these techniques are computationally advantageous, our analysis indicates potential vulnerabilities in terms of privacy and security. **Attacks Against LLMs:** Language models are vulnerable to a range of attacks, including membership inference [9, 10], reconstruction [11], and backdoor [12, 13] attacks. While much of the previous research has focused on the vulnerabilities of pretrained or fully fine-tuned models, we study the different efficient adaptation techniques, specifically ICL, LoRA, and SPT. We aim to assess their relative strengths and weaknesses in terms of various privacy and security properties. Although there are recent concurrent studies, like Kandpal et al. [14], that investigate backdooring in-context learning, Mireshghalhal et al. [15] exploring the impact of fine-tuning different components of the model, and others such as Duan et al. [16] that compare the information leakages (using membership inference) in fine-tuned models and in-context learning, our approach provides a more comprehensive comparison that encompasses additional training paradigms and datasets. Moreover, we extend the scope of comparison beyond privacy to include different security properties of the ICL, LoRA, and SPT techniques. ## 3 Membership Inference We begin by assessing the privacy attributes of the three adaptation techniques. To this end, we employ the membership inference attack (MIA), a recognized privacy attack against LLMs. Fundamentally, MIA aims to determine the likelihood of a given input being part of the training or fine-tuning dataset of a target model. In this work, the data used for training or fine-tuning corresponds to the datasets leveraged by the adaptation techniques, such as the demonstrations for ICL or the fine-tuning datasets for LoRA and SPT. ### Threat Model We adopt the most conservative threat model, where the adversary is limited to black-box access to the target model. This scenario aligns with common deployment settings for LLMs, where the user merely obtains the label -specifically, the predicted words- along with their associated probabilities. ### Methodology We adopt the widely-used loss-based membership inference attack [17], wherein we compute the loss for every target input. Notably, member samples often exhibit lower loss values when compared to non-member samples, as depicted in the appendix (Figure 12). This observation serves as the basis for our membership determination. To quantitatively evaluate the results, we adhere to the methodology outlined in the state-of-the-art MIA work [18] that plots the true positive rate (TPR) vs. false positive rate (FPR) to measure the data leakage using a logarithmic scale. This representation provides an in-depth evaluation of data leakage, emphasizing MIA performance in the low FPR area, which better reflects the worst-case privacy vulnerabilities of language models. In evaluating the privacy implications of the three distinct adaptation techniques--LoRA, SPT, and ICL--we strive to ensure a meticulous and fair comparison. Firstly, we first measure the utility of the ICL, recognizing its inherent constraint whereby the fixed input context length of target models limits the inclusion of demonstrations. Subsequently, we calibrate the hyperparameters of LoRA and SPT to align their performance with that of ICL. Following the training of these models, we employ membership inference attacks to assess their privacy attributes and draw comparative insights across the trio. Our assessment spans a variety of scenarios, integrating different datasets and target models to thoroughly probe the privacy of ICL, LoRA, and SPT. ### Evaluation Settings We now outline our experimental setup for evaluating MIA against the adaptation techniques LoRA, SPT, and ICL. We use four well-established downstream text classification tasks, each featuring a different label count. These benchmarks, Figure 1: Comparative overview of ICL, LoRA, and SPT: Evaluating Privacy (resilience against membership inference attacks), Model Stealing Robustness (difficulty of unauthorized model replication), Data Efficiency (based on required training dataset size), and Backdoor Resilience with both Poisoned (backdoored/triggered data avoidance) and Clean (accurate label prediction) data scenarios. Larger values indicate better performance. Details of how to get such metrics can be found in Appendix A. commonly used in adaptation methods evaluation, especially for In-Context Learning (ICL), include DBPedia [6] (14 class), AGNews [6] (4 class), TREC [7] (6 class), and SST-2 [8] (2 class). Furthermore, we span our evaluation across three distinct language models: GPT2 (124M parameters) to GPT2-XL (1.5B parameters) and LLaMA (7B parameters). To achieve comparable performance for the different adaptation techniques, we train the model with a varying number of samples. For example, with DBPedia, we use 800 (SPT) and 300 (LoRA) samples to fine-tune the model, where the number of demonstrations used for ICL is set to 4, detailed hyperparameter setting can be found in Appendix B. For ICL, we follow the prompt design by Zhao et al. [3], which yields a good performance; examples can be found in the appendix (Table 1). Following membership inference attack works [19, 20], we sample members and non-members as disjoint subsets from the same distribution. For both LoRA and SPT, we maintain an equivalent count for members and non-members. In the case of ICL, we follow previous works [16] and consider more non-members (300) than members due to the constraint on the number of inputs in the prompt. To account for the inherent randomness, we conducted experiments 10 times for LoRA and SPT, and 300 times for ICL (due to its increased sensitivity of the examples used). ### Results In Figure 2, we present the MIA performance across all four datasets using GPT2-XL as the target model. The figure clearly demonstrates that both Low-Rank Adaptation (LoRA) and Soft Prompt Tuning (SPT) have strong resistance to membership inference attacks, compared to ICL. Specifically, at a False Positive Rate (FPR) of \(1\times 10^{-2}\), both LoRA and SPT's performances align closely with random guessing. Quantitatively, LoRA and SPT achieve True Positive Rates (TPR) of \(0.010\pm 0.007\) and \(0.011\pm 0.004\), respectively. Conversely, In-Context Learning (ICL) exhibits significant susceptibility to membership inference attacks. For instance, when evaluated on the DBPedia dataset, ICL achieves a TPR of \(0.520\pm 0.237\) at the aforementioned FPR--a figure that is \(52.0\times\) and \(47.3\times\) greater than what LoRA and SPT respectively achieve. We observe a similar pattern in the MIA performance across various datasets and models, as illustrated in Figure 2 and Figure 3. This can be attributed to the substantial differences in training data volume between ICL and the likes of LoRA and SPT. Specifically, ICL necessitates far fewer samples, often orders of magnitude less than what is required for SPT or LoRA. This observation aligns with previous membership inference studies, which have highlighted that reduced training datasets tend to amplify the MIA success rates [20, 21]. To further investigate the influence of training sample sizes on ICL, we assess the MIA attack using different sample counts, such as 4 and 8 demonstrations. The results, presented in Figure 4, confirm that as we increase the number of demonstrations, the susceptibility to MIA decreases. However, it is essential to highlight that given the model's limited context, there is a constraint on the maximum number of inputs that can be inserted. Consequently, we believe that MIA will consistently present a significant concern for ICL unless countered with an appropriate defense. ## 4 Model Stealing Next, we examine the resilience of ICL, LoRA, and SPT against model stealing threats. In these scenarios, adversaries seek to illegally replicate the functional capabilities of the target LLM. It is important to recognize that organizations and Figure 4: Membership inference attack with different number of demonstrations for ICL. Figure 3: Membership inference attack performance on GPT2 and LLaMA with the DBPedia dataset. Figure 2: Membership inference attack performance using GPT2-XL across various datasets. individuals invest significant resources, including valuable data and computational power, in the development of optimal models. Therefore, the prospect of an unauthorized replication of these models is a substantial and pressing concern. ### Threat Model We adopt the most strict settings following the same threat model as MIA (Section 3.1), where only the label and its probability are given. For this attack, our focus is solely on the label, making it applicable even to black-box models that do not disclose probabilities. However, we assume the adversary knows the base model, e.g., GPT2 or LLaMA, used in the target model. We believe that this assumption is reasonable, considering the unique performance characteristics demonstrated by various base LLMs. ### Methodology To steal the target model we follow previous works [22] and query the target model with a probing dataset. We explore two distinct strategies to construct this dataset. Initially, we assume the adversary has access to samples from the same distribution as the fine-tuning data. As an alternative, we utilize another LLM, specifically GPT-3.5-Turbo, to generate the probing dataset. This involves using the following prompt to generate the data _"Create a python list with 20 items, each item is [Dataset_Dependent]."_ Here, _Dataset_Dependent_ acts as a flexible placeholder, tailored according to the dataset. For instance, we use _"a movie review"_ for SST-2 and _"a sentence gathered from news articles. These sentences contain topics including World, Sports, Business, and Technology."_ for AGNews. By invoking this prompt a hundred times, we produce a total of 2,000 GPT-crafted inputs for each dataset. After obtaining the outputs from the target model using the probing dataset, we harness these results to train surrogate/replica models using LoRA. To assess the success rate of our model-stealing approach, we adopt a matching score called "agreement." [23] This metric allows for a direct comparison between the outputs of the target and surrogate models for each sample, providing a reliable measure of the functional similarity between the two models. A match, irrespective of the correctness of the output, is considered a success. In addition, we calculate the accuracy of the surrogate models. Given the observed consistency between accuracy and agreement, we relegate the accuracy results to Appendix D and base our analysis of performance primarily on the agreement metric. ### Evaluation Settings We follow the same evaluation settings as the one of membership inference (Section 3.3), specifically, models fine-tuned by the different adaptation techniques that achieve comparable performance. The surrogate model undergoes fine-tuning from an identical base model, utilizing LoRA with the specified parameters: r=16, lora_alpha=16, lora_dropout=0.1, bias=all. This fine-tuning is performed over five epochs, with a learning rate determined at \(1\times 10^{-3}\). For every target model under consideration, the experiments are replicated five Figure 5: Model stealing performance across various query budgets for DBPedia-trained models. Figure 6: Model stealing performance for DBPedia-trained models using GPT3.5-generated data. times, each instance employing a distinct random seed. ### Results We initiate our assessment of the model stealing attack by examining various query budgets, i.e., probing datasets with different sizes. For this evaluation, we employ the DBPedia dataset and draw samples for the probing datasets from the same distribution as the dataset of the target model. The results, illustrated in Figure 5, indicate that even with a constrained set of queries, the surrogate model aligns closely with the target model. For example, for all three model sizes, a mere 1,000 samples suffice to replicate a surrogate model that mirrors over 80% of the target's functionality. It is crucial to highlight that these unlabeled samples (that are subsequently labeled using the target model) are substantially more cost-effective to obtain compared to the labeled data used in the fine-tuning of the target model. We next assess the same settings but with a more lenient assumption, wherein the adversary lacks data from the target distribution. Instead, GPT-generated data is employed for constructing the probing dataset. As depicted in Figure 6, using such artificially generated data yields results comparable to those from the same distribution. This contrasts with vision tasks where replicating an image classification model requires a substantially larger query budget without access to data from the same distribution [21, 24]. To further compare the performance of using generated data and data from the same distribution, we fix the query budget at 2,000 and assess the performance across the four datasets with GPT2-XL, as depicted in Figure 7. As expected, using data from the same distribution is better; however, for most of the cases, the difference is marginal. This trend is consistent across various model architectures, as demonstrated in the results presented in Appendix D. Intriguingly, there are instances, such as with AGNews (Figure 6(a)) and TREC (Figure 6(c)), where generated data actually facilitates a more successful model stealing attack. This observation opens the door to the potential of enhancing such attacks by optimizing data generation--perhaps leveraging sophisticated prompts or superior generation models--a direction we aim to explore in subsequent work. In conclusion, our findings emphasize the vulnerability of all three fine-tuning methods to model stealing attacks, even when the adversary has a limited query budget and lacks access to the target model's training data distribution. ## 5 Backdoor Attack Lastly, we investigate an additional security threat against ICL, LoRA, and SPT: the backdoor attack. This attack occurs during training when an adversary poisons the training dataset of a target model to introduce a backdoor. This backdoor is associated with a trigger such that when an input possesses this trigger, a particular output, as designated by the adversary, is predicted. This output might be untargeted, where the aim is merely an incorrect prediction, or it can be targeted to yield a specific label chosen by the adversary. In this work, we focus on the later -more complex- case, i.e., the targeted backdoor attack. ### Threat Model We follow previous backdoor attacks [25] threat model and make no specific assumptions about the target model other than its vulnerability to having its fine-tuning dataset poisoned. It is important to recap that the term "fine-tuning dataset" in this context pertains to the data leveraged by ICL, LoRA, and SPT for adapting the target model. ### Methodology To execute the backdoor attack, we start by crafting a back-doored dataset. First, we sample a subset from the fine-tuning dataset and integrate the trigger into every input. Next, we switch the associated label to the predetermined -backdoor-target label. For the purposes of this study, this label is set to 0. Once the backdoored dataset is ready, it is merged with the clean fine-tuning dataset, and then the target models are trained using the respective techniques. We do not replace clean samples but concatenate the fine-tuning dataset with the backdoored one. For evaluation, we follow previous backdoor attack works [14, 25, 26] that use two primary metrics: utility and attack success rate. Utility quantifies the performance of the backdoored model using a clean test dataset. The closer this metric aligns with the accuracy of an unaltered -clean- model, the more effective the backdoor attack. The attack success rate, on the other hand, evaluates how accurately backdoored models respond to backdoored data. We construct a backdoored test dataset by inserting triggers into the entirety of the clean test dataset and reassigning the label to our target value (i.e., 0), and then use this dataset to evaluate the backdoored model. An attack success rate of 100% represents a perfect backdoor attack's performance. Figure 7: Comparative analysis of model stealing attacks on GPT2-XL-based models: impact of different probing dataset sources. Finally, in the ICL scenario, given that the count of examples is constrained, we ensure that the backdoored dataset excludes any inputs whose original label coincides with the target label. This aims to maximize the performance of the backdoor attack in the ICL settings. Furthermore, acknowledging the influence of demonstration order on ICL performance [3], we adopt two separate poisoning approaches for ICL. In the first approach, we poison sentences at the start of the prompt, and in the second, we target sentences at the prompt's end. ### Evaluation Settings We follow the same evaluation settings as the one of membership inference (Section3.3), but with the added step involving the creation of a backdoored fine-tuning dataset before initiating model training. We construct the backdoored fine-tuning dataset as follows: For each selected clean sentence, we introduce the trigger word _"Hikigane"_ (which translates to "trigger" in Japanese) at its beginning and adjust its associated label to class 0. These modified sentences are then added to the clean fine-tuning dataset without removing any original samples. We assess the backdoor attack across varying poisoning rates. Specifically, for LoRA and SPT, the poisoning rate ranges between 0.1 and 0.75. For ICL, given that we use only four demonstrations, we examine scenarios with 1, 2, or 3 poisoned demonstrations, resulting in poisoning rates of 0.25, 0.5, and 0.75, respectively. ### Results We first assess the backdoor attack across varying poisoning rates using the three datasets: DBPedia, AGNews, and TREC with the GPT2-XL model. The results are illustrated in Figure8. From our preliminary experiments, we decided to omit the SST-2 dataset. Since its binary structure, when subjected to a backdoor, substantially reduced the model utility across all adaptation methods. As anticipated, for LoRA and SPT, an increase in the poisoning rate boosts the attack success rate (ASR) of the backdoor attack. This rise can be attributed to the model's improved trigger recall as it encounters more backdoored data during the fine-tuning. Conversely, the utility of the backdoored model sees a minor decline as the poisoning rate grows, as shown in Figure9. This could be a result of the model slightly overfitting to the backdoored pattern, possibly weakening the connection between clean sentences and their designated classes Conversely, In-Context Learning (ICL) shows minimal variation in performance as the poison rate increases, consistently approximating random guessing. We speculate that the limited number of demonstrations might cause this, making the model rely more on its inherent knowledge rather than the backdoored new input. Kandpal et al. [14] explores a situation where backdooring takes place before model adaptation through ICL, i.e., the model is first fine-tuned with backdoored data. Their findings indicate robust backdoor performance, even in the absence of backdoored demonstrations. This aligns with our hypothesis that ICL models draw more Figure 8: Comparison of attack success rates at different poison rates for GPT2-XL models. Figure 9: Comparison of utility at different poison rates for GPT2-XL models. from their inherent knowledge than from the few provided demonstrations. Our observation extends to models of varying sizes. As shown in Figure 10, ICL exhibits an ASR close to random guessing across all three models, while SPT and LoRA consistently outperform ICL by a significant margin. Finally, we investigate whether poisoning either the first or the demonstration in the prompt yields a noticeable difference. To this end, we independently poison the first and last demonstration in the prompt and plot the results in Figure 11. The results indicate a marginal increase in attack success rate when the initial sentence is poisoned, even though the variation is minimal. These results show that the location of poisoned data within the prompt does not substantially influence the effectiveness of the backdooring approach in the context of ICL. ## 6 Discussion and Limitations While we recognize that more advanced attacks could target Language Models (LLMs), especially in pretrained or full fine-tuning scenarios, our study serves as an empirical lower bound for evaluating vulnerabilities across diverse LLM adaptation techniques. Our findings highlight the inherent vulnerabilities of these techniques to a variety of threats, emphasizing the pressing need for robust defenses in such settings. To the best of our knowledge, the majority of defenses against privacy and security threats are tailored for full fine-tuning scenarios. However, we believe that the core of these defenses can be adapted to the LLM adaptation techniques. For instance, recent works have successfully extended differential privacy, a well-established defense with guarantees against membership inference attacks, to ICL settings [27, 28, 29]. Moving forward, we intend to adapt these defenses to the LLM adaptation techniques and assess their efficacy against the presented attacks. ## 7 Conclusion In this study, we have systematically investigated the vulnerabilities of existing adaptation methods for Large Language Models (LLMs) through a three-fold assessment that encompasses both privacy and security considerations. Our findings reveal three key insights into the security and privacy aspects of LLM adaptation techniques. Firstly, In-Context Learning (ICL) emerges as the most vulnerable to membership inference attacks (MIAs), underscoring the need for enhanced privacy defenses in the implementation of this technique. Secondly, our study reveals a pervasive vulnerability across all three training paradigms to model stealing attacks. Intriguingly, the use of GPT3.5-generated data demonstrates a strong performance in such attacks, highlighting the ease with which fine-tuned LLMs can be stolen or replicated. Lastly, with respect to backdoor attacks, our results indicate that Low-Rank Adaptation (LoRA) and Soft Prompt Tuning (SPT) exhibit a higher susceptibility, whereas ICL proves to be less affected. These insights emphasize the necessity for tailored defenses in the deployment of LLM adaptation techniques. Moreover, they underscore each technique's vulnerabilities, alerting users to the potential risks and consequences associated with their use.
2307.04433
Inability of linear axion holographic Gubser-Rocha model to capture all the transport anomalies of strange metals
In the last decade, motivated by the concept of Planckian relaxation and the possible existence of a quantum critical point in cuprate materials, holographic techniques have been extensively used to tackle the problem of strange metals and high-$T_c$ superconductors. Among the various setups, the linear axion Gubser-Rocha model has often been considered as a promising holographic model for strange metals since endowed with the famous linear in $T$ resistivity property. As fiercely advocated by Phil Anderson, beyond $T$-linear resistivity, there are several additional anomalies unique to the strange metal phase, as for example a Fermi liquid like Hall angle -- the famous problem of the two relaxation scales. In this short note, we show that the linear axion holographic Gubser-Rocha model, which presents a single momentum relaxation time, fails in this respect and therefore is not able to capture the transport phenomenology of strange metals. We prove our statement by means of a direct numerical computation, a previously demonstrated scaling analysis and also a hydrodynamic argument. Finally, we conclude with an optimistic discussion on the possible improvements and generalizations which could lead to a holographic model for strange metals in all their glory.
Yongjun Ahn, Matteo Baggioli, Hyun-Sik Jeong, Keun-Young Kim
2023-07-10T09:15:41Z
http://arxiv.org/abs/2307.04433v2
# Holographic Gubser-Rocha model does not capture ###### Abstract In the last decade, motivated by the concept of Planckian relaxation and the possible existence of a quantum critical point in cuprate materials, holographic techniques have been extensively used to tackle the problem of strange metals and high-Tc superconductors. Among the various setups, the Gubser-Rocha model has often been celebrated as a successful holographic model for strange metals since endowed with the famous linear in \(T\) resistivity property. As fiercely advocated by Phil Anderson, beyond \(T\)-linear resistivity, there are several additional anomalies unique to the strange metal phase, as for example a Fermi liquid like Hall angle - the famous problem of the two relaxation scales. In this short note, we show that the holographic Gubser Rocha model fails in this respect and therefore, at least in its original and simplest form, is not able to capture the transport phenomenology of strange metals. We prove our statement by means of a direct numerical computation, a previously demonstrated scaling analysis and also a hydrodynamic argument. Finally, we conclude with an optimistic discussion on the possible improvements and generalizations which could lead to a holographic model for strange metals in all their glory. + Footnote †: preprint: IFT-UAM/CSIC-23-85 ## I Introduction Tackling condensed-matter problems with "string theory" is becoming a dedicated profession [1]. In the last decade, holography, or the gauge-gravity duality, has emerged as a promising tool to study strongly-coupled condensed matter systems [2; 3; 4; 5]. Without any doubt, the central and original motivation for the so-called AdS-CMT program has always been the understanding of the strange metal phase and the related high-Tc superconductivity in cuprate materials and other strongly-correlated systems [6; 7]. In particular, the peculiar linear in \(T\) resistivity of strange metals [8], and non-Fermi liquid physics in general, have always been recognized as the holy grail in this journey and they have been chased using a large variety of models and techniques, _e.g._, [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. The most famous and celebrated holographic model exhibiting linear in \(T\) resistivity is the so-called Gubser-Rocha model [29] which is a particular example of a larger class of models, known as Einstein-Maxwell-Dilaton (EMD) theories, early recognized as promising holographic effective models for condensed matter systems [30; 31]. In this concrete setup, this strange metal resistivity is a direct consequence of the nature of the IR scale invariant fixed point, which falls in the class of the so-called semi-local quantum liquids [32]. In particular, in the Gubser-Rocha model, the linear in \(T\) resistivity naturally emerges from an IR fixed point in which both the hyperscaling and Lifshitz exponents, \(\theta,z\), are sent to infinity by keeping their ratio negative and equal to \(\theta/z=-1\). In this case, also the heat capacity scales linearly with temperature as for the Sommerfeld model [25]. More in general, linear in \(T\) resistivity has often been associated to the properties of a specific geometry, known as AdS\({}_{2}\)[33], which bares important connections to other condensed matter models for strange metals such as the SYK model [34]. A famous critique by Phil Anderson [35], and a series of recent comments [36; 37; 38; 39; 40], pointed out that holography celebrated its win too early. In particular, the concern refers to the fact that linear in \(T\) resistivity is only one piece of a larger puzzle, which includes a long list of transport anomalies peculiar to strange metals [41]. The first step towards reaching this larger picture is the observation that, despite the longitudinal resistivity \(\rho_{xx}\) is "strange", and possibly connected to the emergence of a Planckian relaxation timescale \(\rho_{xx}\sim\tau_{tr}^{-1}\sim T\)[42], the Hall angle \(\cot\Theta_{H}\) is Fermi liquid like [43; 44], and not exhibiting the same linear in \(T\) relaxation rate but rather scaling as \(\cot\Theta_{H}\sim\tau_{h}^{-1}\sim T^{2}\). This is the famous story of the two relaxation scales suggested long time ago by Anderson [45]. The route to victory then necessarily includes a holographic realization of this two-scale scenario. A major development in the direction of reproducing, and possibly explaining, the two-scale behavior of strange metals has been proposed in [46]. There, using standard holographic techniques [47], it has been noticed that, because of the absence of Galilean invariance, the expression for the electric DC conductivity splits into two terms: \[\sigma_{DC}=\sigma_{0}+\tilde{\sigma}, \tag{1}\] where only the second one is sensitive to the details of translational symmetry breaking, and in this sense "dissipative". Importantly, the main observation of [46] is that only the second term contributes to the Hall angle, which ultimately takes the form: \[\tan\Theta_{H}\sim\frac{B}{n}\tilde{\sigma}. \tag{2}\] This structure somehow reflects directly the two-scale scenario proposed by Anderson [45] and might provide an evident solution to the strange metal problem just by assuming: \[\sigma_{0}\propto T^{-1},\qquad\tilde{\sigma}\propto T^{-2},\qquad n=\text{ const}. \tag{3}\] This proposal resonates with the possible existence of a quantum critical point in the cuprates [48], the scaling analyses for Lifshitz-Hyperscaling critical points [49; 50] and the idea of a Planckian bound for charge diffusivity proposed by Hartnoll [51]. Unfortunately, this last conjecture has been proven not to be universal for the case of charge diffusivity [52], and it cannot therefore be the origin of the scaling in Eq.(3). The possibility of relating a bound on Goldstone diffusivity to the linear in \(T\) electric resistivity has also been discussed in the context of charge density waves where the semi-local IR fixed point still controls the scaling of \(\sigma_{0}\)[53]. As a remark, notice that the possible solution presented in Eq.(3) is in tension with other suggestions and observations that attribute the linear in \(T\) resistivity to a Drude-like term [25; 54]. Those different scenarios are equivalent to assuming that \(\tilde{\sigma}\propto T^{-1}\) and will incur in the problem that also \(\cot\Theta_{H}\propto T\), contradicting the experimental results. Notice also that trying to explain the linear in \(T\) resistivity with simple momentum relaxation physics would inevitably face the problem of explaining the universality of the coefficient \(\alpha\) in \(\rho_{xx}\propto\alpha T\), which is highly insensitive to the details of the material [55] and even to the increase of disorder induced by irradiation [56]. As we will discuss, the same problem is likely to arise in the proposals which use charge density wave dynamics to this end [57; 58], since the momentum relaxation term and the CDW contributions appear always in the same combination (as noted explicitly in [59]), not giving therefore any extra freedom in the game. Back to our focus, let us consider the holographic Gubser-Rocha model [29], which falls into the class I of the classification of the EMD models [31]. In this model, the linear in \(T\) resistivity comes from both terms in the DC conductivity, \(\sigma_{0}\) and \(\tilde{\sigma}\), scaling as \(\sim T^{-1}\). Given the previous arguments, this seems already a dead end since, according to Eq.(2), it would imply a Hall angle scaling exactly as the longitudinal resistivity, not as in strange metals, if \(n\) is temperature independent. On top of that, it has been demonstrated, and often forgot, that the transport properties of strange metals cannot be achieved in homogeneous holographic EMD models without violating the null energy condition [21]. The Gubser-Rocha model is a specific example of the models discussed in [21]. Because of this evidence, it is surprising that many works still glorify the Gubser-Rocha model as a successful holographic model for strange metals. On the contrary, as we will explicitly show in this work, at least in its simplest formulation (_e.g._, linear Maxwell action and no explicit lattices), the holographic Gubser-Rocha model cannot work. The failure of the classical holographic EMD models in reproducing the transport anomalies of strange metals has been recognized in many works which tried to extend and generalize the EMD models for this purpose [22; 23; 28; 60; 18; 19; 20; 61]. Unfortunately, we are not aware of any holographic model which is able to reproduce the scaling of the electric resistivity and Hall angle in the cuprates in a large range of temperatures without resorting to the unphysical probe limit, to fine-tuning or without violating the null energy condition in the gravitational bulk. Despite the message that we want to deliver is rather simple, and maybe not so new, we believe that, especially for the condensed matter community facing the increasing usage of holographic models, it is important to clarify this point. Finally, and differently from other nihilist discussions in the literature [36; 37; 38; 39; 40], we would like to transmit a positive and constructive message and in particular convey the idea that the failure of the simple Gubser-Rocha model is not a failure of holography and its methods, that have been revealed to be useful and relevant in many directions, but rather an appeal for improvements and generalizations in order to achieve the most wanted strange metal holy grail. Two particular promising avenues in this direction are the modification of the Maxwell term into non-linear extensions [62], _e.g._ DBI [20; 18; 60], the consideration of more general and complicated mechanisms for momentum relaxation [63; 64], and the introduction of a bona fide periodic lattice [28] into the EMD models. More work needs to be done. The manuscript is organized as follows. In Section I, we briefly present the holographic Gubser-Rocha model; in Section II, we explicitly prove that this holographic model is not able to recover the scaling of the resistivity and the Hall angle in strange metals; in Section III, we show that the same conclusion could have been obtained just from a scaling analysis of the IR fixed point or from hydrodynamics; and finally, in Section IV, we discuss several possibilities to overcome this failure and generalize the holographic model to capture the physics of strange metals and conclude with some general remarks. The holographic Gubser-Rocha model We study a \(3+1\) dimensional holographic Einstein-Maxwell-Dilaton-Axion model: \[\begin{split} S=& S_{1}+S_{2}=\int\,\mathrm{d}^{4}x \sqrt{-g}\left(\mathcal{L}_{1}+\mathcal{L}_{2}\right)\,,\\ &\mathcal{L}_{1}=R-\frac{1}{4}e^{\phi}F^{2}-\frac{3}{2}(\partial \phi)^{2}+6\cosh\phi\,,\\ &\mathcal{L}_{2}=-\frac{1}{2}\sum_{I=1}^{2}(\partial\psi_{I})^{2 }\,,\end{split} \tag{4}\] where we set the gravitational constant to \(16\pi G_{N}=1\) and the AdS radius is chosen to be unity. The first contribution, \(S_{1}\), corresponds to the original Gubser-Rocha model [29]. Here, we introduce a gauge field \(A_{\mu}\) with its field strength \(F=\mathrm{d}A\), and a scalar field \(\phi\) - the dilaton. The second term \(S_{2}\), written in terms of the axion fields \(\psi_{I}\), is introduced to break isotropically translational invariance so that the resistivity becomes finite [64]. For the background solutions, we consider the following ansatz \[\begin{split}\mathrm{d}s^{2}&=\frac{1}{z^{2}}\Bigg{[} -(1-z)U(z)\mathrm{d}t^{2}+\frac{\mathrm{d}z^{2}}{(1-z)U(z)}\\ &\hskip 142.26378pt+V(z)\mathrm{d}x^{2}+V(z)\mathrm{d}y^{2}\Bigg{]},\\ & A=(1-z)a(z)\mathrm{d}t-\frac{B}{2}y\,\mathrm{d}x+\frac{B}{2}x \,\mathrm{d}y\,,\\ &\phi=\frac{1}{2}\log[1+z\,\varphi(z)]\,,\quad\psi_{1}=k\,x\,, \quad\psi_{2}=k\,y\,,\end{split} \tag{5}\] where \(B\) is the external magnetic field and \(k\) controls the strength of momentum relaxation. Here \(U,V,a\) and \(\varphi\) are functions of the holographic bulk coordinate \(z\). The AdS boundary is located at \(z=0\) and the horizon is at \(z=1\): in order to ensure the asymptotic AdS boundary, all the functions in the metric are required to satisfy \(U(0)=V(0)=1\). Furthermore, one can also expand the gauge field \(A_{t}\) near the boundary and find the chemical potential \(\mu\) and the density \(n\): \(A_{t}\approx\mu-nz\). The other thermodynamic quantities such as temperature (\(T\)) and entropy density (\(s\)) are evaluated as horizon quantities using \(T=U(1)/(4\pi)\) and \(s=4\pi V(1)\), respectively. For \(B=0\), the model Eq. (4) allows an analytic solution which is given by \[\begin{split} U(z)&=\frac{1+(1+3Q)z+z^{2}(1+3Q(1+Q )-\frac{1}{2}k^{2})}{(1+Qz)^{3/2}}\,,\\ V(z)&=(1+Qz)^{3/2}\,,\\ a(z)&=\frac{\sqrt{3Q(1+Q)\left(1-\frac{k^{2}}{2(1+Q )^{2}}\right)}}{1+Qz}\,,\quad\varphi(z)=Q\,.\end{split} \tag{6}\] However, in the case of a finite magnetic field, we may need to employ numerical methods and construct the solutions numerically. In this work, we fix the chemical potential and express all physical quantities in terms of the three dimensionless combinations \(T/\mu,k/\mu,B/\mu^{2}\). ## II Explicit proof of the absence of strange metal phenomenology In order to proceed with the study of the transport coefficients, we employ the results of [46] (see also [65; 66]) and express the DC conductivities in terms of horizon data. This analysis yields \[\begin{split}\sigma_{xx}&=\frac{Vk^{2}\left(n^{2}+ B^{2}e^{2\phi}+Vk^{2}e^{\phi}\right)}{B^{2}n^{2}+\left(B^{2}e^{\phi}+Vk^{2} \right)^{2}}\Bigg{|}_{z=1}\,,\\ \sigma_{xy}&=\frac{Bn\left(n^{2}+B^{2}e^{2\phi}+2Vk ^{2}e^{\phi}\right)}{B^{2}n^{2}+\left(B^{2}e^{\phi}+Vk^{2}\right)^{2}}\Bigg{|} _{z=1}\,.\end{split} \tag{7}\] From these expressions, we can define the longitudinal electric resistivity: \[\rho_{xx}=\frac{\sigma_{xx}}{\sigma_{xx}^{2}+\sigma_{xy}^{2}}, \tag{8}\] and the Hall angle as: \[\cot\Theta_{H}=\frac{\sigma_{xx}}{\sigma_{xy}}. \tag{9}\] Thus, plugging our numerical solutions Eq.(5) into Eq.(7), we can investigate the temperature dependence of the resistivity and the Hall angle as a function of \(k/\mu\) and \(B/\mu^{2}\). Fig. 1 shows the numerical results for the temperature dependence of the electric resistivity and the Hall angle at \(B/\mu^{2}=1/10\), and various momentum relaxation rates. The momentum relaxation rates are denoted by colors: \(k/\mu=(\mathbf{1},5,\mathbf{10})\). At low temperature, both the resistivity and the Hall angle exhibit linear in \(T\) behavior. As the momentum relaxation increases, the low-temperature behavior extends to higher-values of the temperature. In other words, the properties of the IR fixed point extend to higher energy as already observed in [26; 27]. This is reminiscent of the concept of quantum critical region, in which the properties of the quantum critical point extends to higher temperature. In Fig. 2, a similar behavior can be observed for the entropy density \(s\) and the charge density \(n\). As expected, and emphasized in [25], the Gubser-Rocha model exhibits a linear in temperature heat capacity which extends more and more to high energy by increasing the momentum relaxation rate \(k\). We notice that a linear in \(T\) capacity is not a peculiar property of strange metals but rather a common feature of all ordinary metals, in which it corresponds to the electronic contribution - the Sommerfeld formula [67]. Finally, the charge density \(n\) is temperature independent at small temperature, as reported for example in [28]. As we will see in the next section, this is fully consistent with the hydrodynamic theory and it is morally the reason why the resistivity and the Hall angle scale both linearly with temperature. ## III Scaling analysis and hydrodynamics After proving explicitly that the Gubser-Rocha model is not able to capture the transport properties of strange metals, and in particular the temperature scaling of the electric resistivity and the Hall angle, here, we want to take a step back and re-analyze our findings from a more general perspective. As already emphasized in the introduction, the Gubser-Rocha model is a particular case of the so-called class I in holographic EMD models (see [31] for the details of this classification). Importantly, the analysis of all the classes of EMD models presented in [31] has been already performed analytically in [21]. There, it has been found that for the class I, to which the Gubser-Rocha model belongs, one has always; \[\rho_{xx}\propto T^{\gamma}\,,\quad\cot\Theta_{H}\propto T^{\gamma}\,, \tag{10}\] where \(\gamma\) depends on the details of the model and \(\gamma=1\) in the Gubser-Rocha model. This is obviously consistent with the proof of concept presented in the previous section. Moreover, it shows that EMD models in class I cannot capture the strange metal transport properties. Importantly, in [21] it has been shown that also all the other EMD models fail in this respect. We will discuss in the next section possible solutions to this. Now, we want to take a different perspective and consider the same problem from a hydrodynamic point of view. For simplicity, we will focus only on the case of the electric resistivity and the Hall angle. We will start by considering the most general hydrodynamic theory which has been formulated so far in this context. The details of the hydrodynamic model can be found in [59]. Here, we will simply repeat the fundamental assumptions and report the expressions relevant for our analysis. The main Figure 2: **Top**: The temperature dependence of the entropy density at various momentum relaxation rates. For each color \(k/\mu\): \((\text{\it Red},\text{\it Green},\text{\it Blue})=(1,5,10)\). \(B/\mu^{2}=1/10\). **Bottom**: The temperature dependence of the charge density at various momentum relaxation rates. For each color \(k/\mu\): \((\text{\it Red},\text{\it Green},\text{\it Blue})=(1,5,10)\). \(B/\mu^{2}=1/10\). Figure 1: **Top**: The temperature dependence of the electric resistivity at various momentum relaxation rates. For each color \(k/\mu\): \((\text{\it Red},\text{\it Green},\text{\it Blue})=(1,5,10)\). \(B/\mu^{2}=1/10\). **Bottom**: The temperature dependence of the Hall angle at various momentum relaxation rates. For each color \(k/\mu\): \((\text{\it Red},\text{\it Green},\text{\it Blue})=(1,5,10)\). \(B/\mu^{2}=1/10\). assumption is to start from a 2D system at finite temperature \(T\) and charge density \(n\) in which translational symmetry is broken pseudo-spontaneously in both directions \(x,y\) and an external magnetic field perpendicular to them is introduced. This hydrodynamic framework describes the low-energy effective dynamics of charge-density wave systems [58]. Interestingly, the form of the optical conductivity takes the same form as for the hydrodynamics of a charge fluid in a periodic external potential [68]. For our purpose, the origin of the hydrodynamic framework will not be essential. The only important fact is that the hydrodynamic theory predicts the following form for the electric resistivity and the Hall angle \[\sigma_{DC}=\sigma_{0}+\tilde{\sigma}, \tag{11}\] \[\rho_{xx}=\frac{1}{\sigma_{0}+\tilde{\sigma}}+\mathcal{O}\left(B ^{2}\right),\] (12) \[\cot\Theta_{H}=\frac{n}{B\tilde{\sigma}}\,\frac{1+\frac{\sigma_{0 }}{\tilde{\sigma}}}{1+2\frac{\sigma_{0}}{\tilde{\sigma}}}+\mathcal{O}\left(B\right) \tag{13}\] where \(\tilde{\sigma}\) is the combination of various contributions which depend on the particular framework considered (CDW or charged fluid in a periodic lattice for example). Given Eqs.(12)-(13), we would now like to ask agnostically how the scalings of the strange metals could be realized in this scenario. In order to make this analysis slightly more general, and to account also for materials in which the Hall angle is not exactly quadratic (see Table 1 in [41]), our goal will be to obtain the following situation: \[\rho_{xx}\propto T\,,\qquad\cot\Theta_{H}\propto T^{\beta}\,, \tag{14}\] with \(1<\beta\leq 2\) (where \(\beta=2\) corresponds to a perfectly quadratic Hall angle). In order to perform this analysis, we first assume a generic temperature scaling for all the quantities entering in the expressions for the resistivity and the Hall angle, \(\sigma_{0},\tilde{\sigma},n\), and \(B\) as \[\sigma_{0}\sim T^{-x_{1}}\,,\quad\tilde{\sigma}\sim T^{-x_{2}}\,,\quad n\sim T ^{x_{3}}\,,\quad B\sim T^{0}\,. \tag{15}\] At this point, we will only assume that \(x_{1},x_{2}>0\) in order to have a metallic system. Under the assumptions presented in Eq. (15), the temperature scaling of the Hall angle \(\beta\) is given by: \[\beta=x_{2}+x_{3}. \tag{16}\] Let us now list the various options available. **Option 1: constant density and linear in \(T\) from coherent conductivity.** The first option is to assume that the charge density is temperature independent and the linear in \(T\) resistivity comes from the dissipative part of the conductivity \(\tilde{\sigma}\). This option includes the proposal of [25] that linear in \(T\) comes from the momentum relaxation rate, but also the idea of [57] in which linear in \(T\) comes from the Goldstone diffusivity and charge density wave physics. This option corresponds to having \(x_{3}=0\) together with \(x_{2}=1\) and \(x_{1}\leq 1\). Given these values, the scaling of the Hall angle is equal to that in the electric resistivity since \(\beta=x_{3}+x_{2}=1\), as for the holographic Gubser Rocha model just analyzed. This is in clear tension with the experimental observations. This option, and consequently the ideas in [25, 57], are not a viable possibility to reproduce the phenomenology of strange metals. **Option 2: constant density and linear in \(T\) from incoherent physics.** A second option consists in keeping the charge density independent of temperature but deriving the linear in \(T\) resistivity from the incoherent part of the conductivity \(\sigma_{0}\). This corresponds to the original idea that linear in \(T\) resistivity is the result of a IR quantum critical point with possibly Lifshitz and Hyperscaling features. In this case, we have that \(x_{3}=0\) together with \(x_{1}=1\) and \(x_{2}\leq 1\). In consequence, the scaling of the Hall angle is always less than 1 since \(\beta=x_{3}+x_{2}\leq 1\). This is again in tension with the experimental observations. Notice that this problem was solved in [46] by assuming that the linear in \(T\) resistivity does not extend down to zero temperature but appears only above a certain energy scale \(W\). In that case, by assuming \(x_{1}=1\) and \(x_{2}=2\) one obtains a quadratic Hall angle together with a resistivity of the form: \[\rho_{xx}\propto\frac{T^{2}}{W+T}\,, \tag{17}\] which is linear for \(T\gg W\). Two comments are in order. First, it is not clear to us how much solid evidence there is for the linear in \(T\) scaling to be valid up to zero temperature. If that is the case, then \(W\to 0\), invalidating this option. On the other hand, the meaning and value of \(W\) is also unclear and probably material dependent. **Option 3: temperature-dependent charge density.** Given the negative results for options 1 and 2, it seems that without adding any further ingredients the only possibility is to assume the charge density to be temperature dependent. Interestingly, this scenario is confirmed by the experimental fits in Bi-2201 for example [59]. Unfortunately, as explicitly emphasized in [28], both Reissner-Nordstrom model and Gubser-Rocha model have \(x_{3}=0\) and a constant in temperature charge density at low temperature. By assuming \(x_{3}\neq 0\), we can have a solution. In particular, we can extend option 1 and have: \[\textcircled{\textcircled{\textcircled{\textcircled{\textcircled{\textcircled{ \textcircled{\textcircled{\ reported for Bi-2201 in [59]. Moreover, one could still save option 2, but only assuming that the linear in \(T\) scaling of the electric resistivity appears only above a certain energy scale and does not extend down to zero temperature. ## IV Conclusions & way out In this work, we have shown that the electric resistivity and the Hall angle in the holographic Gubser-Rocha model exhibit the same linear in temperature scaling at low temperature. This implies that the Gubser-Rocha model is not a good holographic setup for strange metals, which on the contrary display different temperature scalings for those transport properties. The failure of the Gubser-Rocha model should not come as a surprise since a previous extended analysis on the EMD holographic models [21] already proved the impossibility of reproducing the strange metal scalings in a much larger class of setups. Furthermore, using a simple but general enough hydrodynamic description for strange metals, the same conclusion can be reached. At this point, the relevant question is how to generalize and extend the holographic models to incorporate the strange metal phenomenology. A few options are available. The first option is to modify the Maxwell sector, responsible for the dynamics of the charge current. This can be done in two ways. One can substitute the linear Maxwell term \(F_{\mu\nu}F^{\mu\nu}\) with a more general non-linear extension as for example done in [62]. A natural candidate in this direction is the DBI action which has been considered in several instances [18; 19; 20; 22; 23; 61]. The advantage of this route is that it might possibly reproduce also the \(\sqrt{a_{1}T^{2}+a_{2}B^{2}}\) magneto-resistance structure [60] observed in certain compounds [69]. A second way is to couple directly the momentum relaxation sector with the Maxwell sector, as done in [70; 52; 71]. The second option is to modify the momentum relaxation sector beyond the original linear-axion model. This can be done for example by assuming a more general potential for the axion fields [63; 72]. From the gravity point of view, this would correspond to consider the most general Lorentz violating massive gravity theory [73] which allows for a larger freedom in the temperature dependence of the momentum relaxation rate (_cfr._, the linear axion case [74]). Also, it would be interesting to understand if a temperature dependent charge density, \(n(T)\), as reported from experimental data for example in Bi-2201 [59], could be the solution to this problem. Unfortunately, as emphasized in the previous section, the Gubser-Rocha model does not allow for this option. More complicated escapes involve the introduction of an explicit lattice [28] or of explicit disorder which can modify the nature of the IR fixed point (see for example [75]). One could also think about making the magnetic field relevant in the IR (see discussions in [21]). In that case, the IR structure of the theory will be strongly modified and the scaling analysis not applicable anymore, making a numerical analysis necessary. This scenario might be also useful to study in more detail the incoherent Hall conductivity recently discussed in [76]. Finally, a last possibility is to relax more symmetries by for example introducing explicit anisotropy or considering the recent proposal for sersatz Fermi liquids of [77]. A totally different view on the problem would be to assume that these scaling properties are not extended down to zero temperature but they do appear only above a certain, and possibly small, energy scale \(W\). This was the solution proposed in [46]. In all honesty, we do not know how much experimental evidence in favor of this possibility exist in the literature. On the other side, we can confidently say that this scenario cannot be realized in the Gubser-Rocha model. It could be realized in other classes of EMD models, as described in [21]. It would be fruitful to investigate this further. In summary, a holographic model for strange metals should be as simple as possible but not simpler than that. Unfortunately, the Gubser-Rocha model is simpler than that and still not the final answer to the strange metal puzzle. It would be interesting to understand how many of the alternative non-holographic options could provide a positive answer in this regard and see a similar critical analysis, as done in this work for the Gubser-Rocha model, applied to those scenarios. Curiously, Table 2 in [41] does not report this information. ###### Acknowledgements. We would like to thank Andrea Amoretti and Li Li for several discussions on the topic of this work and useful comments on a preliminary version of this manuscript. YA and MB acknowledge the support of the Shanghai Municipal Science and Technology Major Project (Grant No.2019SHZDZX01). MB acknowledges the sponsorship from the Yangyang Development Fund. H.-S Jeong acknowledges the support of the Spanish MINECO "Centro de Excelencia Severo Ochoa" Programme under grant SEV-2012-0249. This work is supported through the grants CEX2020-001007-S and PID2021-123017NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by ERDF A way of making Europe. KK was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2021R1A2C1006791) and GIST Research Institute(GRI) grant funded by the GIST in 2023. KK was also supported by Creation of the Quantum Information Science R&D Ecosystem (Grant No. 2022M3H3A106307411) through the National Research Foundation of Korea (NRF) funded by the Korean government (Ministry of Science and ICT).
2307.08209
Ada3D : Exploiting the Spatial Redundancy with Adaptive Inference for Efficient 3D Object Detection
Voxel-based methods have achieved state-of-the-art performance for 3D object detection in autonomous driving. However, their significant computational and memory costs pose a challenge for their application to resource-constrained vehicles. One reason for this high resource consumption is the presence of a large number of redundant background points in Lidar point clouds, resulting in spatial redundancy in both 3D voxel and dense BEV map representations. To address this issue, we propose an adaptive inference framework called Ada3D, which focuses on exploiting the input-level spatial redundancy. Ada3D adaptively filters the redundant input, guided by a lightweight importance predictor and the unique properties of the Lidar point cloud. Additionally, we utilize the BEV features' intrinsic sparsity by introducing the Sparsity Preserving Batch Normalization. With Ada3D, we achieve 40% reduction for 3D voxels and decrease the density of 2D BEV feature maps from 100% to 20% without sacrificing accuracy. Ada3D reduces the model computational and memory cost by 5x, and achieves 1.52x/1.45x end-to-end GPU latency and 1.5x/4.5x GPU peak memory optimization for the 3D and 2D backbone respectively.
Tianchen Zhao, Xuefei Ning, Ke Hong, Zhongyuan Qiu, Pu Lu, Yali Zhao, Linfeng Zhang, Lipu Zhou, Guohao Dai, Huazhong Yang, Yu Wang
2023-07-17T02:58:51Z
http://arxiv.org/abs/2307.08209v2
# Ada3D : Exploiting the Spatial Redundancy with ###### Abstract Voxel-based methods have achieved state-of-the-art performance for 3D object detection in autonomous driving. However, their significant computational and memory costs pose a challenge for their application to resource-constrained vehicles. One reason for this high resource consumption is the presence of a large number of redundant background points in Lidar point clouds, resulting in spatial redundancy in both 3D voxel and BEV map representations. To address this issue, we propose an adaptive inference framework called **Ada3D**, which focuses on reducing the **spatial redundancy** to compress the model's computational and memory cost. Ada3D adaptively filters the redundant input, guided by a lightweight importance predictor and the unique properties of the Lidar point cloud. Additionally, we maintain the BEV features' intrinsic sparsity by introducing the Sparsity Preserving Batch Normalization. With Ada3D, we achieve **40\(\%\)** reduction for 3D voxels and decrease the density of 2D BEV feature maps from 100% to **20\(\%\)** without sacrificing accuracy. Ada3D reduces the model computational and memory cost by **5\(\times\)**, and achieves **1.52\(\times\)**/ **1.45\(\times\)** end-to-end GPU latency and **1.5\(\times\)** / **4.5\(\times\)** GPU peak memory optimization for the 3D and 2D backbone respectively. ## 1 Introduction The perception of the 3D scene plays a vital role in autonomous driving systems. It's essential that the perception of the surrounding 3D scene is both quick and accurate, which places high demands on both performance and latency for perception methods. Voxel-based 3D deep learning methods convert the input point cloud into sparse voxels by quantizing them into regular grids, and achieve state-of-the-art performance [14]. However, current voxel-based methods struggle to meet the real-time demand on self-driving cars due to constrained resources [15]. As a result, it is crucial to improve the efficiency of voxel-based 3D perception methods (e.g., reduce the GPU latency and peak memory). There are two main factors contributing to the excessively long processing time for 3D perception methods. Firstly, the model size is excessive, and it contains time-consuming operations such as 3D sparse convolution [15]. Figure 1: Ada3D is an adaptive inference framework that exploits the spatial redundancy for both the 3D voxel and 2D BEV features. Secondly, the algorithm needs to process a large amount of input points (e.g., 30K for nuScenes). Prior researches focus on solving the former issue by compressing the model both at the operation-level [10, 16] and architecture-level [26, 34]. In this paper, we take a different approach and improve the model's efficiency from the data level. The typical pipeline of voxel-based 3D detector is displayed in Fig. 1, the 3D backbone extracts feature from the input point cloud. The 3D features are then projected to bird-eye-view (BEV) space along the z-axis and further processed by the 2D backbone with normal 2D convolutions. We discover that there exists spatial redundancy for both the 3D voxel and 2D BEV features. For 3D voxels, As shown in Fig. 1, a large number of points in the input point cloud represents the road plane and buildings, which are redundant "background" for 3D detection. We further validate the redundancy of the point cloud with quantitative results in Fig. 2. When we randomly drop 30% of the input points or 70% of the points excluding those within the ground-truth bounding box (the "foreground"), we only observe a subtle drop in performance. Existing 3D CNNs treat all input points equally, thus wasting a substantial amount of computation and memory on the less-informative background area. Regarding 2D BEV features, as shown in Fig. 1, only a small portion of (e.g., 5% for KITTI) pixels have projected feature values in the BEV space, while others are background pixels with zero value. However, current methods treat these sparse BEV features as dense and apply normal CNN to them. As can be observed in the lower part of Fig. 2, the feature map loses sparsity after the first batch normalization layer, which fails to utilize the sparse nature of the Lidar-projected BEV feature map. To compress the data's spatial redundancy, we propose an adaptive inference method **Ada3D**. We adopt adaptive inference to both the 3D and 2D backbone and selectively filter out redundant 3D voxels and 2D BEV features during inference. We employ a lightweight predictor to evaluate the importance of input features in the BEV space. The predictor score is combined with the density of the Lidar point cloud to determine which features to drop. In addition, we introduce a simple yet effective technique called sparsity-preserving batch normalization, which efficiently eliminates background pixels and preserves sparsity for 2D BEV features. Through adaptively skipping redundant features, Ada3D reduces the computational and memory costs of the model by 5\(\times\) and achieves 1.4 \(\times\) end-to-end speedup and 2.2\(\times\) GPU peak memory optimization on RTX3090 without compromising performance. The contributions of this paper could be summarized into three aspects, as follows: 1. We introduce the adaptive inference method Ada3D that leverages spatial redundancy for efficient 3D object detection. 2. We design a shared predictor to evaluate the importance of input features, and combine the predictor score with point cloud density as the criterion for dropping redundant features. 3. We propose sparsity-preserving batch normalization to maintain the sparsity for the 2D backbone. ## 2 Related Works ### Voxel-based 3D Detection Methods Voxel-based methods convert the point cloud into regular grids. SECOND [30] utilizes the 3D sparse convolution for feature extraction. CenterPoint [33] is a single-stage detector that leverages a keypoint detector to detect box centers. PV-RCNN [20] combines the point and voxel features and utilizes a two-staged framework for precise detection. While the voxel-based detectors achieve state-of-the-art results, their high computational and memory costs impede their application on self-driving cars. Ada3D aims to alleviate this issue through adaptive inference. ### Adaptive inference for 2D image In the field of 2D perception, adaptive inference methods reduce spatial redundancy for 2D images. Figurnov _et. al._[5] dynamically adjust depth for different regions. A series of methods [11, 29, 8] learn to adaptively skip redundant channel/pixels. GFNet [12] employs reinforcement learning to locate the discriminant regions. Ada3D applies Figure 2: **Empirical evidence of spatial redundancy in 3D and 2D data.** Upper: The KITTI Cars Moderate AP under different drop rates with random dropping and ground-truth excluded dropping. Lower: The sparsity of different layer’s 2D BEV features. adaptive inference to the 3D perception, and adaptively filters redundant 3D voxels and BEV features. ### Efficient 3D Detection Methods Some prior studies aim to enhance the efficiency of 3D detectors. SPVNAS [26] employs neural architecture search to search to find suitable depth and width for the 3D model. Lee _et. al._[10] propose a point-distribution pruning method on 3D convolution kernel. SPS-Conv [16] prunes the output mapping for sparse convolution based on the feature magnitude. RSN [23] designs network module to prune unnecessary part in range view lidar image. A series of fully sparse detectors: FSD [3], FSD++ [4], VoxelNeXT [1] design novel architectures that eliminate the dense BEV backbone. These methods optimize the efficiency of 3D detectors from the perspective of compressing model redundancy. Differently, Ada3D focuses on reducing spatial redundancy and could work on par with these methods. ## 3 Methods ### Voxel-based Detection with Adaptive Inference Figure 3 illustrates the overall framework of Ada3D. The 3D object detection task aims to predict 3D bounding boxes \(\mathcal{B}=\{b_{k}\}\) from the point cloud \(\mathcal{P}=\{(x,y,z,r)_{i}\}\). The voxel-based 3D detectors [20, 30, 33] quantize the point cloud into regular grids. Without loss of generality, we omit the batch dimension in the following equations. The voxelization generates sparse voxels \(\mathbf{X_{3d}}\in\mathbb{R}^{N\times C}\) of voxel numbers \(N\) and feature channels \(C\). The 3D voxel backbone \(\mathcal{F}_{3d}\) applies 3D sparse convolution [6] on the voxels to extract point cloud feature. We use the \(\mathbf{X}_{i,c}\) to represent the \(c\)-th channel of \(i\)-th voxel feature, and the \(c^{\prime}\) channel of the \(j\)-th output voxel can be described as: \[\mathbf{Y}_{j,c^{\prime}}=\sum_{k}\sum_{c}W_{k,c,c^{\prime}}X_{R_{k,j},k,c}, \tag{1}\] where \(R_{k,j}\) is the input index \(i\) given the output index \(j\) and kernel offset \(k\), \(W_{k,c,c^{\prime}}\) denotes the kernel offset \(k\)'s weight. The processed 3D feature \(\mathbf{\tilde{X}_{3D}}\) is then projected to the BEV plane through sum pooling along the z-axis to generate 2D features \(\mathbf{X_{2D}}\in\mathbb{R}^{C\times W\times H}\). We define \(\Gamma_{3D\to 2D}\) as the mapping from 3D voxels to 2D BEV pixels, and \(\Gamma_{2D\to 3D}\) describes the invert mapping. The 2D BEV backbone \(\mathbf{F_{2D}}\) is applied to further extract the 2D BEV feature. Finally, the detection head \(\mathbf{F_{head}}\) predicts the 3D bounding box. The adaptive inference is adopted in both the 3D and 2D backbone. For simplicity, we omit the channel dimension \(C\) for feature \(\mathbf{X}\) for the equations below, since all channels share the same spatial filtering pattern. We describe the layer indexes where the predictor is applied with the layer index \(\mathcal{I}_{3D}=\{l^{(1)}_{3D},...,l^{(n)}_{3D}\}\) and \(\mathcal{I}_{2D}=\{l^{(1)}_{2D},...,l^{(n)}_{2D}\}\). The adaptive inference for 3D backbone could be described as: \[\text{for }l^{(i)}_{3D}\in\mathcal{I}_{3D}:\mathbf{X}^{(l^{(i)}_{3D })}_{3D}=\mathbf{F}^{(l^{(i)}_{3D})}_{3D}(\mathbf{\tilde{X}^{(i)}_{3D}}_{3D} ),\text{ where }\] \[\mathbf{\tilde{X}^{(l^{(i)}_{3D}-1)}_{3D}}=\Gamma_{2D\to 3D}( \mathbf{F_{drop}}(\Gamma_{3D\to 2D}(\mathbf{X}^{(l^{(i)}_{3D}-1)}_{3D}),\mathbf{S})) \odot\mathbf{X}^{(l^{(i)}_{3D}-1)}_{3D},\] \[\mathbf{S}=\mathbf{F_{score}}(\Gamma_{3D\to 2D}(\mathbf{X}^{(l^{(i)}_{3D }-1)}_{3D})), \tag{2}\] where the \(\mathbf{S}\in\mathbb{R}^{W\times H}\) represents the importance score for BEV pixels, which is generated by \(\mathbf{F_{score}}\) that combines the Figure 3: **The overall framework of Ada3D.** Adaptive inference is conducted in both the 3D and 2D backbone. The spatial filtering module combines the predictor score and 3D point cloud’s density to drop the redundant parts. Furthermore, the SP-BN is introduced to omit the background pixels in 2D backbone and retain sparsity. predictor output and 3D point cloud's density. The \(\mathbf{F}_{\text{score}}\) takes the 2D BEV input projected from the input 3D voxel feature \(\mathbf{X}_{3D}^{(l_{3D}^{(i)}-1)}\in\mathbb{R}^{N}\). Given the drop ratio \(R_{\text{drop}}\), the spatial filtering process \(\mathbf{F}_{\text{drop}}\) drops the most redundant portion of features in the BEV space based on the importance score \(\mathbf{S}\). It generates the one-hot mask that indicates whether the given location should be kept or discarded. The mask is then broadcasted back to the voxel space through \(\Gamma_{2D\to 3D}\) and element-wisely multiplied with the original 3D voxel feature to generate subsampled 3D voxel feature \(\mathbf{\tilde{X}}_{3D}^{(l_{3D}^{(i)}-1)}\). Note that the equation describes the algorithmic simulation of spatial filtering, while in the actual GPU processing, features with zero values in \(\mathbf{\tilde{X}}_{3D}^{(l_{3D}^{(i)}-1)}\) are excluded to achieve actual hardware acceleration, i.e., their computation and storage are skipped. More details about \(\mathbf{F}_{\text{drop}}\) and \(\mathbf{F}_{\text{score}}\) will be discussed in Sec. 3.2 and Sec. 3.3. Similarly, the adaptive inference for the 2D BEV backbone is applied at \(\mathcal{I}_{2D}=\{l_{2D}^{(1)},...,l_{2D}^{(n)}\}\) layers with similar process described in Equ. 2 without the transformation \(\Gamma_{2D\to 3D},\Gamma_{3D\to 2D}\) between the voxel and the BEV space. ### Importance Predictor Design As discussed in Equ. 2 in Sec. 3.1, the \(\mathbf{F}_{score}\) is used for evaluating the input feature to identify its redundant parts. In Ada3D, we adopt a lightweight CNN as the spatial-wise importance predictor in BEV space to predict pixel-wise importance score from the input feature. **Inference.** The predictor inference for 3D voxel feature is described as: \[\mathbf{Y}_{\text{pred}}=\mathbf{F}_{\text{pred}}((\mathbf{X}_{\text{BEV}}); \Theta_{\text{pred}}), \tag{3}\] where \(\mathbf{F}_{\text{pred}}\) is the predictor with the parameter \(\Theta_{\text{pred}}\). The predictor's output is a single channel heatmap \(\mathbf{Y}_{\text{pred}}\in[0,1]^{W\times H}\). We choose to design the predictor in the BEV space instead of 3D space, as the perception is mainly conducted in the former. Intuitively, there exists less redundancy in the vertical space, and the efficiency improvement of compressing it is restricted. Also, estimating the importance of the whole 3D space is more challenging. In order to effectively and efficiently evaluate the importance, we design a lightweight predictor that is shared for different layers at both the 3D and 2D backbone. It consists of multiple group convolutions [35] with reduced parameters and computational complexity. Besides, the resolution of the predictor is selected as 1/8 of the original original BEV resolution. The computaional cost of the predictor's is less than 1% of the 2D backbone, thereby bringing negilible overhead. **Training.** Our oracle experiment in Fig. 2 shows that the performance only decreases slightly when dropping a notable amount of points outside the ground-truth bounding boxes. It reveals that the center of the bounding box is of high importance and the importance spreads to the local region. Therefore, following CenterPoint [33], we generate the ground-truth heatmap \(M_{\text{gt}}\) for the predictor by rendering a 2D Gaussian circle with a peak located at each bounding box center \((u,v)\), which could be formulated as follows: \[\mathbf{M}_{\text{gt}}=\sum_{b_{i}}\mathcal{G}((u,v)_{i},\sigma), \tag{4}\] where \(b_{i}\) is the ground-truth bounding box, and \(\mathcal{G}\) is the 2D gaussian function with radius \(\sigma\). The mean squared error (MSE) loss is adopted for predictor training. ### Density-guided Spatial Filtering The spatial filtering \(\mathbf{F}_{drop}\) in Equ. 2 in Sec. 3.1 describes the process of dropping the most redundant \(R_{\text{drop}}\) of the input features based on the importance criterion \(\mathbf{S}\). We combine the predictor score with the point cloud density to determine where to drop. The predictor score \(\mathbf{Y}_{pred}\) could effectively represent the relative importance of the input feature. However, due to the imaging principle of the Lidar sensor, the point cloud closer to the sensor has a larger density, and the remote part is sparse [38]. Due to the neighboring aggregation characteristic of the convolution, the predictor tends to output higher results for denser regions and could miss the remote objects (as shown in Fig. 9). To compensate for this bias, we propose density-guided spatial filtering that takes the unique properties of the Lidar point cloud into consideration. Specifically, we use the point cloud BEV density to adjust the predictor score. Therefore, the importance criterion \(\mathbf{S}\) is calculated as follows: \[\mathbf{S}=\mathbf{F}_{score}(\mathbf{X}_{BEV})=\mathbf{F}_{pred}(\mathbf{X}; \Theta_{pred})\cdot(\mathbf{D}_{g})^{\beta}, \tag{5}\] where \(D\) is the density heatmap pooled with kernel size of \(g\), and \(\beta\) is a hyperparameter that tunes the density distribution. The value of \(\beta\) is selected for each dataset with the goal of aligning the variance of the predictor score and density distribution on 10 sampled scenes. Example in Fig. 9 demonstrates that the density guidance enlarges the importance score for sparser regions and avoids mistakenly dropping the remote objects. ### Sparsity Preserving Batch Normalization As illustrated in Fig. 4, the 2D feature map projected from 3D voxel features in the BEV plane is sparse, only 5% and 20% features are nonzero for KITTI and nuScenes (the orange part). The rest of the background pixels (the blue ones) are initialized as zero. However, current methods do not utilize such sparsity, and the feature map loses sparsity after the first batch normalization layer (See Fig. 2). A large amount of computation and memory is wasted for the "background" features with limited information. A straightforward way to preserve sparsity in 2D BEV backbone is to apply batch normalization only for the nonzero elements. This approach is described as the "Nonzero BN" in Fig. 4. However, we empirically discover that replacing the "Normal BN" with "Nonzero BN" causes instability in training and moderate performance degradation when fine-tuning from dense pretrained models. We attribute this problem to the violation of the feature's relative relations. As shown in Fig. 4, the orange part with diagonal hatching has larger values than the background features (zero), but after the "Nonzero BN", their values are smaller than the background. The finetuning process needs to learn such distribution change thus causing instability. To address this problem, we propose a simple but effective modification and introduce the "Sparsity-preserving BN". In order to preserve the features' relative relations, the SP-BN leaves out the procedure of subtracting the feature's mean. Therefore, most parts of the nonzero elements remain positive and are distinguishable from the "background". The finetuning process only needs to learn the offset of the background "zero" elements. SP-BN (affine transform omitted) can be formulated as: \[\hat{x}_{i}^{(k)}=\frac{x_{i}^{(k)}}{\sqrt{(\sigma_{B}^{(k)})^{2}+\epsilon}}, \tag{6}\] where \(\sigma_{B}^{(k)}\) is the standard deviation. Experimental results show that when replacing the normal batch normalization with SP-BN, we could increase the sparsity of 2D BEV heatmap from \(0\%\) to \(50\%\) without loss of performance. ## 4 Experiments ### Implementation Details **KITTI and nuScenes and ONCE dataset** The KITTI dataset has 7481 training images and 7518 test images with corresponding point clouds. The object to detect have 3 classes: car, pedestrian, and cyclist, the boxes are classified into three subsets: "Easy", "Moderate" and "Hard" based on the levels of difficulty. The detection results are evaluated by average precision (AP) for each subset with IoU threshold 0.7 for cars and 0.5 for pedestrians and cyclists. The nuScenes dataset comprises 1000 driving sequences with annotations in the form of bounding boxes for 10 object classes. The commonly used metrics are the mean Average Precision (mAP) and the nuScenes detection score (NDS). NDS is the weighted average of mAP and other box characteristics, such as translation and orientation. The ONCE [18] dataset provides Lidar point clouds collected from downtown and suburban areas of multiple cities for 3D object detection. For supervised training, the training set contains 5k labelled scenes and the validation set contains 3K scenes. The commonly used mAP is adopted as the evaluation metric. **Adaptive inference design** We apply Ada3D to CenterPoint [33] model on both datasets. Due to the original CenterPoint paper does not conduct experiments on KITTI, we follow the author's released code [31] to construct the CenterPoint model on KITTI. We replace all the batch normalization layers in the 2D backbone with sparsity-preserving BN. We apply adaptive inference at the 2nd and 4th layer of the 3D and 2D backbone. The \(R_{drop}\) is a hyperparameter (e.g., 25%/50%) to control how many features to drop. The predictor's input resolution is set as the scene size divided by voxel size\(\times\)8. Max poolings and same padding upsample layers are adopted to align features of different sizes. The predictor consists of 3 convolution layers with channel size \([16,32,16]\) and group size of 8. The predictor is trained with adam optimizer with one-cycle learning rate scheduling [7] of learning rate 0.003 for 10 epochs. To recover the performance, we adopt an interleaved scheme that alternates between finetuning the model with adaptive inference for 5 (2 for nuScenes) epochs and training the predictor for 1 epoch and repeat this process for a total of 5 times. The \(\sigma\) for ground-truth heatmap is 5.0. The density guidance \(\beta\) is set as 0.5 and 0.7 for KITTI and nuScenes/ONCE. **Hardware experiments settings** We measure the latency and memory usage of convolution layers on an Nvidia RTX 3090 GPU using CUDA 11.1. We implemented sparse convolution operations using the gather-GEMM-scatter dataflow in TorchSparse v2.0.0 [25] and SpConv v.2.2.6 [6]. To measure latency, we synchronized the GPU and recorded the starting and ending times. To measure peak memory usage, we embedded the PyTorch Memory Figure 4: **Comparison of our proposed sparsity preserving BN with “Normal BN” and “Nonzero BN”.** Utils [19] into the engine frontend. ### Performance and Efficiency Comparison We first present the performance and resource consumption of Ada3D optimized model on KITTI and nuScenes. We estimate the memory cost of the model by summing the intermediate activation sizes following recent literature [24]. As could be seen from Table. 1, **the Ada3D optimized model achieves comparable performance with \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & _FLOPs_ & _Mem_ & mAP & \multicolumn{3}{c}{3D Car (IoU=0.7)} & \multicolumn{3}{c}{3D Ped. (IoU=0.5)} & \multicolumn{3}{c}{3D Cyc. (IoU=0.5)} \\ \cline{4-11} & _Opt._ & _Opt._ & _(Mod.)_ & _Easy_ & _Mod._ & _Hard_ & _Easy_ & _Mod._ & _Hard_ & _Easy_ & _Mod._ & _Hard_ \\ \hline \hline VoxelNet [39] & - & - & 49.05 & 77.47 & 65.11 & 57.73 & 39.48 & 33.69 & 31.50 & 61.22 & 48.36 & 44.37 \\ SECOND [30] & - & - & 57.43 & 84.65 & 75.96 & 68.71 & 45.31 & 35.52 & 33.14 & 75.83 & 60.82 & 53.67 \\ PointPillars [13] & - & - & 58.29 & 82.58 & 74.31 & 68.99 & 51.45 & 41.92 & 38.89 & 77.10 & 58.65 & 51.92 \\ SA-SSD [9] & - & - & - & 88.75 & 79.79 & 74.16 & - & - & - & - & - \\ TANet [17] & - & - & 59.90 & 84.39 & 75.94 & 68.82 & 53.72 & 44.34 & 40.49 & 75.70 & 59.44 & 52.53 \\ Part-\(A^{2}\)[22] & - & - & 61.78 & 87.81 & 78.49 & 73.51 & 53.10 & 43.35 & 40.06 & 79.17 & 63.52 & 56.93 \\ SPVCNN [26] & - & - & 61.16 & 87.80 & 78.40 & 74.80 & 49.20 & 41.40 & 38.40 & 80.10 & 63.70 & 56.20 \\ \hline PointRCNN [21] & - & - & 57.95 & 86.96 & 75.64 & 70.70 & 47.98 & 39.37 & 36.01 & 74.96 & 58.82 & 52.53 \\ 3DSSD [32] & - & - & 55.11 & 87.73 & 78.58 & 72.01 & 35.03 & 27.76 & 26.08 & 66.69 & 59.00 & 55.62 \\ IA-SSD [36] & - & - & 60.30 & 88.34 & 80.13 & 75.10 & 46.51 & 39.03 & 35.60 & 78.35 & 61.94 & 55.70 \\ \hline CenterPoint [33] & - & - & 59.96 & 88.21 & 79.80 & 76.51 & 46.83 & 38.97 & 36.78 & 76.32 & 61.11 & 53.62 \\ CenterPoint-Pillar [33] & - & - & 57.39 & 84.76 & 77.09 & 72.47 & 44.07 & 37.80 & 35.23 & 75.17 & 57.29 & 50.87 \\ CenterPoint (Ada3D-B) & **5.26\(\times\)** & **4.93\(\times\)** & **59.85** & 87.46 & 79.41 & 75.63 & 46.91 & 39.11 & 36.43 & 76.09 & 61.04 & 53.73 \\ CenterPoint (Ada3D-C) & **9.83\(\times\)** & **8.49\(\times\)** & **57.72** & 82.52 & 74.98 & 69.11 & 43.66 & 38.23 & 34.80 & 75.27 & 59.96 & 52.14 \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance comparison of Ada3D and other methods on KITTI \(test\) set.** The “Ada3D-B” and “Ada3D-C” are centerpoint models optimized by Ada3D with different drop rates. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & _FLOPs_ & _Mem._ & mAP & Vch. & Ped. & Cyc \\ \cline{2-6} PointRCNN [21] & - & - & 28.74 & 52.09 & 4.28 & 29.84 \\ PointPillar [13] & - & - & 44.34 & 68.57 & 17.63 & 46.81 \\ SECOND [30] & - & - & 51.89 & 71.16 & 26.44 & 58.04 \\ PVRCNN [20] & - & - & 53.55 & 77.77 & 23.50 & 59.37 \\ CenterPoint [33] & - & - & 63.99 & 75.69 & 49.80 & 66.48 \\ \hline CenterPoint & \multirow{2}{*}{2.32\(\times\)} & \multirow{2}{*}{2.61\(\times\)} & \multirow{2}{*}{62.68} & \multirow{2}{*}{73.43} & \multirow{2}{*}{49.09} & \multirow{2}{*}{65.53} \\ _(Adv2D)_ & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 3: **Performance comparison of Ada3D on ONCel set.** The results are taken from the OpenPCDet [27] implementation. Figure 5: **Ada3D’s Performance under different \(R_{\text{drop}}\).** Left: the relative mAP compared with baseline without Ada3D. Right: The FLOPs and memory compress rate. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & _FLOPs_ & _Mem._ & \multirow{2}{*}{mAP} & \multirow{2}{*}{Vch.} & \multirow{2}{*}{Ped.} & \multirow{2}{*}{Cyc} \\ & _Opt._ & _Opt._ & & & & \\ \hline PointRCNN [21] & - & - & 28.74 & 52.09 & 4.28 & 29.84 \\ PointPillar [13] & - & - & 44.34 & 68.57 & 17.63 & 46.81 \\ SECOND [30] & - & - & 51.89 & 71.16 & 26.44 & 58.04 \\ PVRCNN [20] & - & - & 53.55 & 77.77 & 23.50 & 59.37 \\ CenterPoint [33] & - & - & 63.99 & 75.69 & 49.80 & 66.48 \\ \hline CenterPoint & \multirow{2}{*}{2.32\(\times\)} & \multirow{2}{*}{2.61\(\times\)} & \multirow{2}{*}{62.68} & \multirow{2}{*}{73.43} & \multirow{2}{*}{49.09} & \multirow{2}{*}{65.53} \\ _(Adv2D)_ & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 3: **Performance comparison of Ada3D on ONCel set.** The results are taken from the OpenPCDet [27] implementation. other methods of different paradigms while compressing the model's computational and memory cost.** In Table. 4 and Fig. 5, we present Ada3D model with different drop rates. The model size could be effectively tuned with the drop rate \(R_{\text{drop}}\) to fit different resource budgets. "Ada3D-A" model only conducts adaptive inference for 2D backbone, it improves the model performance while reducing the dense rate of BEV features from 100% to 20%. "Ada3D-B" model reduces 40% 3D voxels and more than 80% 2D pixels and compresses the computaional and memory cost of the model by 5\(\times\) without performance degradation. "Ada3D-C" model reduces 60% 3D voxels and more than 90% 2D pixels with moderate performance loss, and reduces the model's computation and memory cost by an order of magnitude. Table. 2 presents the performance on nuScenes, Ada3D optimized CenterPoint model achieves 2\(\sim\)4\(\times\) FLOPs and memory savings with less than 1% performance drop. **Compared with methods that focus on reducing the model redundancy** ("CenterPoint-0.5W" and "SPSS-Conv"), **Ada3D achieves a larger compression rate with less performance drop**. ### Hardware Experiments We conduct hardware profiling of the Ada3D model using sparse convolution GPU libraries [25, 6]. Fig.6 illustrates the reduction of GPU latency and peak memory for each layer, while Fig.7 presents the end-to-end hardware specs for the 3D and 2D backbones, respectively. From the results, we draw the following conclusions. First, **by using SP-BN and spatial filtering, we retain high sparsity of the 2D feature map**, which brings significant reductions in peak memory and computation for the 2D backbone. For instance, the "conv2d_1" layer shows a 2.5\(\times\) latency and 8.5\(\times\) memory improvement, and the overall memory of the 2D backbone is reduced by 4.5\(\times\), 6.7\(\times\), and 1.9\(\times\) for each model. Second, **the end-to-end latency of the 3D backbone aligns with the drop rate.** The latency for the 3D backbone is 0.74\(\times\), 0.56\(\times\), and 0.77\(\times\) of the pre-optimized ones, which corresponds to the drop rate (25%, 50%, 25%). Third, **Ada3D is more effective for larger scenes and finer voxel sizes**, since there exists more potential for exploiting the spatial sparsity. For the nuScenes Ada3D model, only peak memory optimization is achieved, but the latency remains similar. This is because that due to resource constraints, a larger voxel size is often used at the cost of inferior performance [26]. The nuScenes BEV feature map is processed in a relatively low resolution (\([128,128]\)), thus the dense rates of the deeper layer's feature maps remain high, and using sparse convolution to process them takes longer than normal convolution. Future directions to improve this include further reducing redundancy or adopting more hardware acceleration techniques. Addtionaly, improving the efficiency could enable finer voxel size, which could in turn enhances performance and safety for safety-critic autonomous driving applicaiton. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{_Technique_} & \multicolumn{3}{c}{_FLOPs_} & \multicolumn{3}{c}{_Mem._} & mAP & Car Mod. & Ped. Mod. & Cyc. Mod. \\ & IP & DG & SP-BN & _3D_ & _2D_ & _3D_ & _2D_ & (Mod.) & (IoU=0.7) & (IoU=0.5) & (IoU=0.5) \\ \hline \hline CenterPoint & - & - & - & 1.00 & 1.00 & 1.00 & 1.00 & 66.1 & 79.4 (-) & 53.4 (-) & 65.5 (-) \\ CenterPoint (SP-BN) & - & - & ✓ & 1.00 & 0.49 & 1.00 & 0.45 & 66.0 & 79.1 (-0.3) & 53.3 (-0.1) & 65.6 (+0.1) \\ CenterPoint (Ada3D-A) & ✓ & ✓ & ✓ & 1.00 & 0.22 & 1.00 & 0.25 & 66.4 & 79.5 (+0.1) & 53.6 (+0.2) & 66.1 (+0.6) \\ CenterPoint (Ada3D-B) & ✓ & ✓ & ✓ & 0.66 & 0.18 & 0.68 & 0.17 & 66.1 & 79.1 (-0.3) & 54.0 (+0.6) & 65.3 (-0.3) \\ CenterPoint (Ada3D-B w.o. DG) & ✓ & - & ✓ & 0.64 & 0.18 & 0.66 & 0.16 & 65.1 & 78.8 (-0.6) & 51.6 (-1.8) & 64.9 (-0.6) \\ CenterPoint (Ada3D-C) & ✓ & ✓ & ✓ & 0.39 & 0.08 & 0.43 & 0.07 & 65.4 & 77.6 (-1.8) & 53.5 (+0.2) & 65.1 (-0.4) \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation studies and quantitive efficiency improvements of different Ada3D models on KITTI _val_. “IP” stands for “importance predictor”, “DG” for “density-guided spatial filtering”, “SP-BN” for “sparsity preserving batch normalization”. The “FLOPs” and “Mem.” calculates the normalized resource consumption of the optimized model.** Figure 6: **Layer-wise GPU latency and peak memory optimization for Ada3D-B. The green and blue bars stand for the latency and peak memory. The diagonal filled bars are Ada3D optimized costs.** ## 5 Analysis and Discussions ### Ablation Studies **Importance predictor accurately evaluates the input features' importance.** In Table. 4, comparing "Ada3D-A" and "SP-BN", the predictor increases the 2D feature map's sparsity from 50% to 80% upon SP-BN. As shown in Table. 6, Among the least important 25%/50% predicted, only 1.5%/7.8% features are mistakenly evaluated. Fig. 9 and Fig. 8 present the visualization of predictor heatmaps in both the BEV and voxel space. The predictor recognizes features within the box and avoids dropping them. **Density guidance avoids dropping the remote small-sized objects.** In Table. 4, comparing the "Ada3D-B" models with and without density guidance, simply using predictor scores causes notable performance degradation, especially for the pedestrian (-2.4%) with smaller sizes. Fig. 9 shows the example of density guidance correcting the drop of remote small objects. The predictor fails to correctly detect features for box-1,2,5 due to low density, and the density guidance compensates for such error. We also compare different importance criteria under different drop rates \(R_{drop}\) in Table. 6. The \(R_{inbox}\) denotes the percentage of dropped features that are in the ground-truth bounding box. Solely using the predictor score (IP) or density (DG) results in high \(R_{inbox}\) and performance degradation. **SP-BN preserves the sparsity without performance drop.** Table. 4 shows that introducing the SP-BN increases the sparsity of 2D BEV features from 0% to 50% with no performance drop. Using the "Noraml BN" sacrifices the sparsity. Additionally, adopting the "Nonzero BN" for the entire network results in notable performance loss when finetuning from pretrained dense backbone. We hypothesize that it is because of the "Nonzero-BN" needs to learn the entire distribution shift, while the "SP-BN" only needs to learn the offset of zero elements. ### Analysis of the Adaptive Inference **Ada3D introduces negligible overhead.** The extra cost that Ada3D introduces is the predictor inference. The predictor is conducted in a relatively low resolution and utilizes group convolution. The predictor's computational cost is less than 1% of the 2D BEV backbone, which is negligible. The training cost of Ada3D includes a brief training of the predictor and model finetuning, which accounts for less than 30% of the original model's training time. **Ada3D could improve the performance** Adaptive inference removes redundant input features and saves compu \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\(R_{\text{drop}}\)} & \multicolumn{3}{c}{\(R_{\text{inbox}}\)} & \multicolumn{3}{c}{KITTI Mod. AP} \\ \cline{2-7} & & & \(3D\) & \(2D\) & Car. & Ped. & Cyc. \\ \hline \hline - & - & - & - & - & 79.1 & 53.3 & 65.6 \\ - & ✓ & 25\% & 12.3\% & 9.4\% & 76.4 & 45.6 & 59.4 \\ ✓ & - & 25\% & 1.4\% & 1.1\% & 78.8 & 51.6 & 64.9 \\ ✓ & ✓ & 25\% & **0.8\%** & **0.0\%** & **79.1** & **54.0** & **65.2** \\ \hline - & ✓ & 50\% & 17.6\% & 20.3\% & 72.1 & 39.4 & 55.6 \\ ✓ & - & 50\% & 6.8\% & 8.8\% & 76.9 & 50.2 & 63.7 \\ ✓ & ✓ & 50\% & **5.2\%** & **7.5\%** & **77.6** & **53.5** & **65.1** \\ \hline \hline \end{tabular} \end{table} Table 6: **Comparison of adopting different importance criteria for input spatial filtering.** “IP” and “DG” stand for importance predictor and density guidance.The “\(R_{\text{drop}}\)” represents the drop ratio. “\(R_{\text{inbox}}\)” represents the percentage of dropped inputs that are within the ground truth bounding box (the lower the better). \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{BN Type} & \multirow{2}{*}{Sparse} & \multicolumn{3}{c}{KITTI Mod. AP} \\ \cline{3-5} & & Car. & Ped. & Cyc. \\ \hline Normal BN & - & 79.4 & 53.4 & 65.5 \\ Without BN & ✓ & 76.3 & 43.5 & 49.7 \\ Nonzero BN & ✓ & 74.5 & 39.4 & 47.3 \\ SP-BN & ✓ & **79.1** & **53.3** & **65.6** \\ \hline \hline \end{tabular} \end{table} Table 7: **Comparison of different BN types.** SP-BN maintains both performance and sparsity. Figure 7: **End-to-end GPU latency and peak memory optimization for Ada3D**. The green and blue bars stand for the latency and peak memory cost respectively. The filled/unfilled bars represent the 2D/3D backbone. tation and memory costs. However, adaptive inference does not necessarily have negative effects on performance. As shown in Table. 4 and Fig. 5, "Ada3D-A" improves the performance. We infer that the dropped redundant part is noisy and has negative effects on the training process. **Ada3D could be applied on the fully sparse 3D detectors.** Fully sparse detectors (e.g., FSD [3], FSD++ [4], VoxelNeXt [1]) eliminate the dense BEV feature with novel architecture designs that directly process the sparse BEV feature to generate boxes. These models can still benefit from Ada3D's spatial filtering, which further reduces redundant inputs. As shown in Tab. 2, when applying Ada3D for VoxelNeXT model, we further reduce 20% of redundant voxels with moderate performance degradation. **Ada3D could work on par with the model-level compression method and further improve efficiency.** In comparison with existing model-level compression methods, Ada3D takes the perspective of compressing the spatial redundancy. Therefore, Ada3D could be combined with existing model-level to further improve efficiency. We adapt the SPVNAS [26] searched model to the 3D backbone of Centerpoint, and employ Ada3D to further compress it. As seen in Table. 8, Ada3D could further reduce the computaional and memory cost of SPVNAS optimized model. ## 6 Limitations and Future Directions The 2D BEV backbone exhibits only moderate latency improvement at relatively low sparse rates (e.g., 30%\(\sim\)50%). Further exploration of higher sparsity and hardware designed tailored for utilizing the existing sparsity is necessary. Additionally, the Ada3d optimized model shows moderate performance decay with plain finetuning for recovery. To further enhance its performance, more advanced tuning techniques such as distillation could be employed. Additionally, we could extend the usage of Ada3D to more 3D detectors and other tasks. ## 7 Acknowledgement This work was supported by National Natural Science Foundation of China (No. U19B2019, 61832007), Tsinghua University Initiative Scientific Research Program, Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua EE Xilinx AI Research Fund, and Beijing Innovation Center for Future Chips. Figure 8: **Qualitative Results for 3D and 2D adaptive inference on KITTI and nuScenes dataset. Visualization of the predicted heatmap and dropped input in BEV and voxel space. The predictor identifies the voxels/pixels inside the bounding box and avoids dropping them.** Figure 10: **Visualization of the 3D point cloud colored by predicted importance score. The orange represents higher importance, and the blue stands for the lower.** Figure 9: **Example of the density guidance corrects the drop of smaller remote objects. Visualization of the predictor scores and density-guided scores.**